ABSTRACT

The invention relates to methods of context-based mobile device feature control and to mobile devices employing the same. One method comprises: determining, with a mobile device, one or more contexts corresponding to the mobile device; selecting, from a predetermined set of security protocols, a security protocol corresponding to the determined one or more contexts; and adjusting a permission setting for one or more functional features of the mobile device based upon the selected security protocol. One apparatus comprises: one or more functional features configured to input data, output data, transform data, or a combination thereof; and a controller configured to determine one or more contexts corresponding to the mobile computing device, to select, from a predetermined set of security protocols, a security protocol corresponding to the determined one or more contexts, and to adjust a permission setting for the one or more functional features based upon the selected security protocol.
CLAIMS

1. A method for context-based control of mobile device features, comprising:
determining, with the mobile device, one or more contexts corresponding to the mobile device, wherein the one or more contexts include a location of the mobile device, a wireless network to which the mobile device is connected, proximity of the mobile device to a security beacon, a local time of the mobile device, detected movement of the mobile device, proximity of the mobile device to a second mobile device, or a combination thereof;
selecting, from a set of predetermined security protocols, a security protocol corresponding to the determined one or more contexts; and
adjusting a permission setting for one or more functional features of the mobile device based on the selected security protocol,
wherein the permission setting includes restricting access to the one or more functional features by requiring authentication via a password, a biometric identifier, or a combination thereof, wherein the password and/or the biometric identifier correspond to an administrator of the security protocol, and
wherein the one or more functional features of the mobile device include a camera device, a microphone device, a sensor device, a display device, a data storage device, a network device, an input/output port device, an antenna, a software application, or a combination thereof; or wherein the one or more functional features of the mobile device include a secure data storage area, an encryption engine, a decryption engine, or a combination thereof.

2. The method of claim 1, wherein the security protocol comprises a mobile device management profile managed by the administrator on the mobile device.

3. The method of claim 1, wherein adjusting the permission setting further comprises disabling the one or more functional features or enabling the one or more functional features.

4.
The method of claim 1, wherein the mobile device comprises a smartphone, a tablet computer, a laptop computer, an Internet of Things (IoT) device, a wearable computing device, or an in-vehicle computer.

5. A mobile computing device comprising:
one or more functional features configured to input data, output data, transform data, or a combination thereof; and
a controller configured to:
determine one or more contexts corresponding to the mobile computing device, wherein the one or more contexts include a location of the mobile computing device, a wireless network to which the mobile computing device is connected, proximity to a security beacon, a local time of the mobile computing device, detected movement of the mobile computing device, proximity of the mobile computing device to a second mobile device, or a combination thereof;
select, from a set of predetermined security protocols, a security protocol corresponding to the determined one or more contexts; and
adjust a permission setting for the one or more functional features based on the selected security protocol,
wherein the permission setting includes restricting access to the one or more functional features by requiring authentication via a password, a biometric identifier, or a combination thereof, wherein the password and/or the biometric identifier correspond to an administrator of the security protocol, and
wherein the one or more functional features of the mobile computing device comprise a camera device, a microphone device, a sensor device, a display device, a data storage device, a network device, an input/output port device, an antenna, a software application, or a combination thereof; or wherein the one or more functional features of the mobile computing device include a secure data storage area, an encryption engine, a decryption engine, or a combination thereof.

6.
The mobile computing device of claim 5, wherein the security protocol comprises a mobile device management profile managed by the administrator on the mobile computing device.

7. The mobile computing device of claim 5, wherein adjusting the permission setting further comprises disabling the one or more functional features or enabling the one or more functional features.

8. The mobile computing device of claim 5, wherein the mobile computing device comprises a smartphone, a tablet computer, or a laptop computer.

9. A method of managing access to information at a secure location, comprising:
installing a mobile device management profile on a mobile computing device;
determining, with the mobile computing device, a spatial relationship between the mobile computing device and the secure location;
selecting, from a predetermined set of security protocols of the mobile device management profile, a security protocol corresponding to the determined spatial relationship; and
restricting access to one or more data input devices of the mobile computing device based on the selected security protocol,
wherein restricting access to the one or more data input devices includes requiring authentication via a password, a biometric identifier, or a combination thereof, wherein the password and/or the biometric identifier correspond to an administrator of the security protocol.

10. The method of claim 9, wherein the one or more data input devices comprise camera devices, microphone devices, sensor devices, data storage devices, network devices, input/output port devices, antennas, or combinations thereof.
Method of Context-Based Control of Mobile Device Features, Mobile Device Using the Same, and Method of Managing Access to Information at a Secure Location

Cross-Reference to Related Applications
This application claims the benefit of U.S. Provisional Patent Application No. 62/955,687, filed December 31, 2019, the entire contents of which are incorporated herein by reference.

Technical Field
The present disclosure relates generally to mobile devices, and more particularly to methods of context-based control of mobile device features and mobile devices employing the same.

Background
Mobile devices (e.g., smartphones, tablets, laptops, and other mobile computing devices) are widely used in environments in which secure access policies related to sensitive information are implemented. Some such secure locations require surrender of the mobile device prior to entry to prevent unauthorized recording (e.g., using cameras, microphones, or other sensors) or copying (e.g., using portable storage devices or local network access). These policies, while effective, may be overly restrictive for secure locations in which access to certain functions of the mobile device (e.g., phone calls, note-taking, etc.) may still be required.
Accordingly, there is a need for improved methods and systems that provide finer control over the features of mobile devices in order to provide information security at secure locations.

Summary
In one aspect, the present disclosure provides a method comprising: determining, with a mobile device, one or more contexts corresponding to the mobile device; selecting, from a set of predetermined security protocols, a security protocol corresponding to the determined one or more contexts; and adjusting permission settings for one or more functional features of the mobile device based on the selected security protocol.

In another aspect, the present disclosure provides a mobile computing device comprising: one or more functional features configured to input data, output data, transform data, or a combination thereof; and a controller configured to: determine one or more contexts corresponding to the mobile computing device; select, from a set of predetermined security protocols, a security protocol corresponding to the determined one or more contexts; and adjust permission settings for the one or more functional features based on the selected security protocol.

In another aspect, the present disclosure provides a method of managing access to information at a secure location, comprising: installing a mobile device management profile on a mobile computing device; determining, with the mobile computing device, a spatial relationship between the mobile computing device and the secure location; selecting, from a set of predetermined security protocols of the mobile device management profile, a security protocol corresponding to the determined spatial relationship; and restricting access to one or more data input devices of the mobile computing device based on the selected security protocol.

Brief Description of the Drawings
FIG.
1 is a block diagram schematically illustrating a mobile computing device according to an embodiment of the present disclosure.

FIG. 2 is a block diagram schematically illustrating a network environment in which some embodiments of the present disclosure may operate.

FIG. 3 is a block diagram schematically illustrating components of a mobile computing device that may be used to implement embodiments of the present disclosure.

FIGS. 4 and 5 are flowcharts illustrating methods of context-based mobile device feature control in accordance with embodiments of the present technology.

Detailed Description
As noted above, mobile devices may contain various functional features (e.g., device hardware, applications, application features, etc.) that raise varying degrees of concern from an information security perspective. Rather than restricting access to the entire mobile device in a secure location (e.g., by confiscating it), embodiments of the present disclosure provide feature-level enforcement of permission settings based on various contexts (e.g., device location, network connectivity, proximity to security beacons, local time, combinations thereof, etc.). By enforcing permissions at the feature level (e.g., enabling, disabling, or requiring authentication), information security concerns can be mitigated (e.g., by restricting access to features related to data capture and data sharing) while still permitting access to device features that pose little or no threat to information security and/or that may require persistent access (e.g., phone, health monitoring, note-taking, personal media, etc.).

In this regard, several embodiments of the present technology provide methods and systems for context-based control of mobile device features. In one embodiment, a method includes: determining, with a mobile device, one or more contexts corresponding to the mobile device; selecting, from a set of predetermined security protocols, a security protocol corresponding to the determined one or more contexts; and adjusting permission settings for one or more functional features of the mobile device based on the selected security protocol.

For example, in one embodiment, a corporate facility with a secure research and development environment may wish to prevent photography and file copying by mobile devices in the secure environment. By configuring a security protocol to correspond to the secure environment (e.g., via GPS geofencing, WiFi network connectivity, cell tower triangulation, proximity to a security beacon, etc.), permissions can be denied for the mobile device's camera hardware and for application features corresponding to the mass storage device. The remaining features of the mobile device that are determined not to pose an information security threat, such as speaker and microphone access (e.g., though not for applications that might use them for unauthorized recording), telephony applications, etc., can be retained without limitation.

However, the relevant contexts corresponding to different security protocols are not limited to locations, as other contexts may also be relevant. For example, in some cases, a combination of both location and time can define a context (e.g., corresponding to the duration of a secure meeting in an unsecured environment). Other contextual information (e.g., the mobile device's connection to a particular network) may also be relevant to selecting a security protocol for use in a particular environment. Moreover, the relevant context need not involve location at all (e.g., when participating in a remotely attended meeting, it might be necessary for security reasons to disable audio recording).

According to one aspect of the present disclosure, an administrator of a secure environment can install and manage security protocols on a mobile device (with the permission of its owner/user).
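The geofencing example above can be sketched in code. This is a minimal illustration only: the class names, feature names, coordinates, and radius below are invented for the sketch and are not details from the disclosure; a real implementation would also consult network connectivity, beacons, time, and so on.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))


@dataclass(frozen=True)
class Context:
    """Simplified device context: location only, for this sketch."""
    lat: float
    lon: float


@dataclass(frozen=True)
class Geofence:
    lat: float
    lon: float
    radius_m: float

    def contains(self, ctx):
        return haversine_m(ctx.lat, ctx.lon, self.lat, self.lon) <= self.radius_m


# Each security protocol maps functional features to a permission setting.
SECURE_LAB = {"camera": "disabled", "mass_storage": "disabled", "phone": "enabled"}
DEFAULT = {"camera": "enabled", "mass_storage": "enabled", "phone": "enabled"}


def select_protocol(ctx, geofenced_protocols, default):
    """Return the protocol of the first geofence containing the device context."""
    for fence, protocol in geofenced_protocols:
        if fence.contains(ctx):
            return protocol
    return default
```

A device roughly 100 m from the (hypothetical) secure R&D site would fall inside a 200 m fence and receive the restrictive protocol; outside the fence, the default protocol applies.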
For example, a mobile device management (MDM) profile may be installed and configured to enforce the permission settings corresponding to a security protocol, as will be readily understood by those skilled in the art.

For example, FIG. 1 is a block diagram schematically illustrating a mobile device on which some embodiments of the disclosed technology may operate. The mobile device 100 may include one or more input devices 120 that provide input to the processor 110 (e.g., a CPU, GPU, APU, etc.) to inform actions thereof. These actions may be mediated by a hardware controller that interprets the signals received from the input device and communicates the information to the processor 110 using a communication protocol. Input devices 120 include, for example, a mouse, a keyboard, a touchscreen, an infrared sensor, a biometric sensor, a touchpad, a wearable input device, a camera- or image-based input device, a microphone, or other user input devices.

Processor 110 may be a single processing unit or multiple processing units in a device, or distributed across multiple devices. Processor 110 may be coupled to other hardware devices, for example, by using a bus such as a PCI bus or a SCSI bus. Processor 110 may communicate with a hardware controller for a device such as display 130. Display 130 can be used to display text and graphics. In some implementations, the display 130 provides graphical and textual visual feedback to the user. In some implementations, such as when the input device is a touchscreen or is equipped with an eye-direction monitoring system, the display 130 includes the input device as part of the display. In some implementations, the display is separate from the input device. Examples of display devices are: LCD display screens, LED display screens, OLED display screens, and projected, holographic, or augmented reality displays (such as heads-up display devices or head-mounted devices), and the like.
Other I/O devices 140 may also be coupled to the processor, such as a network card, video card, audio card, USB, FireWire or other external device, camera, printer, speakers, CD-ROM drive, DVD drive, disk drive, or Blu-ray device.

In some implementations, the mobile device 100 also includes a communication device capable of communicating with a network node via a wireless or wire-based connection. The communication device may communicate with another device or a server over a network using, for example, TCP/IP protocols. The mobile device 100 may utilize the communication device to distribute operations among multiple network devices.

Processor 110 may access memory 150, which may reside in a device or be distributed across multiple devices. A memory includes one or more of a variety of hardware devices for volatile and nonvolatile storage, and may include both read-only and writable memory. For example, a memory may include random access memory (RAM), various caches, CPU registers, read-only memory (ROM), and writable nonvolatile memory such as flash memory, hard drives, floppy disks, CDs, DVDs, magnetic storage devices, tape drives, and so forth. A memory is not a propagating signal divorced from the underlying hardware; memory is therefore non-transitory. Memory 150 may include program memory 160 that stores programs and software, such as an operating system 162, a context-based feature control system 164, and other application programs 166. Memory 150 may also include data storage 170, which may hold, for example, security protocols and permission settings, keys used to verify credentials and biometrics, mappings of permission settings to the hardware devices, applications, and/or application features to be enabled, disabled, or access-restricted, configuration data, settings, and user options or preferences, any of which may be provided to program memory 160 or any element of the mobile device 100.

Some embodiments are operable with various other computing system environments or configurations.
Examples of computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, personal computers, server computers, handheld or laptop devices, cellular telephones, wearable electronic devices, game consoles, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked PCs, minicomputers, mainframe computers, Internet of Things (IoT) devices, edge computing devices, distributed computing environments, and the like.

FIG. 2 is a block diagram illustrating an overview of an environment 200 in which some embodiments of the disclosed technology may operate. Environment 200 may include one or more client computing devices 205A-D, examples of which may include mobile device 100. Client computing devices 205 may operate in a networked environment using logical connections through network 230 to one or more remote computers (e.g., server computing devices). In some implementations, the context-based feature control system 164 may receive the permission settings in a security protocol provided, for example, by an employer or other device administrator over the network 230. Additionally, in some cases, certain authentication procedures set in the permission settings of the context-based feature control system 164 may specify that certificate verification with a third party over the network 230 is required.

In some implementations, server 210 may be an edge server that receives client requests and coordinates fulfillment of those requests through other servers, such as servers 220A-C. Server computing devices 210 and 220 may comprise computing systems such as mobile device 100. Although each server computing device 210 and 220 is shown logically as a single server, each may be a distributed computing environment encompassing multiple computing devices located at the same physical location or at geographically disparate physical locations.
In some implementations, each server 220 corresponds to a group of servers.

Client computing device 205 and server computing devices 210 and 220 may each act as a server or a client to other server/client devices. Server 210 may be connected to database 215. Servers 220A-C may each be connected to a corresponding database 225A-C. As noted above, each server 220 may correspond to a group of servers, and each of these servers may share a database or may have its own database. Databases 215 and 225 may warehouse (e.g., store) information. Although databases 215 and 225 are logically shown as single units, each of databases 215 and 225 may be a distributed computing environment encompassing multiple computing devices, may be located within its respective server, or may be located at the same or at geographically disparate physical locations.

Network 230 may be a local area network (LAN), a wide area network (WAN), or any other wired or wireless network using any of a variety of networking protocols (e.g., 802.11, cellular, Bluetooth, peer-to-peer, etc.). Network 230 may be the Internet or some other public or private network. Client computing devices 205 may connect to network 230 through a network interface (e.g., through wired or wireless communication). Although the connections between server 210 and servers 220 are shown as separate connections, these connections may be any kind of local area, wide area, wired, or wireless network, including network 230 or a separate public or private network.

FIG. 3 is a block diagram illustrating components 300 which, in some implementations, may be used in a system employing the disclosed technology. Components 300 include hardware 302, general software 320, and specialized components 340.
As noted above, a system implementing the disclosed technology may use a variety of hardware, including a processing unit 304 (e.g., a CPU, GPU, APU, etc.), working memory 306, storage memory 308 (local storage or an interface to a remote storage device, such as storage 215 or 225), and input and output devices 310. In various implementations, storage memory 308 may be one or more of: a local device, an interface to a remote storage device, or combinations thereof. For example, storage memory 308 may be a set of one or more hard drives (e.g., a redundant array of independent disks (RAID)) accessible through a system bus, or may be a cloud storage provider or other network storage accessible via one or more communication networks (e.g., a network-accessible storage (NAS) device, such as storage device 215 or storage provided through another server 220). Components 300 may be implemented in a client computing device, such as client computing device 205, or in a server computing device, such as server computing device 210 or 220.

General software 320 may include various applications, including an operating system 322, native programs 324, and a basic input output system (BIOS) 326. Specialized components 340 may be subcomponents of a general software application 320, such as native programs 324. Specialized components 340 may include context-based security profiles 344, a context monitor 346, a rights enforcement module 348, an application programming interface 350, and components that can be used for providing user interfaces, transferring data, and controlling the specialized components, such as interface 342.
In some implementations, components 300 may be in a computing system that is distributed across multiple computing devices, or may be an interface to a server-based application executing one or more of the specialized components 340.

Context-based security profiles 344 may be user-, administrator-, or application-provider-defined mappings between (a) contexts and (b) the device hardware, applications, or combinations thereof to be enabled, disabled, or restricted. Context-based security profiles 344 may also define which contexts correspond to which security profiles and which permission settings are used for each security profile. Context monitor 346 may identify the contexts mapped in the context-based security profiles 344. For example, the context monitor 346 may identify a change in location, a change in network connectivity, proximity to a security beacon, the device's local time, or a combination thereof. Rights enforcement module 348 may enforce the permissions defined for each security profile of the context-based security profiles 344. For example, the rights enforcement module 348 may disable applications, application features, or device hardware of the mobile device, may enforce authentication (e.g., by verifying a received password, biometric information, PIN, etc.) before allowing access thereto, or may determine that an authentication process is not required for some security profiles. Application interface 350 may enable, disable, or restrict (e.g., via authentication) one or more applications and/or application features according to the mappings defined in the context-based security profiles 344. The application interface 350 may be invoked by the context-based security profiles 344 following a context identified by the context monitor 346 and/or upon successful authentication by the rights enforcement module 348.
In some implementations, enabling and/or disabling of applications can be performed via the device's operating system, and enabling and/or disabling of application features can be performed via API calls to applications with those capabilities. Interface 342 may enable, disable, or restrict one or more device hardware features (e.g., through authentication) according to the mappings defined in the context-based security profiles 344.

Those skilled in the art will understand that the components shown in FIGS. 1-3 above, as well as the components in each of the flowcharts discussed below, may be varied in various ways. For example, the order of logic may be rearranged, sub-steps may be performed in parallel, illustrated logic may be omitted, additional logic may be included, and so forth. In some implementations, one or more of the components described above may perform one or more of the processes described below.

FIG. 4 is a flowchart 400 illustrating a method of context-based mobile device feature control in accordance with an embodiment of the present technology. Flowchart 400 may be an example of, or encompass aspects of, a method that a mobile device (e.g., mobile device 100 and/or components 300) may perform as described with reference to FIGS. 1-3.

The method includes determining, with the mobile device, one or more contexts corresponding to the mobile device (block 410). According to one aspect of the present technology, the determining feature of block 410 may be performed by a mobile device (e.g., mobile device 100 and/or components 300) in conjunction with the context-based feature control system 164 and/or the context monitor 346, as described with reference to FIGS. 1-3.

The method further includes selecting a security protocol corresponding to the determined one or more contexts from a set of predetermined security protocols (block 420).
According to one aspect of the present technology, the selecting feature of block 420 may be performed by a mobile device (e.g., mobile device 100 and/or components 300) in conjunction with the context-based feature control system 164 and/or the context-based security profiles 344, as described with reference to FIGS. 1-3.

The method further includes adjusting permission settings for one or more functional features of the mobile device based on the selected security protocol (block 430). According to one aspect of the present technology, the adjusting feature of block 430 may be performed by a mobile device (e.g., mobile device 100 and/or components 300) in conjunction with the context-based feature control system 164 and/or the rights enforcement module 348, as described with reference to FIGS. 1-3.

FIG. 5 is a flowchart 500 illustrating a method of managing access to information at a secure location in accordance with an embodiment of the present technology. Flowchart 500 may be an example of, or encompass aspects of, a method that a mobile device (e.g., mobile device 100 and/or components 300) may perform as described with reference to FIGS. 1-3.

The method includes installing a mobile device management profile on the mobile computing device (block 510). According to one aspect of the present technology, in some cases, the installing feature of block 510 may be performed by a mobile device (e.g., mobile device 100 and/or components 300) in conjunction with processor 110, as described with reference to FIGS. 1-3.

The method further includes determining, with the mobile computing device, a spatial relationship between the mobile computing device and the secure location (block 520). According to one aspect of the present technology, in some cases, the determining feature of block 520 may be performed by a mobile device (e.g., mobile device 100 and/or components 300) in conjunction with the context monitor 346, as described with reference to FIGS.
1-3.

The method further includes selecting a security protocol corresponding to the determined spatial relationship from a set of predetermined security protocols of the mobile device management profile (block 530). In accordance with one aspect of the present technology, in some cases, the selecting feature of block 530 may be performed by a mobile device (e.g., mobile device 100 and/or components 300) in conjunction with the context-based security profiles 344, as described with reference to FIGS. 1-3.

The method further includes restricting access to one or more data input devices of the mobile computing device based on the selected security protocol (block 540). In accordance with one aspect of the present technology, in some cases, the restricting feature of block 540 may be performed by a mobile device (e.g., mobile device 100 and/or components 300) in conjunction with the rights enforcement module 348, as described with reference to FIGS. 1-3.

It should be noted that the methods described above describe possible implementations, that operations and steps may be rearranged or otherwise modified, and that other implementations are possible. Furthermore, features from two or more of the methods may be combined.

The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. Other examples and implementations are within the scope of the disclosure and the appended claims. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.

Also, as used herein (including in the claims), the use of "or" in a list of items (e.g., a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list, such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C).
Also, as used herein, the phrase "based on" should not be construed as a reference to a closed set of conditions. For example, an exemplary step described as "based on condition A" may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" should be interpreted in the same manner as the phrase "based at least in part on."From the foregoing it will be appreciated that, while specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without departing from the scope of the invention. Moreover, in the foregoing description, numerous specific details are discussed in order to provide a thorough and enabling description of embodiments of the present technology. One skilled in the relevant art will recognize, however, that the present disclosure may be practiced without one or more of these specific details. In other instances, well-known structures or operations commonly associated with memory systems and devices are not shown or described in detail to avoid obscuring other aspects of the technology. In general, it should be understood that various other devices, systems and methods, in addition to those specific embodiments disclosed herein, may be within the scope of the present technology.
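The secure-location method of FIG. 5 (blocks 520-540) could be sketched as follows. The distance thresholds, relationship labels, and profile contents are hypothetical choices made for this sketch; the disclosure does not prescribe any particular classification of the spatial relationship.

```python
def spatial_relationship(distance_m: float, inner_m: float = 50.0,
                         outer_m: float = 200.0) -> str:
    """Map a distance to the secure location onto a coarse relationship (block 520)."""
    if distance_m <= inner_m:
        return "inside"
    if distance_m <= outer_m:
        return "approaching"
    return "outside"


# Hypothetical MDM profile: one security protocol per spatial relationship,
# each mapping a data input device to a permission setting.
MDM_PROFILE = {
    "inside": {"camera": "disabled", "microphone": "authenticate"},
    "approaching": {"camera": "authenticate", "microphone": "enabled"},
    "outside": {"camera": "enabled", "microphone": "enabled"},
}


def protocol_for(distance_m: float, profile: dict = MDM_PROFILE) -> dict:
    """Select the protocol for the determined spatial relationship (block 530)."""
    return profile[spatial_relationship(distance_m)]
```

The returned protocol would then be handed to a rights enforcement step (block 540) to restrict the listed data input devices.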
Methods and devices utilizing a silicon-containing barrier layer are disclosed. A method of forming a barrier layer on a semiconductor device is disclosed. A semiconductor device is provided. A silicon-containing material is deposited on the semiconductor device. The silicon-containing material is processed in a reactive ambient. The barrier layer can be made primarily oxide, primarily nitride, or both, depending on the reactive ambient selected. A semiconductor device is also disclosed. The semiconductor device includes a substrate, a gate oxide, a silicon-containing barrier layer, and a gate electrode. The gate oxide is formed over the substrate. The silicon-containing barrier layer is formed over the gate oxide by causing silicon atoms of a precursor layer to react with a reactive agent. The gate electrode is formed over the silicon-containing barrier layer. Other embodiments utilizing a barrier layer are disclosed.
What is claimed is:

1. A device comprising: a substrate having at least one semiconductor layer; a semiconductor device fabricated proximate to the substrate; and a silicon-containing barrier layer containing no metal formed over at least a portion of the semiconductor device by subjecting silicon-containing material in a precursor layer formed over the portion of the semiconductor device to a reactive agent selected to react with silicon of the silicon-containing material.

2. The device of claim 1, wherein the reactive agent is selected from the group consisting of NH3, N2, O2, O3, and NO and the silicon-containing barrier layer comprises oxynitride.

3. A semiconductor device comprising: a substrate; a source formed in the substrate; a drain formed in the substrate; a gate oxide formed over the substrate; a silicon-containing barrier layer containing no metal vapor deposited over the gate oxide and processed in a reactive ambient; and a gate electrode formed over the silicon-containing barrier layer.

4. The semiconductor device of claim 3, wherein the silicon-containing barrier layer is processed for at least 60 seconds at a pressure of 450 Torr and at a temperature range of 700° C. to 900° C.

5. The semiconductor device of claim 3 further comprising: a second silicon-containing barrier layer vapor deposited over the gate electrode and processed in a reactive ambient.

6. The semiconductor device of claim 3, wherein the silicon-containing barrier layer is formed from hexamethyldisilazane.

7. The semiconductor device of claim 3, wherein the reactive ambient is a nitridizing agent and the barrier layer is primarily nitride.

8. The semiconductor device of claim 3, wherein the reactive ambient is an oxidizing agent and the barrier layer is primarily oxide.

9. A semiconductor device comprising: a substrate having at least one semiconductor layer; a metal layer formed over the substrate; and a silicon-containing barrier layer containing no metal formed over the metal layer by depositing a silicon-containing material over the metal layer and causing silicon atoms of the silicon-containing material to react with a reactant.

10. A semiconductor device comprising: a substrate having at least one semiconductor layer; a transistor structure formed proximate to the substrate, the transistor structure having: a source formed in the substrate; a drain formed in the substrate; and a gate oxide layer formed over the substrate substantially between the source and drain; and a primarily oxide silicon-containing barrier layer formed over the gate oxide layer by reacting silicon atoms of the silicon-containing barrier layer with a primarily oxidizing reactant.

11. A semiconductor device comprising: a substrate having at least one semiconductor layer; a transistor structure formed proximate to the substrate, the transistor structure having: a source formed in the substrate; a drain formed in the substrate; and a gate oxide layer formed over the substrate substantially between the source and drain; and an oxynitride silicon-containing barrier layer formed over the gate oxide layer by reacting silicon atoms of the silicon-containing barrier layer with an oxidizing and nitridizing reactant.
This application is related to commonly assigned U.S. patent application Ser. No. 09/653,096, METHOD FOR FORMING A DIELECTRIC LAYER TO INCREASE SEMICONDUCTOR DEVICE PERFORMANCE, filed Aug. 31, 2000, by Powell et al., and Ser. No. 09/653,298, METHOD FOR FORMING A DIELECTRIC LAYER AT A LOW TEMPERATURE, filed Aug. 31, 2000, by Mercaldi et al., the disclosures of which are incorporated herein by reference.

FIELD OF THE INVENTION

The present invention relates to the field of semiconductors and, more particularly, to an improved barrier layer for increasing semiconductor performance.

BACKGROUND OF THE INVENTION

There is a constant demand for semiconductor devices of a reduced size. The performance of semiconductor capacitors, transistors, electrode layers and the like in semiconductor devices becomes more critical as device size decreases. Accordingly, processes that result in increased device performance are critical to improved semiconductor device fabrication. For example, capacitor and transistor performance can be improved by limiting diffusion of oxygen to transistor active areas or capacitor electrodes. Barrier layers are generally used in circuitry and semiconductor devices to enhance performance by reducing diffusion, migration and reaction. Accordingly, there is a continuing need for improved barrier layer technology directed at improving semiconductor device performance.

SUMMARY OF THE INVENTION

This need is met by the present invention wherein a method of forming a barrier layer on a semiconductor device is disclosed. According to one embodiment of the present invention, a semiconductor device is provided. A silicon-containing material is deposited on the semiconductor device. The silicon-containing material is processed in a reactive ambient.

According to another embodiment of the present invention, a semiconductor device is disclosed. The semiconductor device includes a substrate, a gate oxide, a silicon-containing barrier layer and a gate electrode.
The gate oxide is formed over the substrate. The silicon-containing barrier layer is formed over the gate oxide. The gate electrode is formed over the silicon-containing barrier layer. Other methods and devices are disclosed.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

The following detailed description of the present invention can be best understood when read in conjunction with the accompanying drawings, where like structure is indicated with like reference numerals.

FIG. 1A illustrates a semiconductor device using a barrier layer according to one embodiment of the present invention.

FIG. 1B illustrates a transistor semiconductor device utilizing a barrier layer according to one embodiment of the present invention.

FIG. 2A is a flowchart of a method for fabricating a barrier layer according to another embodiment of the present invention.

FIG. 2B illustrates exemplary thickness measurements of the barrier layer using the method of FIG. 2A.

FIG. 3 illustrates capacitance characteristics of a semiconductor device utilizing a barrier layer according to another embodiment of the present invention.

FIG. 4 illustrates a barrier layer according to another embodiment of the present invention.

FIG. 5 is an illustration of a computer system for use with embodiments of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

FIG. 1A illustrates a semiconductor device 108 using a barrier layer 102 according to one embodiment of the present invention. The semiconductor device 108 is merely illustrated schematically in FIG. 1A and is typically fabricated proximate to a substrate 101. More specifically, the semiconductor device 108 may be formed in, on or over the substrate 101. For the purposes of defining and describing the present invention, it is noted that a semiconductor device 108 may comprise a transistor, capacitor, electrode, insulator or any of a variety of components commonly utilized in semiconductor structures.
The substrate 101 may comprise one or more semiconductor layers or semiconductor structures which may define portions of the semiconductor device 108. The barrier layer 102 is formed over the semiconductor device 108. Generally, the barrier layer 102 is formed by depositing one or more precursor materials from a silane or silazane source and converting the deposited materials into the barrier layer 102 by subsequent processing. The subsequent processing involves subjecting the deposited materials to a reactive agent, such as an oxidizing or nitridizing species, which will react with silicon in the deposited materials. The barrier layer 102 reduces or prevents diffusion or migration of dopants into and out of the semiconductor device 108, as well as reaction or oxidation of the materials forming the semiconductor device 108.

FIG. 1B illustrates a transistor semiconductor device 109 utilizing a barrier layer 102 according to another embodiment of the present invention. A source 105 is formed in a substrate 101. A drain 106 is formed in the substrate 101. A gate oxide layer 104 is formed over the substrate 101 from the source 105 to the drain 106. A barrier layer 102 is formed over the gate oxide layer 104. An electrode or gate electrode 103 is formed over the barrier layer 102. The source 105, the drain 106, the substrate 101, the gate oxide layer 104 and the gate electrode 103 may be provided in accordance with conventional techniques of semiconductor fabrication.

The barrier layer 102 is fabricated by vapor depositing one or more selected materials or precursors from a silicon source and subsequently processing those materials or precursors. The silicon source may be a silazane or a silane source such as hexamethyldisilazane (HMDS). Other silicon sources which may be used are tetramethyldisilazane, octamethylcyclotetrasilazane, hexamethylcyclotrisilazane, diethylaminotrimethylsilane or dimethylaminotrimethylsilane.
The selected material is processed in a reactive ambient to create a final desirable silicon-containing barrier layer. Reactive ambients include oxidizing or nitridizing species which will react with silicon to form the silicon-containing barrier layer. Some reactive ambients are NH3, N2, O2, O3, NO and the like. The resulting silicon-containing barrier layer is the barrier layer 102 and may comprise a layer that is primarily nitride, primarily oxide or an oxynitride, depending on the reactive ambient selected. The silicon-containing barrier layer contains no metal.

The barrier layer 102 prevents dopants, such as boron, in the gate electrode 103 from diffusing into the gate oxide layer 104, the source 105 and the drain 106. The barrier layer 102 also prevents reactions between the gate electrode 103 and the gate oxide layer 104, prevents migration of dopants from the gate electrode 103 to other areas of the semiconductor device, prevents oxidation of the gate electrode 103 and prevents the formation of silicides on the gate electrode.

FIG. 2A illustrates a method for fabricating a barrier layer according to one embodiment of the present invention. A wafer or substrate is provided at block 201. The wafer or substrate is cleaned using hydrofluoric acid (HF) at block 202. A silicon-containing material is vapor deposited onto the surface of the wafer at block 203 from a silicon source. The silicon-containing material is treated or processed using rapid thermal nitridation (RTN) in an NH3 ambient at block 204, resulting in creation of the barrier layer. The temperature, anneal time and processing pressure are selected to obtain desired barrier layer characteristics. A wet oxidation layer is formed over the barrier layer at block 205.

FIG. 2B illustrates thickness measurements of the barrier layer and wet oxidation layer created using the method of FIG. 2A under various processing conditions. In this figure, the wet oxidation layer has a thickness of 300 Å.
For this particular example, FIG. 2B illustrates that a suitable barrier layer may be formed at about 450 Torr and 850° C., over a processing time of 60 seconds, with minimal oxidation of the underlying silicon substrate. It is noted that the 850° C. processing temperature is lower than the processing temperature (typically 950° C.) used to create barrier layers using conventional methods. In addition, the 60 second processing time is shorter than the processing time used to create barrier layers using conventional methods (typically 45 minutes). However, the processing time can be longer without a detrimental effect if silane or silazane silicon sources are used, because they are self-limiting.

Generally, conventional barrier layers are processed using a temperature range of 700° C. to 1050° C., a processing time of 10 seconds to 60 minutes, and a processing pressure of 760 Torr, whereas the barrier layer of the present invention is typically processed using a temperature range of 500° C. to 900° C., a processing time of 30 seconds to 5 minutes, and a processing pressure of 450 Torr. It is contemplated that variations to these ranges may also result in suitable barrier layer formation.

Referring to FIGS. 1B and 3, FIG. 3 illustrates the capacitance characteristics of a semiconductor device 109 utilizing a barrier layer 102 according to the present invention. The capacitance characteristics of a device with a conventional barrier layer and an N+ PH3 doped polysilicon gate electrode are illustrated at 301. Line 302 illustrates the capacitance characteristics of a device with a conventional barrier layer and a BF2 doped polysilicon gate electrode. Line 303 shows the capacitance characteristics of a barrier layer 102 created by vapor depositing HMDS with an N+ PH3 doped polysilicon gate electrode 103.
Line 304 shows the capacitance characteristics of a device with a barrier layer 102 created by vapor depositing HMDS with a BF2 doped polysilicon gate electrode. Comparing the capacitance values of lines 301 and 302 with lines 303 and 304, it is noted that negative bias capacitance is enhanced by the present invention. The barrier layers used in lines 303 and 304 were processed using NH3 and O2.

In addition, line 302 shows how the conventional barrier layer suffers boron diffusion into the gate and active areas (note the shift in threshold voltage at 306). Line 307 shows that the measured work function associated with the vapor deposited HMDS barrier layers of lines 303 and 304 matches theoretical values.

FIG. 4 illustrates use of a barrier layer 402 according to another embodiment of the present invention. The barrier layer 402 is located between a dielectric 403 and an electrode 401. The barrier layer 402 is created by depositing a silicon-containing material (from silazane or silane type silicon sources). The layer is then post-processed in a reactive ambient. The dielectric 403 is of a material susceptible to oxygen migration, such as Ta2O5. The electrode is of a material such as P-Si, SiGe, a metal, or any other electrode material suitable for use in semiconductor based charge storage devices.

FIG. 5 is an illustration of a computer system 512 that can use and be used with embodiments of the present invention. As will be appreciated by those skilled in the art, the computer system 512 would include ROM 514, mass memory 516, peripheral devices 518, and I/O devices 520 in communication with a microprocessor 522 via a data bus 524 or another suitable data communication path. The mass memory 516 can include silicon-containing barrier layers in, for example, transistor structures or charge storage structures.
These devices can be fabricated in accordance with the various embodiments of the present invention.

For the purposes of describing and defining the present invention, formation of a material "on" a substrate or layer refers to formation in contact with a surface of the substrate or layer. Formation "over" a substrate or layer refers to formation either above or in contact with a surface of the substrate or layer.

As stated earlier, barrier layers fabricated using the present invention can be used for a variety of purposes. Some examples follow, but embodiments of the present invention are not limited to these. A barrier layer can be formed on top of metals to prevent oxidation of the metals. A barrier layer can be placed between metals and silicon-containing materials to prevent agglomeration and the formation of silicides. A barrier layer can be used in a P+ or N+ gate to prevent dopant, hydrogen, or fluorine in-diffusion into the gate dielectric, reducing defect density and increasing performance and reliability. A barrier layer can be used in post gate stack and pre oxidation steps to prevent oxygen in-diffusion into active areas of the transistor. A barrier layer can be used to prevent oxidation of gate electrodes in subsequent processing steps when using materials such as polysilicon, Si-Ge, W or other transition metals. A barrier layer can be used with a storage dielectric, such as non-volatile random access memory, and may be used to reduce degradation of tunnel oxide performance.

Having described the present invention in detail and by reference to preferred embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the present invention defined in the appended claims.
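The processing windows quoted above (conventional: 700° C. to 1050° C., 10 seconds to 60 minutes, 760 Torr; present invention: 500° C. to 900° C., 30 seconds to 5 minutes, 450 Torr) lend themselves to a simple tabulation. The Python sketch below merely encodes those ranges; the data layout and all names (`CONVENTIONAL`, `INVENTION`, `in_window`) are illustrative assumptions and not part of the disclosure.

```python
# Processing windows quoted in the specification, encoded as range tables.
# The numeric values come from the text; the structure and all names are
# illustrative assumptions.

CONVENTIONAL = {
    "temp_c": (700, 1050),    # 700-1050 deg C
    "time_s": (10, 60 * 60),  # 10 seconds to 60 minutes
    "pressure_torr": 760,
}

INVENTION = {
    "temp_c": (500, 900),     # 500-900 deg C
    "time_s": (30, 5 * 60),   # 30 seconds to 5 minutes
    "pressure_torr": 450,
}

def in_window(window, temp_c, time_s, pressure_torr):
    """Return True if the given conditions fall inside a processing window."""
    t_lo, t_hi = window["temp_c"]
    s_lo, s_hi = window["time_s"]
    return (t_lo <= temp_c <= t_hi
            and s_lo <= time_s <= s_hi
            and pressure_torr == window["pressure_torr"])
```

For example, the FIG. 2B condition (850° C., 60 seconds, 450 Torr) satisfies `in_window(INVENTION, 850, 60, 450)` but not the conventional window, which assumes 760 Torr.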
A method of forming (and an apparatus for forming) a metal oxide layer on a substrate, particularly a semiconductor substrate or substrate assembly, using a vapor deposition process, one or more alcohols, and one or more metal-containing precursor compounds.
What Is Claimed Is:

1. A method of manufacturing a semiconductor structure, the method comprising: providing a semiconductor substrate or substrate assembly; providing at least one alcohol of the formula R(OH)r wherein R is an organic group and r is 1 to 3; providing at least one metal-containing precursor compound of the formula M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II, wherein: M1 and M2 are each independently a metal; R1, R2, R3, and R4 are each independently hydrogen or an organic group; w is 0 to 4; z is 1 to 8; q is 1 to 5; and w, z, and q are dependent on the oxidation states of the metals; and contacting the precursor compounds to form a metal oxide layer on one or more surfaces of the semiconductor substrate or substrate assembly using a vapor deposition process.

2. The method of claim 1 wherein the semiconductor substrate or substrate assembly is a silicon wafer.

3. The method of claim 1 wherein the metal oxide layer is a dielectric layer.

4. The method of claim 3 wherein the metal oxide dielectric layer comprises two or more different metals.

5. The method of claim 4 wherein the two or more different metals are in the form of alloys, solid solutions, or nanolaminates.

6. The method of claim 1 wherein M1 and M2 are each independently selected from the group of metals consisting of Groups 3, 4, 5, 6, 7, 13, 14, and the lanthanides.

7. The method of claim 6 wherein M1 and M2 are each independently selected from the group of metals consisting of Y, La, Pr, Nd, Gd, Ti, Zr, Hf, Nb, Ta, Al, and Si.

8. The method of claim 1 wherein the metal oxide layer has a thickness of about 30 Å to about 80 Å.

9. The method of claim 1 wherein each R is independently a (C1-C10) organic group.

10. The method of claim 1 wherein R1, R2, R3, and R4 are each independently hydrogen or a (C1-C6) organic group.

11. The method of claim 1 wherein w is 0 to 2 and z is 2 to 6.

12. The method of claim 1 wherein q is 2 to 3.

13. The method of claim 1 wherein the metal oxide layer comprises one metal.

14. The method of claim 1 wherein the metal oxide layer comprises anatase TiO2.

15. A method of manufacturing a semiconductor structure, the method comprising: providing a semiconductor substrate or substrate assembly within a deposition chamber; providing at least one alcohol of the formula R(OH)r wherein R is an organic group and r is 1 to 3; providing at least one metal-containing precursor compound of the formula M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II, wherein: M1 and M2 are each independently a metal; R1, R2, R3, and R4 are each independently hydrogen or an organic group; w is 0 to 4; z is 1 to 8; q is 1 to 5; and w, z, and q are dependent on the oxidation states of the metals; vaporizing the precursor compounds to form vaporized precursor compounds; and directing the vaporized precursor compounds to the semiconductor substrate or substrate assembly to form a metal oxide dielectric layer on one or more surfaces of the semiconductor substrate or substrate assembly.

16. The method of claim 15 wherein the precursor compounds are vaporized in the presence of an inert carrier gas.

17. The method of claim 15 wherein M1 and M2 are each independently selected from the group of metals consisting of Groups 3, 4, 5, 6, 7, 13, 14, and the lanthanides.

18. The method of claim 15 wherein vaporizing and directing the precursor compounds is accomplished using a chemical vapor deposition process.

19. The method of claim 18 wherein the temperature of the semiconductor substrate or substrate assembly is about 100° C. to about 600° C.

20. The method of claim 18 wherein the semiconductor substrate or substrate assembly is in a deposition chamber having a pressure of about 0.1 torr to about 10 torr.

21.
The method of claim 18 wherein vaporizing and directing the precursor compounds is accomplished using an atomic layer deposition process comprising a plurality of deposition cycles.

22. The method of claim 21 wherein during the atomic layer deposition process the metal-containing layer is formed by alternately introducing the precursor compounds during each deposition cycle.

23. The method of claim 21 wherein the temperature of the semiconductor substrate or substrate assembly is about 25° C. to about 400° C.

24. The method of claim 21 wherein the semiconductor substrate or substrate assembly is in a deposition chamber having a pressure of about 10⁻⁴ torr to about 1 torr.

25. The method of claim 15 wherein the metal oxide layer comprises one metal.

26. A method of forming a metal oxide layer on a substrate, the method comprising: providing a substrate; providing at least one alcohol of the formula R(OH)r wherein R is an organic group and r is 1 to 3; providing at least one metal-containing precursor compound of the formula M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II, wherein: M1 and M2 are each independently a metal; R1, R2, R3, and R4 are each independently hydrogen or an organic group; w is 0 to 4; z is 1 to 8; q is 1 to 5; and w, z, and q are dependent on the oxidation states of the metals; and contacting the precursor compounds to form a metal oxide layer on the substrate using a vapor deposition process.

27. The method of claim 26 wherein the substrate is a silicon wafer.

28. The method of claim 26 wherein M1 and M2 are each independently selected from the group of metals consisting of Groups 3, 4, 5, 6, 7, 13, 14, and the lanthanides.

29. The method of claim 28 wherein M1 and M2 are each independently selected from the group of metals consisting of Y, La, Pr, Nd, Gd, Ti, Zr, Hf, Nb, Ta, Al, and Si.

30. The method of claim 26 wherein the metal oxide layer has a thickness of about 30 Å to about 80 Å.

31. The method of claim 26 wherein each R is independently a (C1-C10) organic group.

32. The method of claim 26 wherein R1, R2, R3, and R4 are each independently hydrogen or a (C1-C6) organic group.

33. The method of claim 26 wherein w is 0 to 2 and z is 2 to 6.

34. The method of claim 26 wherein q is 2 to 3.

35. The method of claim 26 wherein the metal oxide layer comprises one metal.

36. The method of claim 26 wherein the metal oxide layer comprises anatase TiO2.

37. A method of forming a metal oxide layer on a substrate, the method comprising: providing a substrate; providing at least one alcohol of the formula R(OH)r wherein R is an organic group and r is 1 to 3; providing at least one metal-containing precursor compound of the formula M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II, wherein: M1 and M2 are each independently a metal; R1, R2, R3, and R4 are each independently hydrogen or an organic group; w is 0 to 4; z is 1 to 8; q is 1 to 5; and w, z, and q are dependent on the oxidation states of the metals; vaporizing the precursor compounds to form vaporized precursor compounds; and directing the vaporized precursor compounds to the substrate to form a metal oxide layer on the substrate.

38. The method of claim 37 wherein vaporizing and directing the precursor compounds is accomplished using a chemical vapor deposition process.

39. The method of claim 37 wherein vaporizing and directing the precursor compounds is accomplished using an atomic layer deposition process comprising a plurality of deposition cycles.

40. The method of claim 37 wherein the metal oxide layer comprises one metal.

41.
A method of manufacturing a memory device structure, the method comprising: providing a substrate having a first electrode thereon; providing at least one alcohol of the formula R(OH)r wherein R is an organic group and r is 1 to 3; providing at least one metal-containing precursor compound of the formula M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II, wherein: M1 and M2 are each independently a metal; R1, R2, R3, and R4 are each independently hydrogen or an organic group; w is 0 to 4; z is 1 to 8; q is 1 to 5; and w, z, and q are dependent on the oxidation states of the metals; vaporizing the precursor compounds to form vaporized precursor compounds; directing the vaporized precursor compounds to the substrate to form a metal oxide dielectric layer on the first electrode of the substrate; and forming a second electrode on the dielectric layer.

42. The method of claim 41 wherein vaporizing and directing the precursor compounds is accomplished using a chemical vapor deposition process.

43. The method of claim 41 wherein vaporizing and directing the precursor compounds is accomplished using an atomic layer deposition process comprising a plurality of deposition cycles.

44. The method of claim 41 wherein the metal oxide dielectric layer comprises two or more different metals.

45. The method of claim 44 wherein the two or more different metals are in the form of alloys, solid solutions, or nanolaminates.

46. The method of claim 41 wherein the metal oxide dielectric layer comprises one or more of ZrO2, HfO2, Ta2O5, Al2O3, TiO2, and an oxide of a lanthanide.

47. A vapor deposition apparatus comprising: a vapor deposition chamber having a substrate positioned therein; one or more vessels comprising one or more alcohols of the formula R(OH)r wherein R is an organic group and r is 1 to 3; and one or more vessels comprising one or more precursor compounds of the formula M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II, wherein: M1 and M2 are each independently a metal; R1, R2, R3, and R4 are each independently hydrogen or an organic group; w is 0 to 4; z is 1 to 8; q is 1 to 5; and w, z, and q are dependent on the oxidation states of the metals.

48. The apparatus of claim 47 wherein the substrate is a silicon wafer.

49. The apparatus of claim 47 further comprising one or more sources of an inert carrier gas for transferring the precursors to the vapor deposition chamber.
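The subscript ranges recited in the claims above (w is 0 to 4, z is 1 to 8, q is 1 to 5, with narrower preferred ranges of w 0 to 2, z 2 to 6, and q 2 to 3) can be restated compactly as a check. The Python sketch below is illustrative only; the function names are assumptions and do not appear in the claims.

```python
# Subscript constraints for the precursor formulas recited in the claims:
#   Formula I:  M1(NR1)w(NR2R3)z  with w in 0..4 and z in 1..8
#   Formula II: M2R4q             with q in 1..5
# Function names are illustrative assumptions, not claim terms.

def valid_formula_i(w: int, z: int) -> bool:
    """Check the subscript ranges for Formula I."""
    return 0 <= w <= 4 and 1 <= z <= 8

def valid_formula_ii(q: int) -> bool:
    """Check the subscript range for Formula II."""
    return 1 <= q <= 5
```

Every combination in the preferred ranges (w 0 to 2 with z 2 to 6, and q 2 to 3) necessarily passes these broader checks.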
SYSTEMS AND METHODS FOR FORMING METAL OXIDES USING ALCOHOLS

FIELD OF THE INVENTION

This invention relates to methods of forming a metal oxide layer on a substrate using one or more alcohols and one or more metal-containing precursor compounds during a vapor deposition process. The precursor compounds and methods are particularly suitable for the formation of metal oxide layers on semiconductor substrates or substrate assemblies.

BACKGROUND OF THE INVENTION

The continuous shrinkage of microelectronic devices such as capacitors and gates over the years has led to a situation where the materials traditionally used in integrated circuit technology are approaching their performance limits. Silicon (i.e., doped polysilicon) has generally been the substrate of choice, and silicon dioxide (SiO2) has frequently been used as the dielectric material with silicon to construct microelectronic devices. However, when the SiO2 layer is thinned to 1 nm (i.e., a thickness of only 4 or 5 molecules), as is desired in the newest microdevices, the layer no longer effectively performs as an insulator due to the tunneling current running through it. Thus, new high dielectric constant materials are needed to extend device performance. Such materials need to demonstrate high permittivity, barrier height to prevent tunneling, stability in direct contact with silicon, and good interface quality and film morphology. Furthermore, such materials must be compatible with the gate material, electrodes, semiconductor processing temperatures, and operating conditions. High quality thin oxide films of metals, such as ZrO2, HfO2, Al2O3, and YSZ, deposited on semiconductor wafers have recently gained interest for use in memories (e.g., dynamic random access memory (DRAM) devices, static random access memory (SRAM) devices, and ferroelectric memory (FERAM) devices).
These materials have high dielectric constants and therefore are attractive as replacements in memories for SiO2 where very thin layers are required. These metal oxide layers are thermodynamically stable in the presence of silicon, minimizing silicon oxidation upon thermal annealing, and appear to be compatible with metal gate electrodes. Specifically, for gate dielectrics, La2O3, HfO2, and ZrO2 are also promising as they possess relatively high values for permittivity and bandgap.

This discovery has led to an effort to investigate various deposition processes to form layers, especially dielectric layers, based on metal oxides. Such deposition processes have included vapor deposition, metal thermal oxidation, and high vacuum sputtering. Vapor deposition processes, which include chemical vapor deposition (CVD) and atomic layer deposition (ALD), are very appealing as they provide for excellent control of dielectric uniformity and thickness on a substrate. But vapor deposition processes typically involve the co-reaction of reactive metal precursor compounds with an oxygen source such as oxygen or water, either of which can cause formation of an undesirable SiO2 interfacial layer. Thus, an effort is underway to develop water- and oxygen-free vapor deposition processes.

Ritala et al., "Atomic Layer Deposition of Oxide Thin Films with Metal Alkoxides as Oxygen Sources," SCIENCE, 288:319-321 (2000) describe a chemical approach to ALD of thin oxide films. In this approach, a metal alkoxide, serving as both a metal source and an oxygen source, reacts with another metal compound such as a metal chloride or metal alkyl to deposit a metal oxide on silicon without creating an interfacial silicon oxide layer. However, undesirable chlorine residues can also be formed. Furthermore, zirconium and hafnium alkyls are generally unstable and not commercially available. They would also likely leave carbon in the resultant films.
Despite these continual improvements in semiconductor dielectric layers, there remains a need for a vapor deposition process utilizing sufficiently volatile metal precursor compounds that can form a thin, high quality oxide layer, particularly on a semiconductor substrate.

SUMMARY OF THE INVENTION

This invention provides methods of vapor depositing a metal oxide layer on a substrate. These vapor deposition methods involve forming the layer by combining one or more alcohols with one or more metal organo-amine precursor compounds (e.g., alkylamines or alkylimines-alkylamines) and/or metal alkyl precursor compounds. Significantly, the methods of the present invention do not require the use of water or a strong oxidizer, thus reducing (and typically avoiding) the problems of producing an undesirable interfacial oxide layer between the desired metal oxide layer and the substrate, and of oxidizing other layers beneath the top layer. Typically and preferably, the layer is a dielectric layer.

The methods of the present invention involve forming a metal oxide layer on a substrate, such as a semiconductor substrate or substrate assembly in the manufacturing of a semiconductor structure. Such methods include: providing a substrate (preferably, a semiconductor substrate or substrate assembly); providing at least one alcohol of the formula R(OH)r wherein R is an organic group and r is 1 to 3; providing at least one metal-containing precursor compound of the formula M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II; and contacting the precursor compounds to form a metal oxide layer on one or more surfaces of the substrate using a vapor deposition process.
In Formulas I and II: M1 and M2 are each independently a metal (which is used herein to include metalloids or semimetals); R1, R2, R3, and R4 are each independently hydrogen or an organic group; w is 0 to 4; z is 1 to 8; q is 1 to 5; and w, z, and q are dependent on the oxidation states of the metals. In a preferred embodiment of the invention, a method is provided that includes: providing a substrate (preferably, a semiconductor substrate or substrate assembly) within a deposition chamber; providing at least one alcohol of the formula R(OH)r wherein R is an organic group and r is 1 to 3; providing at least one metal-containing precursor compound of the formula M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II; vaporizing the precursor compounds to form vaporized precursor compounds; and directing the vaporized precursor compounds to the substrate to form a metal oxide dielectric layer on one or more surfaces of the substrate. In Formulas I and II: M1 and M2 are each independently a metal; R1, R2, R3, and R4 are each independently hydrogen or an organic group; w is 0 to 4; z is 1 to 8; q is 1 to 5; and w, z, and q are dependent on the oxidation states of the metals. In another preferred embodiment of the invention, a method of manufacturing a memory device structure is provided. The method includes: providing a substrate having a first electrode thereon; providing at least one alcohol of the formula R(OH)r wherein R is an organic group and r is 1 to 3; providing at least one metal-containing precursor compound of the formula M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II; vaporizing the precursor compounds to form vaporized precursor compounds; directing the vaporized precursor compounds to the substrate to form a metal oxide dielectric layer on the first electrode of the substrate; and forming a second electrode on the dielectric layer.
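The statement that w, z, and q depend on the oxidation states of the metals can be illustrated with a small sketch. This is one simple reading, not stated explicitly in the text: each imido group (=NR1) in Formula I is assumed to satisfy two units of the metal's valence and each amido group (-NR2R3) one unit, so the oxidation state equals 2w + z.

```python
# Sketch of the valence bookkeeping for Formula I, M1(NR1)w(NR2R3)z.
# Assumption (not stated in the text): imido (=NR1) counts as 2 valence
# units and amido (-NR2R3) as 1, so oxidation state = 2*w + z.
def formula_i_valence(w: int, z: int) -> int:
    """Total valence consumed by w imido and z amido ligands."""
    return 2 * w + z

# Tetrakis(dimethylamino)titanium, a Ti(IV) compound: w = 0, z = 4.
assert formula_i_valence(0, 4) == 4
# A hypothetical imido-amido M(V) precursor: w = 1, z = 3.
assert formula_i_valence(1, 3) == 5
```

Under this reading, the claimed ranges (w of 0 to 4, z of 1 to 8) simply span the oxidation states of the metals contemplated.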
In Formulas I and II: M1 and M2 are each independently a metal; R1, R2, R3, and R4 are each independently hydrogen or an organic group; w is 0 to 4; z is 1 to 8; q is 1 to 5; and w, z, and q are dependent on the oxidation states of the metals. Also provided is a vapor deposition apparatus that includes: a vapor deposition chamber having a substrate positioned therein; one or more vessels comprising one or more alcohols of the formula R(OH)r wherein R is an organic group and r is 1 to 3; and one or more vessels comprising one or more precursor compounds of the formula M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II. In Formulas I and II: M1 and M2 are each independently a metal; R1, R2, R3, and R4 are each independently hydrogen or an organic group; w is 0 to 4; z is 1 to 8; q is 1 to 5; and w, z, and q are dependent on the oxidation states of the metals. The methods of the present invention can utilize a chemical vapor deposition (CVD) process, which can be pulsed, or an atomic layer deposition (ALD) process (a self-limiting vapor deposition process that includes a plurality of deposition cycles, typically with purging between the cycles). Preferably, the methods of the present invention use ALD. For certain ALD processes, the precursor compounds can be alternately introduced into a deposition chamber during each deposition cycle. "Semiconductor substrate" or "substrate assembly" as used herein refers to a semiconductor substrate such as a base semiconductor layer or a semiconductor substrate having one or more layers, structures, or regions formed thereon. A base semiconductor layer is typically the lowest layer of silicon material on a wafer or a silicon layer deposited on another material, such as silicon on sapphire.
When reference is made to a substrate assembly, various process steps may have been previously used to form or define regions, junctions, various structures or features, and openings such as capacitor plates or barriers for capacitors. "Layer" as used herein refers to any metal oxide layer that can be formed on a substrate from the precursor compounds of this invention using a vapor deposition process. The term "layer" is meant to include layers specific to the semiconductor industry, such as "barrier layer," "dielectric layer," and "conductive layer." (The term "layer" is synonymous with the term "film" frequently used in the semiconductor industry.) The term "layer" is also meant to include layers found in technology outside of semiconductor technology, such as coatings on glass. "Precursor compound" as used herein refers to an alcohol or a metal-containing compound capable of forming, either alone or with other precursor compounds, a metal oxide layer on a substrate in a vapor deposition process. "Deposition process" and "vapor deposition process" as used herein refer to a process in which a metal oxide layer is formed on one or more surfaces of a substrate (e.g., a doped polysilicon wafer) from vaporized precursor compound(s). Specifically, one or more metal precursor (i.e., metal-containing precursor) compounds are vaporized and directed to one or more surfaces of a heated substrate (e.g., a semiconductor substrate or substrate assembly) placed in a deposition chamber. These precursor compounds form (e.g., by reacting or decomposing) a non-volatile, thin, uniform metal oxide layer on the surface(s) of the substrate. For the purposes of this invention, the term "vapor deposition process" is meant to include both chemical vapor deposition processes (including pulsed chemical vapor deposition processes) and atomic layer deposition processes.
"Chemical vapor deposition" (CVD) as used herein refers to a vapor deposition process wherein the desired layer is deposited on the substrate from vaporized metal precursor compounds (and any optional reaction gases used) within a deposition chamber with no effort made to separate the reaction components. In contrast to a"simple"CVD process that involves the substantial simultaneous use of the precursor compounds and any reaction gases,"pulsed" CVD alternately pulses these materials into the deposition chamber, but does not rigorously avoid intermixing of the precursor and reaction gas streams, as is typically done in atomic layer deposition or ALD (discussed in greater detail below). "Atomic layer deposition" (ALD) as used herein refers to a vapor deposition process in which numerous consecutive deposition cycles are conducted in a deposition chamber. Typically, during each cycle the metal precursor is chemisorbed to the substrate surface; excess precursor is purged out; a subsequent precursor and/or reaction gas is introduced to react with the chemisorbed layer; and excess reaction gas (if used) and by-products are removed. As compared to the one cycle chemical vapor deposition (CVD) process, the longer duration multi-cycle ALD process allows for improved control of layer thickness by self-limiting layer growth and minimizing detrimental gas phase reactions by separation of the reaction components. The term"atomic layer deposition"as used herein is also meant to include the related terms"atomic layer epitaxy" (ALE), molecular beam epitaxy (MBE), gas source MBE, organometallic MBE, and chemical beam epitaxy when performed with alternating pulses of precursor compound (s), reaction gas (es), and purge (i. e., inert carrier) gas. "Chemisorption"as used herein refers to the chemical adsorption of vaporized reactive precursor compounds on the surface of a substrate. 
The adsorbed species are irreversibly bound to the substrate surface as a result of relatively strong binding forces characterized by high adsorption energies (e.g., >30 kcal/mol), comparable in strength to ordinary chemical bonds. The chemisorbed species typically form a monolayer on the substrate surface. (See "The Condensed Chemical Dictionary," 10th edition, revised by G. G. Hawley, published by Van Nostrand Reinhold Co., New York, 225 (1981).) The technique of ALD is based on the principle of the formation of a saturated monolayer of reactive precursor molecules by chemisorption. In ALD one or more appropriate precursor compounds or reaction gases are alternately introduced (e.g., pulsed) into a deposition chamber and chemisorbed onto the surfaces of a substrate. Each sequential introduction of a reactive compound (e.g., one or more precursor compounds and one or more reaction gases) is typically separated by an inert carrier gas purge. Each precursor compound co-reaction adds a new atomic layer to previously deposited layers to form a cumulative solid layer. The cycle is repeated, typically for several hundred times, to gradually form the desired layer thickness. It should be understood that ALD can alternately utilize one precursor compound, which is chemisorbed, and one reaction gas, which reacts with the chemisorbed species.

BRIEF DESCRIPTION OF THE DRAWINGS

Figures 1-3 are exemplary capacitor constructions. Figure 4 is a perspective view of a vapor deposition coating system suitable for use in the method of the present invention.
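The alternating pulse-purge sequence described above can be sketched as a minimal step generator. This is an illustrative sketch only, not the patent's apparatus: the step names and the example precursor strings are hypothetical placeholders for whatever metal-containing precursor and alcohol a given process uses.

```python
# Minimal sketch of one ALD deposition cycle as described in the text:
# alternating precursor introductions, each separated by an inert-gas purge.
# The precursor names passed in are illustrative placeholders.
def ald_cycle(metal_precursor: str, oxygen_source: str, purge_gas: str = "N2"):
    """Yield the four steps of a single ALD cycle as (action, species) pairs."""
    yield ("pulse", metal_precursor)  # chemisorb metal precursor as a monolayer
    yield ("purge", purge_gas)        # remove excess precursor and by-products
    yield ("pulse", oxygen_source)    # react the alcohol with the chemisorbed layer
    yield ("purge", purge_gas)        # remove excess reactant and by-products

steps = list(ald_cycle("Hf(NMe2)4", "isopropyl alcohol"))
assert [action for action, _ in steps] == ["pulse", "purge", "pulse", "purge"]
```

Repeating this cycle several hundred times, as the text notes, builds the layer up to the desired thickness.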
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION

The present invention provides methods of forming a metal oxide layer on a substrate (preferably a semiconductor substrate or substrate assembly) using one or more alcohols of the formula R(OH)r wherein r is 1 to 3 (preferably, 1) and one or more metal-containing precursor compounds of the formulas M1(NR1)w(NR2R3)z (Formula I), M2R4q (Formula II), or Lewis Base adducts of Formula II. In Formulas I and II: M1 and M2 are each independently any metal (main group, transition metal, lanthanide); each of R1, R2, and R3 is independently hydrogen or an organic group; w is 0 to 4 (preferably, 0 to 2); z is 1 to 8 (preferably, 2 to 6); q is 1 to 5 (preferably, 2 to 3); and w, z, and q are dependent on the oxidation states of the metals. The metal oxide layer may include one or more different metals and is typically of the formula MxOy (Formula III), wherein M can be one or more of M1 and M2 as defined above (i.e., the oxide can be a single metal oxide or a mixed metal oxide). Optionally, the metal oxide layer is a mixed metal oxide (i.e., it includes two or more metals). More preferably, the metal oxide layer includes only one metal. The metal oxide layer (particularly if it is a dielectric layer) preferably includes one or more of ZrO2, HfO2, Ta2O5, Al2O3, TiO2, and an oxide of a lanthanide. A particularly preferred metal oxide layer includes TiO2, which is preferably in the anatase phase. If the metal oxide layer includes two or more different metals, the metal oxide layer can be in the form of alloys, solid solutions, or nanolaminates. Preferably, these have dielectric properties. The substrate on which the metal oxide layer is formed is preferably a semiconductor substrate or substrate assembly. Any suitable semiconductor material is contemplated, such as, for example, conductively doped polysilicon (for this invention simply referred to as "silicon").
A substrate assembly may also contain a layer that includes platinum, iridium, rhodium, ruthenium, ruthenium oxide, titanium nitride, tantalum nitride, tantalum-silicon-nitride, silicon dioxide, aluminum, gallium arsenide, glass, etc., and other existing or to-be-developed materials used in semiconductor constructions, such as dynamic random access memory (DRAM) devices and static random access memory (SRAM) devices, for example. Substrates other than semiconductor substrates or substrate assemblies can be used in methods of the present invention. These include, for example, fibers, wires, etc. If the substrate is a semiconductor substrate or substrate assembly, the layers can be formed directly on the lowest semiconductor surface of the substrate, or they can be formed on any of a variety of the layers (i.e., surfaces) as in a patterned wafer, for example. The precursor compounds described herein may include a wide variety of metals. As used herein, "metal" includes all metals of the periodic table (including main group metals, transition metals, lanthanides, and actinides) as well as metalloids or semimetals. For certain methods of the present invention, preferably, each metal M is selected from the group of metals of Groups IIIB (Sc, Y), IVB (Ti, Zr, Hf), VB (V, Nb, Ta), VIB (Cr, Mo, W), VIIB (Mn, Tc, Re), IIIA (Al, Ga, In, Tl), IVA (Si, Ge, Sn, Pb), and the lanthanides (La, Ce, Pr, etc.), which are also referred to as Groups 3-7, 13, 14, and the lanthanides of the Periodic Chart. More preferably, each metal M is selected from the group of metals of Groups IIIB (Sc, Y), IVB (Ti, Zr, Hf), VB (V, Nb, Ta), VIB (Cr, Mo, W), VIIB (Mn, Tc, Re), IVA (Si, Ge, Sn, Pb), and the lanthanides (La, Ce, Pr, etc.), which are also referred to as Groups 3-7, 14, and the lanthanides of the Periodic Chart.
Even more preferably, each metal M is selected from the group of metals of Groups IIIB (Sc, Y), IVB (Ti, Zr, Hf), VB (V, Nb, Ta), VIB (Cr, Mo, W), VIIB (Mn, Tc, Re), and the lanthanides (La, Ce, Pr, etc.), which are also referred to as Groups 3-7 and the lanthanides of the Periodic Chart. For certain embodiments, a preferred group of metals for M1 or M2 is selected from the group of Y, La, Pr, Nd, Gd, Ti, Zr, Hf, Nb, Ta, Si, and Al. For certain other embodiments, a preferred group of metals for M2 is Y, La, Pr, Nd, Gd, Ti, Zr, Hf, Nb, Ta, and Si, and a more preferred group of metals for M2 is Y, La, Pr, Nd, Gd, Ti, Zr, Hf, Nb, and Ta. Each R in the precursor compounds (i.e., the alcohols and the metal-containing precursor compounds of the formulas M1(NR1)w(NR2R3)z (Formula I) and M2R4q (Formula II)) is independently hydrogen or an organic group, preferably an organic group. As used herein, the term "organic group" is used for the purpose of this invention to mean a hydrocarbon group that is classified as an aliphatic group, cyclic group, or combination of aliphatic and cyclic groups (e.g., alkaryl and aralkyl groups). In the context of the present invention, suitable organic groups for precursor compounds of this invention are those that do not interfere with the formation of a metal oxide layer using vapor deposition techniques. In the context of the present invention, the term "aliphatic group" means a saturated or unsaturated linear or branched hydrocarbon group. This term is used to encompass alkyl, alkenyl, and alkynyl groups, for example. The term "alkyl group" means a saturated linear or branched monovalent hydrocarbon group including, for example, methyl, ethyl, n-propyl, isopropyl, t-butyl, amyl, heptyl, and the like. The term "alkenyl group" means an unsaturated, linear or branched monovalent hydrocarbon group with one or more olefinically unsaturated groups (i.e., carbon-carbon double bonds), such as a vinyl group.
The term"alkynyl group"means an unsaturated, linear or branched monovalent hydrocarbon group with one or more carbon-carbon triple bonds. The term"cyclic group"means a closed ring hydrocarbon group that is classified as an alicyclic group, aromatic group, or heterocyclic group. The term"alicyclic group"means a cyclic hydrocarbon group having properties resembling those of aliphatic groups. The term"aromatic group"or"aryl group"means a mono-or polynuclear aromatic hydrocarbon group. The term"heterocyclic group"means a closed ring hydrocarbon in which one or more of the atoms in the ring is an element other than carbon (e. g. , nitrogen, oxygen, sulfur, etc.). As a means of simplifying the discussion and the recitation of certain terminology used throughout this application, the terms"group"and"moiety" are used to differentiate between chemical species that allow for substitution or that may be substituted and those that do not so allow for substitution or may not be so substituted. Thus, when the term"group"is used to describe a chemical substituent, the described chemical material includes the unsubstituted group and that group with nonperoxidic O, N, Si, F, or S atoms, for example, in the chain as well as carbonyl groups or other conventional substituents. Where the term"moiety"is used to describe a chemical compound or substituent, only an unsubstituted chemical material is intended to be included. For example, the phrase"alkyl group"is intended to include not only pure open chain saturated hydrocarbon alkyl substituents, such as methyl, ethyl, propyl, t-butyl, and the like, but also alkyl substituents bearing further substituents known in the art, such as hydroxy, alkoxy, alkylsulfonyl, halogen atoms, cyano, nitro, amino, carboxyl, etc. Thus,"alkyl group"includes ether groups, haloalkyls, nitroalkyls, carboxyalkyls, hydroxyalkyls, sulfoalkyls, etc. 
On the other hand, the phrase "alkyl moiety" is limited to the inclusion of only pure open chain saturated hydrocarbon alkyl substituents, such as methyl, ethyl, propyl, t-butyl, and the like. For all the precursor compounds (both metal-containing and alcohols) of this invention, each R is independently and preferably hydrogen or an organic group, more preferably a (C1-C10) organic group, even more preferably a (C1-C8) organic group, even more preferably a (C1-C6) organic group, and even more preferably a "lower" (i.e., C1-C4) organic group. Even more preferably, each of these organic groups is an alkyl group. Most preferably, each organic group is an organic moiety, and preferably, an alkyl moiety. In certain embodiments, the carbon atoms of the R groups of the alcohol precursor compounds can be substituted with fluorine atoms. Preferred alcohols include ethanol, isopropyl alcohol, n-propyl alcohol, n-butanol, and ethylene glycol monomethyl ether. In certain embodiments, the carbon atoms of the R groups of the metal-containing precursor compounds are optionally replaced by or substituted with silicon, fluorine, oxygen, and/or nitrogen atoms or groups containing such atoms. Thus, silylated amines and silylated imine-amines are within the scope of Formula I. For the compounds of Formula I, M1(NR1)w(NR2R3)z, R1, R2, and R3 are each preferably a (C1-C6) organic group. Examples of suitable precursor compounds include tetrakis(dimethylamino)titanium, tetrakis(dimethylamino)hafnium, tetrakis(ethylmethylamino)hafnium, and Al(NMe2)2(N(Me)CH2CH2NMe2) (wherein Me = methyl). Such compounds are either commercially available from sources such as Strem Chemical Co., or they can be prepared using standard techniques (e.g., by reacting metal chlorides with the corresponding lithium dialkyl amides). For the compounds of Formula II, M2R4q and Lewis Base adducts thereof, each R4 is preferably hydrogen or a (C1-C4) organic group.
Preferably, the compounds of Formula II do not include compounds in which all R4 groups are methyl (particularly when M2 is aluminum). Examples of suitable precursor compounds include AlH3, AlMe3, AlHMe2, ZnEt2, and AlH3·NMe3. Such compounds are either commercially available from sources such as Sigma-Aldrich, or they can be prepared using standard techniques (e.g., by reacting Grignard reagents with metal halides). Various precursor compounds can be used in various combinations, optionally with one or more organic solvents (particularly for CVD processes), to form a precursor composition. The precursor compounds may be liquids or solids at room temperature (preferably, they are liquids at the vaporization temperature). Typically, they are liquids sufficiently volatile to be employed using known vapor deposition techniques. However, as solids they may also be sufficiently volatile that they can be vaporized or sublimed from the solid state using known vapor deposition techniques. If they are less volatile solids, they are preferably sufficiently soluble in an organic solvent or have melting points below their decomposition temperatures such that they can be used in flash vaporization, bubbling, microdroplet formation techniques, etc. Herein, vaporized precursor compounds may be used either alone or optionally with vaporized molecules of other precursor compounds or optionally with vaporized solvent molecules, if used. As used herein, "liquid" refers to a solution or a neat liquid (a liquid at room temperature or a solid at room temperature that melts at an elevated temperature). As used herein, "solution" does not require complete solubility of the solid but may allow for some undissolved solid, as long as there is a sufficient amount of the solid delivered by the organic solvent into the vapor phase for chemical vapor deposition processing.
If solvent dilution is used in deposition, the total molar concentration of solvent vapor generated may also be considered an inert carrier gas. For metal-containing precursors, solvents can be used if desired. The solvents that are suitable for this application (particularly for a CVD process) can be one or more of the following: aliphatic hydrocarbons or unsaturated hydrocarbons (C3-C20, and preferably C5-C10, cyclic, branched, or linear), aromatic hydrocarbons (C5-C20, and preferably C5-C10), halogenated hydrocarbons, silylated hydrocarbons such as alkylsilanes, alkylsilicates, ethers, polyethers, thioethers, esters, lactones, ammonia, amides, amines (aliphatic or aromatic, primary, secondary, or tertiary), polyamines, nitriles, cyanates, isocyanates, thiocyanates, silicone oils, alcohols, or compounds containing combinations of any of the above or mixtures of one or more of the above. The compounds are also generally compatible with each other, so that mixtures of variable quantities of the precursor compounds will not interact to significantly change their physical properties. For this invention, preferably no reaction gas is employed, to minimize oxidation of the substrate (typically silicon) to its oxide (typically silicon dioxide). That oxidizing process can also cause detrimental oxidation to other substrates such as metal electrodes or nitride barriers. Also, as is known in the art, some layers can be pervious to oxidizing gases and cause detrimental oxidation of a layer below the top substrate layer. The precursor compounds can be vaporized in the presence of an inert carrier gas if desired. Additionally, an inert carrier gas can be used in purging steps in an ALD process. The inert carrier gas is typically selected from the group consisting of nitrogen, helium, argon, and combinations thereof.
In the context of the present invention, an inert carrier gas is one that does not interfere with the formation of the metal oxide layer. Whether done in the presence of an inert carrier gas or not, the vaporization is preferably done in the absence of oxygen to avoid oxygen contamination of the layer (e.g., oxidation of silicon to form silicon dioxide). The deposition process for this invention is a vapor deposition process. Vapor deposition processes are generally favored in the semiconductor industry due to the process capability to quickly provide highly conformal layers even within deep contacts and other openings. Chemical vapor deposition (CVD) and atomic layer deposition (ALD) are two vapor deposition processes often employed to form thin, continuous, uniform metal oxide (preferably dielectric) layers onto semiconductor substrates. Using either vapor deposition process, typically one or more precursor compounds are vaporized in a deposition chamber and optionally combined with one or more reaction gases to form a metal oxide layer onto a substrate. It will be readily apparent to one skilled in the art that the vapor deposition process may be enhanced by employing various related techniques such as plasma assistance, photo assistance, and laser assistance, as well as other techniques. The final layer (preferably, a dielectric layer) formed preferably has a thickness in the range of about 10 Å to about 500 Å. More preferably, the thickness of the metal oxide layer is in the range of about 30 Å to about 80 Å. In most vapor deposition processes, the precursor compound(s) are typically reacted with an oxidizing or reducing reaction gas at elevated temperatures to form the metal oxide layer. However, in the practice of this invention, no such reaction gas is needed because the alcohol provides the oxygen for the film formed. However, oxidizing gases, such as O2, O3, H2O, H2O2, and N2O, can be used if desired.
Chemical vapor deposition (CVD) has been extensively used for the preparation of metal oxide layers, such as dielectric layers, in semiconductor processing because of its ability to provide highly conformal and high quality dielectric layers at relatively fast processing times. The desired precursor compounds are vaporized and then introduced into a deposition chamber containing a heated substrate with optional reaction gases and/or inert carrier gases. In a typical CVD process, vaporized precursors are contacted with reaction gas(es) at the substrate surface to form a layer (e.g., a dielectric layer). The single deposition cycle is allowed to continue until the desired thickness of the layer is achieved. Typical CVD processes generally employ precursor compounds in vaporization chambers that are separated from the process chamber wherein the deposition surface or wafer is located. For example, liquid precursor compounds are typically placed in bubblers and heated to a temperature at which they vaporize, and the vaporized liquid precursor compound is then transported by an inert carrier gas passing over the bubbler or through the liquid precursor compound. The vapors are then swept through a gas line to the deposition chamber for depositing a layer on the substrate surface(s) therein. Many techniques have been developed to precisely control this process. For example, the amount of precursor material transported to the deposition chamber can be precisely controlled by the temperature of the reservoir containing the precursor compound and by the flow of an inert carrier gas bubbled through or passed over the reservoir. Preferred embodiments of the precursor compounds described herein are particularly suitable for chemical vapor deposition (CVD). The deposition temperature at the substrate surface is preferably held at a temperature in a range of about 100°C to about 600°C, more preferably in the range of about 200°C to about 500°C.
The deposition chamber pressure is preferably maintained at a deposition pressure of about 0.1 torr to about 10 torr. The partial pressure of precursor compounds in the inert carrier gas is preferably about 0.001 torr to about 10 torr. Several modifications of the CVD process and chambers are possible, for example, using atmospheric pressure chemical vapor deposition, low pressure chemical vapor deposition (LPCVD), plasma enhanced chemical vapor deposition (PECVD), hot wall or cold wall reactors, or any other chemical vapor deposition technique. Furthermore, pulsed CVD can be used, which is similar to ALD (discussed in greater detail below) but does not rigorously avoid intermixing of precursor and reactant gas streams. Also, for pulsed CVD, the deposition thickness is dependent on the exposure time, as opposed to ALD, which is self-limiting (discussed in greater detail below). A typical CVD process may be carried out in a chemical vapor deposition reactor, such as a deposition chamber available under the trade designation of 7000 from Genus, Inc. (Sunnyvale, CA), a deposition chamber available under the trade designation of 5000 from Applied Materials, Inc. (Santa Clara, CA), or a deposition chamber available under the trade designation of Prism from Novellus, Inc. (San Jose, CA). However, any deposition chamber suitable for performing CVD may be used. Alternatively, and preferably, the vapor deposition process employed in the methods of the present invention is a multi-cycle ALD process. Such a process is advantageous (particularly over a CVD process) in that it provides for optimum control of atomic-level thickness and uniformity of the deposited layer (e.g., a dielectric layer) and exposes the metal precursor compounds to lower volatilization and reaction temperatures, minimizing degradation.
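The preferred CVD operating windows quoted above (substrate temperature, chamber pressure, precursor partial pressure) can be collected into a simple range check. This is an illustrative sketch only; the function name and the idea of validating a recipe against these ranges are assumptions, not part of the described process.

```python
# Sketch of a range check against the preferred CVD conditions stated in the
# text: ~100-600 °C substrate temperature, ~0.1-10 torr chamber pressure,
# and ~0.001-10 torr precursor partial pressure in the inert carrier gas.
def within(lo: float, hi: float, value: float) -> bool:
    return lo <= value <= hi

def cvd_conditions_ok(temp_c: float, chamber_torr: float, partial_torr: float) -> bool:
    """Return True if a hypothetical CVD recipe sits inside the preferred ranges."""
    return (within(100, 600, temp_c)              # substrate temperature, °C
            and within(0.1, 10, chamber_torr)     # deposition pressure, torr
            and within(0.001, 10, partial_torr))  # precursor partial pressure, torr

assert cvd_conditions_ok(350, 1.0, 0.05)
assert not cvd_conditions_ok(700, 1.0, 0.05)  # above the preferred temperature range
```

In practice such limits would be tuned per precursor; the check only mirrors the numbers given in the text.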
Typically, in an ALD process, each reactant is pulsed sequentially onto a suitable substrate, typically at deposition temperatures of about 25°C to about 400°C (preferably about 150°C to about 300°C), which is generally lower than presently used in CVD processes. Under such conditions the film growth is typically self-limiting (i.e., when the reactive sites on a surface are used up in an ALD process, the deposition generally stops), ensuring not only excellent conformality but also good large area uniformity plus simple and accurate thickness control. Due to alternate dosing of the precursor compounds and/or reaction gases, detrimental vapor-phase reactions are inherently eliminated, in contrast to the CVD process, which is carried out by continuous co-reaction of the precursors and/or reaction gases. (See Vehkamaki et al., "Growth of SrTiO3 and BaTiO3 Thin Films by Atomic Layer Deposition," Electrochemical and Solid-State Letters, 2(10): 504-506 (1999).) A typical ALD process includes exposing an initial substrate to a first chemical species (e.g., a precursor compound of Formula I) to accomplish chemisorption of the species onto the substrate. Theoretically, the chemisorption forms a monolayer that is uniformly one atom or molecule thick on the entire exposed initial substrate; in other words, a saturated monolayer. Practically, chemisorption might not occur on all portions of the substrate. Nevertheless, such an imperfect monolayer is still a monolayer in the context of the present invention. In many applications, merely a substantially saturated monolayer may be suitable. A substantially saturated monolayer is one that will still yield a deposited layer exhibiting the quality and/or properties desired for such a layer. The first species is purged from over the substrate and a second chemical species (e.g.
, a different precursor compound of Formula I or a precursor compound of Formula II) is provided to react with the first monolayer of the first species. The second species is then purged and the steps are repeated with exposure of the second-species monolayer to the first species. In some cases, the two monolayers may be of the same species. As an option, the second species can react with the first species, but not chemisorb additional material thereto. That is, the second species can cleave some portion of the chemisorbed first species, altering such monolayer without forming another monolayer thereon. Also, a third species or more may be successively chemisorbed (or reacted) and purged just as described for the first and second species. Optionally, the second species (or third or subsequent) can include at least one reaction gas if desired. Purging may involve a variety of techniques including, but not limited to, contacting the substrate and/or monolayer with a carrier gas and/or lowering the pressure to below the deposition pressure to reduce the concentration of a species contacting the substrate and/or chemisorbed species. Examples of carrier gases include N2, Ar, He, etc. Purging may instead include contacting the substrate and/or monolayer with any substance that allows chemisorption by-products to desorb and reduces the concentration of a contacting species preparatory to introducing another species. The contacting species may be reduced to some suitable concentration or partial pressure known to those skilled in the art based on the specifications for the product of a particular deposition process. ALD is often described as a self-limiting process, in that a finite number of sites exist on a substrate to which the first species may form chemical bonds. The second species might only bond to the first species and thus may also be self-limiting.
Once all of the finite number of sites on a substrate are bonded with a first species, the first species will often not bond to other of the first species already bonded with the substrate. However, process conditions can be varied in ALD to promote such bonding and render ALD not self-limiting. Accordingly, ALD may also encompass a species forming other than one monolayer at a time by stacking of a species, forming a layer more than one atom or molecule thick. The described method indicates the "substantial absence" of the second precursor (i.e., second species) during chemisorption of the first precursor, since insignificant amounts of the second precursor might be present. According to the knowledge and the preferences of those with ordinary skill in the art, a determination can be made as to the tolerable amount of second precursor and process conditions selected to achieve the substantial absence of the second precursor. Thus, during the ALD process, numerous consecutive deposition cycles are conducted in the deposition chamber, each cycle depositing a very thin metal oxide layer (usually less than one monolayer, such that the growth rate on average is from about 0.2 to about 3.0 Angstroms per cycle), until a layer of the desired thickness is built up on the substrate of interest. The layer deposition is accomplished by alternately introducing (i.e., by pulsing) precursor compounds into the deposition chamber containing a semiconductor substrate, chemisorbing the precursor compound(s) as a monolayer onto the substrate surfaces, and then reacting the chemisorbed precursor compound(s) with the other co-reactive precursor compound(s). The pulse duration of precursor compound(s) and inert carrier gas(es) is sufficient to saturate the substrate surface. Typically, the pulse duration is from about 0.1 to about 5 seconds, preferably from about 0.2 to about 1 second.
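The alternating pulse/purge cycling described above can be sketched as a simple control loop. The following Python sketch is illustrative only: the `pulse` and `purge` valve-control helpers, the precursor names, and all timing defaults are assumptions for illustration, not part of the disclosure.

```python
def pulse(species, seconds):
    """Open the valve for `species` for `seconds` (stubbed for illustration)."""
    pass

def purge(seconds):
    """Flow inert carrier gas (e.g., Ar or N2) for `seconds` (stubbed)."""
    pass

def ald_deposit(target_thickness_A, growth_per_cycle_A=1.0,
                pulse_s=0.5, purge_s=1.0):
    """Alternate precursor pulses and inert-gas purges until the
    estimated film thickness reaches the target."""
    thickness = 0.0
    cycles = 0
    while thickness < target_thickness_A:
        pulse("precursor_A", pulse_s)    # chemisorb first species (monolayer)
        purge(purge_s)                   # remove excess first species
        pulse("precursor_B", pulse_s)    # react with the chemisorbed monolayer
        purge(purge_s)                   # remove by-products / excess species
        thickness += growth_per_cycle_A  # ~0.2 to 3.0 A per cycle on average
        cycles += 1
    return cycles, thickness
```

The per-cycle growth increment stands in for the self-limiting saturation step; a real controller would sequence valves 180-185 of Figure 4 rather than estimate thickness in software.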
In comparison to the predominantly thermally driven CVD, ALD is predominantly chemically driven. Accordingly, ALD is often conducted at much lower temperatures than CVD. During the ALD process, the substrate temperature is maintained at a temperature sufficiently low to maintain intact bonds between the chemisorbed precursor compound(s) and the underlying substrate surface and to prevent decomposition of the precursor compound(s). The temperature is also sufficiently high to avoid condensation of the precursor compound(s). Typically the substrate temperature is kept within the range of about 25°C to about 400°C (preferably about 150°C to about 300°C), which is generally lower than presently used in CVD processes. Thus, the first species or precursor compound is chemisorbed at this temperature. Surface reaction of the second species or precursor compound can occur at substantially the same temperature as chemisorption of the first precursor or, less preferably, at a substantially different temperature. Clearly, some small variation in temperature, as judged by those of ordinary skill, can occur but still be a substantially same temperature by providing a reaction rate statistically the same as would occur at the temperature of the first precursor chemisorption. Chemisorption and subsequent reactions could instead occur at exactly the same temperature. For a typical ALD process, the pressure inside the deposition chamber is kept at about 10-4 torr to about 1 torr, preferably about 10-4 torr to about 0.1 torr. Typically, the deposition chamber is purged with an inert carrier gas after the vaporized precursor compound(s) have been introduced into the chamber and/or reacted for each cycle. The inert carrier gas(es) can also be introduced with the vaporized precursor compound(s) during each cycle. The reactivity of a precursor compound can significantly influence the process parameters in ALD.
Under typical CVD process conditions, a highly reactive compound may react in the gas phase, generating particulates, depositing prematurely on undesired surfaces, producing poor films, and/or yielding poor step coverage or otherwise yielding non-uniform deposition. For at least such reason, a highly reactive compound might be considered not suitable for CVD. However, some compounds not suitable for CVD are superior ALD precursors. For example, if the first precursor is gas-phase reactive with the second precursor, such a combination of compounds might not be suitable for CVD, although they could be used in ALD. In the CVD context, concern might also exist regarding sticking coefficients and surface mobility, as known to those skilled in the art, when using highly gas-phase reactive precursors; however, little or no such concern would exist in the ALD context. After layer formation on the substrate, an annealing process can optionally be performed in situ in the deposition chamber in a nitrogen atmosphere or oxidizing atmosphere. Preferably, the annealing temperature is within the range of about 400°C to about 1000°C. Particularly after ALD, the annealing temperature is more preferably about 400°C to about 750°C, and most preferably about 600°C to about 700°C. The annealing operation is preferably performed for a time period of about 0.5 minute to about 60 minutes and more preferably for a time period of about 1 minute to about 10 minutes. One skilled in the art will recognize that such temperatures and time periods may vary. For example, furnace anneals and rapid thermal annealing may be used, and further, such anneals may be performed in one or more annealing steps. As stated above, the use of the complexes and methods of forming films of the present invention are beneficial for a wide variety of thin film applications in semiconductor structures, particularly those using high dielectric materials.
For example, such applications include capacitors such as planar cells, trench cells (e.g., double sidewall trench capacitors), stacked cells (e.g., crown, V-cell, delta cell, multi-fingered, or cylindrical container stacked capacitors), as well as field effect transistor devices. A specific example of where a dielectric layer is formed according to the present invention is a capacitor construction. Exemplary capacitor constructions are described with reference to Figures 1-3. Referring to Figure 1, a semiconductor wafer fragment 10 includes a capacitor construction 25 formed by a method of the present invention. Wafer fragment 10 includes a substrate 12 having a conductive diffusion area 14 formed therein. Substrate 12 can include, for example, monocrystalline silicon. An insulating layer 16, typically borophosphosilicate glass (BPSG), is provided over substrate 12, with a contact opening 18 provided therein to diffusion area 14. A conductive material 20 fills contact opening 18, with material 20 and oxide layer 16 having been planarized as shown. Material 20 might be any suitable conductive material, such as, for example, tungsten or conductively doped polysilicon. Capacitor construction 25 is provided atop layer 16 and plug 20, and electrically connected to node 14 through plug 20. Capacitor construction 25 includes a first capacitor electrode 26, which has been provided and patterned over node 20. Exemplary materials include conductively doped polysilicon, Pt, Ir, Rh, Ru, RuO2, IrO2, and RhO2. A capacitor dielectric layer 28 is provided over first capacitor electrode 26. The materials of the present invention can be used to form the capacitor dielectric layer 28. Preferably, if first capacitor electrode 26 includes polysilicon, a surface of the polysilicon is cleaned by an in situ HF dip prior to deposition of the dielectric material. An exemplary thickness for layer 28 in accordance with 256 Mb integration is 100 Angstroms.
A diffusion barrier layer 30 is provided over dielectric layer 28. Diffusion barrier layer 30 includes conductive materials such as TiN, TaN, metal silicide, or metal silicide-nitride, and can be provided by CVD, for example, using conditions well known to those of skill in the art. After formation of barrier layer 30, a second capacitor electrode 32 is formed over barrier layer 30 to complete construction of capacitor 25. Second capacitor electrode 32 can include constructions similar to those discussed above regarding the first capacitor electrode 26, and can accordingly include, for example, conductively doped polysilicon. Diffusion barrier layer 30 preferably prevents components (e.g., oxygen) from diffusing from dielectric material 28 into electrode 32. If, for example, oxygen diffuses into a silicon-containing electrode 32, it can undesirably form SiO2, which will significantly reduce the capacitance of capacitor 25. Diffusion barrier layer 30 can also prevent diffusion of silicon from metal electrode 32 to dielectric layer 28. Figure 2 illustrates an alternative embodiment of a capacitor construction. Like numerals from Figure 1 have been utilized where appropriate, with differences indicated by the suffix "a". Wafer fragment 10a includes a capacitor construction 25a differing from the construction 25 of Figure 1 in provision of a barrier layer 30a between first electrode 26 and dielectric layer 28, rather than between dielectric layer 28 and second capacitor electrode 32. Barrier layer 30a can include constructions identical to those discussed above with reference to Figure 1. Figure 3 illustrates yet another alternative embodiment of a capacitor construction. Like numerals from Figure 1 are utilized where appropriate, with differences being indicated by the suffix "b" or by different numerals.
Wafer fragment 10b includes a capacitor construction 25b having the first and second capacitor plates 26 and 32, respectively, of the first described embodiment. However, wafer fragment 10b differs from wafer fragment 10 of Figure 1 in that wafer fragment 10b includes a second barrier layer 40 in addition to the barrier layer 30. Barrier layer 40 is provided between first capacitor electrode 26 and dielectric layer 28, whereas barrier layer 30 is between second capacitor electrode 32 and dielectric layer 28. Barrier layer 40 can be formed by methods identical to those discussed above with reference to Figure 1 for formation of the barrier layer 30. In the embodiments of Figures 1-3, the barrier layers are shown and described as being distinct layers separate from the capacitor electrodes. It is to be understood, however, that the barrier layers can include conductive materials and can accordingly, in such embodiments, be understood to include at least a portion of the capacitor electrodes. In particular embodiments an entirety of a capacitor electrode can include conductive barrier layer materials. A system that can be used to perform vapor deposition processes (chemical vapor deposition or atomic layer deposition) of the present invention is shown in Figure 4. The system includes an enclosed vapor deposition chamber 110, in which a vacuum may be created using turbo pump 112 and backing pump 114. One or more substrates 116 (e.g., semiconductor substrates or substrate assemblies) are positioned in chamber 110. A constant nominal temperature is established for substrate 116, which can vary depending on the process used. Substrate 116 may be heated, for example, by an electrical resistance heater 118 on which substrate 116 is mounted. Other known methods of heating the substrate may also be utilized. In this process, precursor compounds 160 (e.g., a refractory metal precursor compound and an ether) are stored in vessels 162.
The precursor compounds are vaporized and separately fed along lines 164 and 166 to the deposition chamber 110 using, for example, an inert carrier gas 168. A reaction gas 170 may be supplied along line 172 as needed. Also, a purge gas 174, which is often the same as the inert carrier gas 168, may be supplied along line 176 as needed. As shown, a series of valves 180-185 are opened and closed as required. The following examples are offered to further illustrate the various specific and preferred embodiments and techniques. It should be understood, however, that many variations and modifications may be made while remaining within the scope of the present invention, so the scope of the invention is not intended to be limited by the examples. Unless specified otherwise, all percentages shown in the examples are percentages by weight.
EXAMPLES
Example 1. Pulsed Chemical Vapor Deposition of TiO2
A chamber of the configuration shown in Figure 4 was set up with pneumatic valves under computer control to pulse the valves open in a sequential manner. Two reservoirs connected to the chamber contained Ti(NMe2)4 (Strem Chemical, Newburyport, MA) and isopropyl alcohol (General Chemical, Parsippany, NJ). The substrate was a silicon wafer having doped poly-silicon as a top layer and was maintained at 220°C for the deposition. Each cycle involved a 5-second pulse of Ti(NMe2)4 and a 5-second pulse of isopropyl alcohol, each separated by a 5-second purge with argon and a 5-second pump down under dynamic vacuum. The precursors were introduced without helium carrier gas, using only a mass flow controller downstream of the isopropyl alcohol reservoir set at 50 sccm. After 400 cycles a TiO2 film 1750 Å thick was obtained. The film contained only titanium and oxygen based on x-ray photoelectron spectroscopy (XPS) analysis, and had no detectable nitrogen or carbon.
X-ray diffraction analysis of the film revealed that the anatase crystal phase had been formed as-deposited.
Example 2. Atomic Layer Deposition of HfO2
A chamber of the configuration shown in Figure 4 was set up with pneumatic valves under computer control to pulse the valves open in a sequential manner. Two reservoirs connected to the chamber contained Hf(NMe2)4 (Strem Chemical, Newburyport, MA) and isopropyl alcohol (General Chemical, Parsippany, NJ). The Hf(NMe2)4 precursor was heated to 40°C while the isopropyl alcohol remained at ambient temperature. The substrate was a silicon wafer having doped poly-silicon as a top layer and was maintained at 215°C for the deposition. Each cycle involved a 2-second pulse of Hf(NMe2)4 and a 1-second pulse of isopropyl alcohol, each separated by a 5-second purge with argon and a 5-second pump down under dynamic vacuum. The precursors were introduced without helium carrier gas, using only a mass flow controller downstream of the isopropyl alcohol reservoir set at 25 sccm. After 400 cycles a HfO2 film 250 Å thick was obtained. The film contained only hafnium and oxygen based on x-ray photoelectron spectroscopy (XPS) analysis, and had no detectable nitrogen or carbon within the HfO2 layer. X-ray diffraction analysis revealed an amorphous film had been formed as-deposited, but after a 600°C rapid thermal process (RTP) under nitrogen for 1 min the film was crystalline HfO2. The complete disclosures of the patents, patent documents, and publications cited herein are incorporated by reference in their entirety as if each were individually incorporated. Various modifications and alterations to this invention will become apparent to those skilled in the art without departing from the scope and spirit of this invention.
It should be understood that this invention is not intended to be unduly limited by the illustrative embodiments and examples set forth herein and that such examples and embodiments are presented by way of example only with the scope of the invention intended to be limited only by the claims set forth herein as follows.
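As an arithmetic check on Examples 1 and 2 above, the average growth per cycle follows directly from the reported film thickness and cycle count. The helper function below is illustrative only; the values come from the examples themselves.

```python
# Average growth per cycle from the reported film thickness and cycle count.
def growth_per_cycle(thickness_angstroms, cycles):
    return thickness_angstroms / cycles

# Example 1 (pulsed CVD of TiO2): 1750 A after 400 cycles.
tio2_rate = growth_per_cycle(1750, 400)   # 4.375 A/cycle
# Example 2 (ALD of HfO2): 250 A after 400 cycles.
hfo2_rate = growth_per_cycle(250, 400)    # 0.625 A/cycle
```

The HfO2 rate falls within the 0.2 to 3.0 Angstroms-per-cycle range quoted earlier for self-limiting ALD growth, while the TiO2 rate exceeds it, consistent with Example 1 being a pulsed CVD rather than a strictly self-limiting ALD process.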
A method and system for managing one or more thermal policies of a portable computing device (PCD) includes monitoring temperature of the portable computing device with internal thermal sensors and external thermal sensors. If a change in temperature has been detected by at least one thermal sensor, then a thermal policy manager may increase the frequency at which temperature readings are detected by the thermal sensors. The thermal policy manager may also determine if a current temperature of the portable computing device as detected by one or more of the thermal sensors falls within one or more predetermined thermal states. Each thermal state may be assigned a unique set of thermal mitigation techniques. Each set of thermal mitigation techniques may be different from one another. The sets of thermal mitigation techniques may differ according to the quantity of techniques and impacts on performance of the PCD.
A method (600A) for managing one or more thermal policies of a portable computing device, comprising: determining (620) if the portable computing device has achieved a first predetermined thermal state (310) based on a detected temperature of the portable computing device reaching a first temperature; if the portable computing device has achieved the first predetermined thermal state, then initiating (625) one or more first thermal mitigation techniques in order to reduce the temperature of the portable computing device; determining (635) if the portable computing device has achieved a second predetermined thermal state (315) based on a detected second temperature of the portable computing device, wherein the first temperature is less than the second temperature; and if the portable computing device has achieved the second predetermined thermal state, then increasing a frequency at which temperature readings for the portable computing device are detected and initiating (640) one or more second thermal mitigation techniques, in order to reduce the temperature of the portable computing device, wherein the one or more second thermal mitigation techniques are more aggressive than the one or more first thermal mitigation techniques; the method further comprising: detecting a change in temperature of the portable computing device by a thermal sensor (157A1, 157A2, 157A3, 157A4, 157A5, 157B1, 157B2); and entering into the second predetermined thermal state, thus increasing the frequency at which temperature readings are detected and initiating the one or more second thermal mitigation techniques, based upon a magnitude of the change in temperature over a certain amount of time exceeding a threshold, even though the detected temperature of the portable computing device does not exceed the second temperature. The method of claim 1, further comprising monitoring temperature of the portable computing device with at least one of an internal on-chip thermal sensor (157A1, 157A2, 157A3, 157A4, 157A5)
and an external off-chip thermal sensor (157B1, 157B2). The method of claim 1, further comprising determining if one or more of the mitigation techniques has been successful in lowering the temperature of the portable computing device. The method of claim 1, wherein the thermal sensor is positioned adjacent to hardware and on a same surface with the hardware within the portable computing device, the method further comprising assigning one or more thermal mitigation techniques to the hardware based on an association between the thermal sensor and the hardware. A computer system for managing one or more thermal policies of a portable computing device, the computer system comprising: means (101A, 101B) for determining if the portable computing device has achieved a first predetermined thermal state based on a detected temperature of the portable computing device reaching a first temperature; means (101A, 101B) for initiating one or more first thermal mitigation techniques in order to reduce the temperature of the portable computing device if the portable computing device has achieved the first predetermined thermal state; means (101A, 101B) for determining if the portable computing device has achieved a second predetermined thermal state based on a detected second temperature of the portable computing device, wherein the first temperature is less than the second temperature; and means (101A, 101B) for increasing a frequency at which temperature readings for the portable computing device are detected and means for initiating one or more second thermal mitigation techniques if the portable computing device has achieved the second predetermined thermal state in order to reduce the temperature of the portable computing device, wherein the one or more second thermal mitigation techniques are more aggressive than the one or more first thermal mitigation techniques; the computer system further comprising: means for detecting a change in temperature of the portable computing device by a thermal
sensor (157A, 157B); and means for entering into the second predetermined thermal state, thus increasing the frequency at which temperature readings are detected and initiating the one or more second thermal mitigation techniques, based upon a magnitude of the change in temperature over a certain amount of time exceeding a threshold, even though the detected temperature of the portable computing device does not exceed the second temperature. The computer system of claim 5, further comprising means for monitoring temperature of the portable computing device with at least one of an internal on-chip thermal sensor (157A1, 157A2, 157A3, 157A4, 157A5) and an external off-chip thermal sensor (157B1, 157B2). The computer system of claim 5, further comprising means for determining if one or more of the mitigation techniques has been successful in lowering the temperature of the portable computing device. The computer system of claim 5, further comprising means for assigning one or more thermal mitigation techniques to hardware based on an association between the thermal sensor and the hardware, the thermal sensor being positioned adjacent to the hardware and on a same surface with the hardware within the portable computing device. A computer program comprising instructions adapted to implement a method according to any of claims 1 to 4 when the program is executed by a computer system according to a respective one of claims 5 to 8.
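The two-state escalation logic recited in the claims can be sketched as a small decision function: enter the first state at a first temperature, and enter the second state either at a higher second temperature or when the rate of temperature change exceeds a threshold, in which case the sampling frequency also increases. The Python sketch below is a hypothetical illustration; the temperature thresholds, ramp limit, and sampling periods are assumptions, not values from the disclosure.

```python
# Hypothetical sketch of the claimed escalation logic. State names and
# all numeric thresholds are illustrative assumptions.
NORMAL, STATE_1, STATE_2 = 0, 1, 2

def thermal_state(temp_C, delta_C, dt_s,
                  t1=70.0, t2=85.0, ramp_C_per_s=2.0):
    """Return the thermal state for a temperature reading and its recent change.

    The second state is entered either when the second temperature t2 is
    reached, or when the magnitude of change over time exceeds the ramp
    threshold, even if the reading itself is still below t2.
    """
    if temp_C >= t2 or (dt_s > 0 and delta_C / dt_s > ramp_C_per_s):
        return STATE_2   # aggressive mitigation techniques apply
    if temp_C >= t1:
        return STATE_1   # first, milder mitigation techniques apply
    return NORMAL

def sampling_period_s(state, base_s=1.0):
    """Sample temperature more frequently in the second thermal state."""
    return base_s / 4 if state == STATE_2 else base_s
```

A thermal policy manager would call `thermal_state` on each reading and dispatch the mitigation set assigned to the returned state; the rate-based branch models the claim's entry into the second state on a fast temperature ramp alone.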
CROSS-REFERENCE TO RELATED APPLICATIONS
Priority under 35 U.S.C. §119(e) is claimed to the U.S. provisional application entitled, "METHOD AND SYSTEM FOR MANAGING THERMAL POLICIES OF A PORTABLE COMPUTING DEVICE," filed on January 6, 2011 and assigned application serial number 61/430,261. The entire contents of this application are hereby incorporated by reference.
DESCRIPTION OF THE RELATED ART
Portable computing devices (PCDs) are becoming necessities for people on personal and professional levels. These devices may include cellular telephones, portable digital assistants (PDAs), portable game consoles, palmtop computers, and other portable electronic devices. One unique aspect of PCDs is that they typically do not have active cooling devices, like fans, which are often found in larger computing devices like laptop and desktop computers. Instead of using fans, PCDs may rely on the spatial arrangement of electronic packaging so that two or more active, heat producing devices are not positioned in close proximity to one another. When two or more heat producing devices are not placed in close proximity to one another, their operation usually does not negatively impact each other or any other electronics that may surround them. Many PCDs may also rely on passive cooling devices, such as heat sinks, to manage thermal energy among the electronics forming a respective PCD. However, the spatial arrangement of electronic packaging and passive cooling devices, like heat sinks, is sometimes not adequate to prevent a PCD from reaching critical temperatures. Such critical temperatures may cause permanent damage to the electronics within a respective PCD. Currently, when a PCD approaches a critical temperature, the operating system is designed to shut down most of the electronics generating the thermal energy in order to cool the PCD.
While shutting down electronics may be effective to avoid critical temperatures that may cause permanent damage, such drastic measures directly impact performance of the PCD and may render a PCD useless with respect to its functionality when such measures are taken. Accordingly, what is needed in the art is a method and system for managing one or more thermal policies that allows a PCD to cool its electronics while maintaining performance and functionality for an end user.
SUMMARY OF THE DISCLOSURE
A method and system for managing one or more thermal policies of a portable computing device (PCD) includes monitoring temperature of the portable computing device with internal thermal sensors and external thermal sensors. If a change in temperature has been detected by at least one thermal sensor, then a thermal policy manager may increase the frequency at which temperature readings are detected by the thermal sensors. The thermal policy manager may also determine if a current temperature of the portable computing device as detected by one or more of the thermal sensors falls within one or more predetermined thermal states. Each thermal state may be assigned a unique set of thermal mitigation techniques. Each set of thermal mitigation techniques may be different from one another. The sets of thermal mitigation techniques may differ according to the quantity of techniques and impacts on performance of the PCD.
BRIEF DESCRIPTION OF THE DRAWINGS
In the figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all figures.
FIG.
1 is a functional block diagram illustrating an embodiment of a portable computing device (PCD);
FIG. 2A is a functional block diagram illustrating an exemplary spatial arrangement of hardware for a chip illustrated in FIG. 1;
FIG. 2B is a schematic diagram illustrating an exemplary software architecture of the PCD of FIG. 1 for supporting dynamic voltage and frequency scaling ("DVFS") algorithms;
FIG. 2C is a first table listing exemplary frequency values for two DVFS algorithms;
FIG. 2D is a second table listing exemplary frequency and voltage pairs for two DVFS algorithms;
FIG. 3 is an exemplary state diagram that illustrates various thermal policy states that are tracked by the thermal policy manager in the PCD of FIG. 1;
FIG. 4 is a diagram illustrating exemplary thermal mitigation techniques that may be applied or ordered by the thermal policy manager;
FIG. 5 is a diagram illustrating an exemplary graph of temperature versus time and corresponding thermal policy states;
FIGs. 6A-6B are logical flowcharts illustrating a method for managing one or more thermal policies;
FIG. 7 is a logical flowchart illustrating a sub-method or subroutine for applying DVFS thermal mitigation techniques;
FIG. 8A is a schematic for a four-core multicore processor and different workloads that may be spatially managed with the multicore processor; and
FIG. 8B is a logical flowchart illustrating a sub-method or subroutine for applying spatial workload shifting thermal mitigation techniques.
DETAILED DESCRIPTION
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches.
In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed. The term "content" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, "content" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed. As used in this description, the terms "component," "database," "module," "system," and the like are intended to refer to a computer-related entity: either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device itself may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes, such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal). In this description, the terms "communication device," "wireless device," "wireless telephone," "wireless communication device," and "wireless handset" are used interchangeably.
With the advent of third generation ("3G") and fourth generation ("4G") wireless technology, greater bandwidth availability has enabled more portable computing devices with a greater variety of wireless capabilities. In this description, the term "portable computing device" ("PCD") is used to describe any device operating on a limited capacity power supply, such as a battery. Although battery operated PCDs have been in use for decades, technological advances in rechargeable batteries coupled with the advent of third generation ("3G") wireless technology have enabled numerous PCDs with multiple capabilities. Therefore, a PCD may be a cellular telephone, a satellite telephone, a pager, a PDA, a smartphone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, and a laptop computer with a wireless connection, among others.
FIG. 1: THERMAL POLICY MANAGEMENT ELEMENTS OF PCD 100
Referring to FIG. 1, this figure is a functional block diagram of an exemplary, non-limiting aspect of a PCD 100 in the form of a wireless telephone for implementing methods and systems for monitoring thermal conditions and managing thermal policies. As shown, the PCD 100 includes an on-chip system 102 that includes a multi-core central processing unit ("CPU") 110 and an analog signal processor 126 that are coupled together. The CPU 110 may comprise a zeroth core 222, a first core 224, and an Nth core 230, as understood by one of ordinary skill in the art. Instead of a CPU 110, a digital signal processor ("DSP") may also be employed, as understood by one of ordinary skill in the art. The CPU 110 may also be coupled to one or more internal, on-chip thermal sensors 157A as well as one or more external, off-chip thermal sensors 157B.
The on-chip thermal sensors 157A may comprise one or more proportional to absolute temperature ("PTAT") temperature sensors that are based on vertical PNP structures and are usually dedicated to complementary metal oxide semiconductor ("CMOS") very large-scale integration ("VLSI") circuits. The off-chip thermal sensors 157B may comprise one or more thermistors. The thermal sensors 157 may produce a voltage drop that is converted to digital signals with an analog-to-digital converter ("ADC") controller 103 (see FIG. 2A). However, other types of thermal sensors 157 may be employed without departing from the scope of the invention. The thermal sensors 157, in addition to being controlled and monitored by an ADC controller 103, may also be controlled and monitored by one or more thermal policy manager module(s) 101. The thermal policy manager module(s) 101 may comprise software which is executed by the CPU 110. However, the thermal policy manager module(s) 101 may also be formed from hardware and/or firmware without departing from the scope of the invention. In general, the thermal policy manager module(s) 101 may be responsible for monitoring and applying thermal policies that include one or more thermal mitigation techniques that may help a PCD 100 manage thermal conditions and/or thermal loads and avoid experiencing adverse thermal conditions, such as, for example, reaching critical temperatures, while maintaining a high level of functionality. FIG. 1 also shows that the PCD 100 may include a monitor module 114. The monitor module 114 communicates with multiple operational sensors (e.g., thermal sensors 157) distributed throughout the on-chip system 102 and with the CPU 110 of the PCD 100 as well as with the thermal policy manager module 101.
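The thermistor reading described above (a voltage drop digitized by the ADC controller) can be converted to a temperature with the standard NTC beta-parameter model. The sketch below is a hypothetical illustration: the divider topology, the 10 kΩ/3950 K component values, and the 12-bit ADC resolution are all assumptions, not values from the disclosure.

```python
import math

# Illustrative NTC thermistor conversion (beta-parameter model).
# All component values below are assumptions for illustration.
R0, T0_K, BETA = 10_000.0, 298.15, 3950.0   # 10 kOhm at 25 C, typical beta
R_FIXED, ADC_MAX = 10_000.0, 4095           # divider resistor, 12-bit ADC

def thermistor_temp_C(adc_code):
    """Convert an ADC code to temperature in Celsius.

    Assumes a voltage divider with the NTC on the bottom leg, so that
    Vout / Vref = R_ntc / (R_ntc + R_FIXED).
    """
    ratio = adc_code / ADC_MAX
    r_ntc = R_FIXED * ratio / (1.0 - ratio)          # back out the NTC resistance
    inv_t = 1.0 / T0_K + math.log(r_ntc / R0) / BETA  # beta-model 1/T
    return 1.0 / inv_t - 273.15
```

A thermal policy manager polling such a sensor would feed the returned temperature into its thermal-state decision; the PTAT on-chip sensors would use a different (linear) conversion.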
The thermal policy manager module 101 may work with the monitor module 114 to identify adverse thermal conditions and apply thermal policies that include one or more thermal mitigation techniques, as will be described in further detail below.

In a particular aspect, one or more of the method steps described herein may be implemented by executable instructions and parameters stored in the memory 112 that form the one or more thermal policy manager module(s) 101. These instructions that form the thermal policy manager module(s) 101 may be executed by the CPU 110, the analog signal processor 126, or another processor, in addition to the ADC controller 103, to perform the methods described herein. Further, the processors 110 and 126, the memory 112, the instructions stored therein, or a combination thereof may serve as a means for performing one or more of the method steps described herein.

FIG. 1: OTHER ELEMENTS OF PCD 100

As illustrated in FIG. 1, a display controller 128 and a touchscreen controller 130 are coupled to the CPU 110. A touchscreen display 132 external to the on-chip system 102 is coupled to the display controller 128 and the touchscreen controller 130.

FIG. 1 further illustrates that the PCD 100 includes a video decoder 134. The video decoder 134 is coupled to the multicore central processing unit ("CPU") 110. A video amplifier 136 is coupled to the video decoder 134 and the touchscreen display 132. A video port 138 is coupled to the video amplifier 136. As depicted in FIG. 1, a universal serial bus ("USB") controller 140 is coupled to the CPU 110. Also, a USB port 142 is coupled to the USB controller 140. A memory 112 and a subscriber identity module ("SIM") card 146 may also be coupled to the CPU 110. Further, as shown in FIG. 1, a digital camera 148 may be coupled to the CPU 110.
In an exemplary aspect, the digital camera 148 is a charge-coupled device ("CCD") camera or a complementary metal-oxide semiconductor ("CMOS") camera.

As further illustrated in FIG. 1, a stereo audio CODEC 150 may be coupled to the analog signal processor 126. Moreover, an audio amplifier 152 may be coupled to the stereo audio CODEC 150. In an exemplary aspect, a first stereo speaker 154 and a second stereo speaker 156 are coupled to the audio amplifier 152. FIG. 1 shows that a microphone amplifier 158 may also be coupled to the stereo audio CODEC 150. Additionally, a microphone 160 may be coupled to the microphone amplifier 158. In a particular aspect, a frequency modulation ("FM") radio tuner 162 may be coupled to the stereo audio CODEC 150. Also, an FM antenna 164 is coupled to the FM radio tuner 162. Further, stereo headphones 166 may be coupled to the stereo audio CODEC 150.

FIG. 1 further indicates that a radio frequency ("RF") transceiver 168 may be coupled to the analog signal processor 126. An RF switch 170 may be coupled to the RF transceiver 168 and an RF antenna 172. As shown in FIG. 1, a keypad 174 may be coupled to the analog signal processor 126. Also, a mono headset with a microphone 176 may be coupled to the analog signal processor 126. Further, a vibrator device 178 may be coupled to the analog signal processor 126. FIG. 1 also shows that a power supply 180, for example a battery, is coupled to the on-chip system 102. In a particular aspect, the power supply 180 includes a rechargeable DC battery or a DC power supply that is derived from an alternating current ("AC") to DC transformer that is connected to an AC power source.

As depicted in FIG.
1, the touchscreen display 132, the video port 138, the USB port 142, the camera 148, the first stereo speaker 154, the second stereo speaker 156, the microphone 160, the FM antenna 164, the stereo headphones 166, the RF switch 170, the RF antenna 172, the keypad 174, the mono headset 176, the vibrator 178, the thermal sensors 157B, and the power supply 180 are external to the on-chip system 102. However, it should be understood that the monitor module 114 may also receive one or more indications or signals from one or more of these external devices by way of the analog signal processor 126 and the CPU 110 to aid in the real time management of the resources operable on the PCD 100.

FIG. 2A is a functional block diagram illustrating an exemplary spatial arrangement of hardware for the chip 102 illustrated in FIG. 1. According to this exemplary embodiment, the applications CPU 110 is positioned on a far left side region of the chip 102 while the modem CPU 168/126 is positioned on a far right side region of the chip 102. The applications CPU 110 may comprise a multicore processor that includes a zeroth core 222, a first core 224, and an Nth core 230.

The applications CPU 110 may be executing a thermal policy manager module 101A (when embodied in software) or it may include a thermal policy manager module 101B (when embodied in hardware and/or firmware). The applications CPU 110 is further illustrated to include an operating system ("O/S") module 207 and a monitor module 114. Further details about the monitor module 114 will be described below in connection with FIG. 2B.

The applications CPU 110 may be coupled to one or more phase locked loops ("PLLs") 209A, 209B, which are positioned adjacent to the applications CPU 110 and in the left side region of the chip 102.
Adjacent to the PLLs 209A, 209B and below the applications CPU 110 may be an analog-to-digital ("ADC") controller 103 that may include its own thermal policy manager 101B that works in conjunction with the main thermal policy manager module 101A of the applications CPU 110.

The thermal policy manager 101B of the ADC controller 103 may be responsible for monitoring and tracking multiple thermal sensors 157 that may be provided "on-chip" 102 and "off-chip" 102. The on-chip or internal thermal sensors 157A may be positioned at various locations to monitor the thermal conditions of the PCD 100.

For example, a first internal thermal sensor 157A1 may be positioned in a top center region of the chip 102 between the applications CPU 110 and the modem CPU 168/126 and adjacent to internal memory 112. A second internal thermal sensor 157A2 may be positioned below the modem CPU 168/126 on a right side region of the chip 102. This second internal thermal sensor 157A2 may also be positioned between an advanced reduced instruction set computer ("RISC") instruction set machine ("ARM") 177 and a first graphics processor 134A. A digital-to-analog controller ("DAC") 173 may be positioned between the second internal thermal sensor 157A2 and the modem CPU 168/126.

A third internal thermal sensor 157A3 may be positioned between a second graphics processor 134B and a third graphics processor 134C in a far right region of the chip 102. A fourth internal thermal sensor 157A4 may be positioned in a far right region of the chip 102 and beneath a fourth graphics processor 134D. And a fifth internal thermal sensor 157A5 may be positioned in a far left region of the chip 102 and adjacent to the PLLs 209 and the ADC controller 103.

One or more external thermal sensors 157B may also be coupled to the ADC controller 103.
The first external thermal sensor 157B1 may be positioned off-chip and adjacent to a top right quadrant of the chip 102 that may include the modem CPU 168/126, the ARM 177, and the DAC 173. A second external thermal sensor 157B2 may be positioned off-chip and adjacent to a lower right quadrant of the chip 102 that may include the third and fourth graphics processors 134C, 134D.

One of ordinary skill in the art will recognize that various other spatial arrangements of the hardware illustrated in FIG. 2A (or other hardware resources) may be provided without departing from the scope of the invention. FIG. 2A illustrates one exemplary spatial arrangement and how the main thermal policy manager module 101A and the ADC controller 103 with its thermal policy manager 101B may manage thermal states that are a function of the exemplary spatial arrangement illustrated in FIG. 2A.

Thermal sensors 157 may be positioned adjacent to hardware, such as the CPU 110, and on a same surface with the hardware within the portable computing device 100. For example, see the first internal thermal sensor 157A1. The thermal policy manager 101A may assign one or more specific thermal mitigation techniques unique to the hardware associated with a particular thermal sensor 157, such as the CPU 110 corresponding to the first internal thermal sensor 157A1. In one exemplary embodiment, the thermal mitigation techniques assigned to the CPU 110 and its corresponding thermal sensor 157A1 may be different compared to the thermal mitigation techniques assigned to the third graphics processor 134C associated with the third thermal sensor 157A3. In other exemplary embodiments, the thermal mitigation techniques applied to hardware may be uniform or the same across the whole portable computing device 100.

FIG. 2B is a schematic diagram illustrating an exemplary software architecture of the PCD 100 of FIG. 1 and FIG. 2A for supporting dynamic voltage and frequency scaling ("DVFS") algorithms.
DVFS algorithms may form or be part of at least one thermal mitigation technique that may be triggered by the thermal policy manager 101 when certain thermal conditions are met, as will be described in detail below.

As illustrated in FIG. 2B, the CPU or digital signal processor 110 is coupled to the memory 112 via a bus 211. The CPU 110, as noted above, is a multiple-core processor having N core processors. That is, the CPU 110 includes a first core 222, a second core 224, and an Nth core 230. As is known to one of ordinary skill in the art, each of the first core 222, the second core 224, and the Nth core 230 is available for supporting a dedicated application or program. Alternatively, one or more applications or programs can be distributed for processing across two or more of the available cores.

The CPU 110 may receive commands from the thermal policy manager module(s) 101, which may comprise software and/or hardware. If embodied as software, the thermal policy manager module 101 comprises instructions that are executed by the CPU 110 and that issue commands to other application programs being executed by the CPU 110 and other processors.

The first core 222, the second core 224 through to the Nth core 230 of the CPU 110 may be integrated on a single integrated circuit die, or they may be integrated or coupled on separate dies in a multiple-circuit package. Designers may couple the first core 222, the second core 224 through to the Nth core 230 via one or more shared caches, and they may implement message or instruction passing via network topologies such as bus, ring, mesh, and crossbar topologies.

In the illustrated embodiment, the RF transceiver 168 is implemented via digital circuit elements and includes at least one processor such as the core processor 210 (labeled "Core").
In this digital implementation, the RF transceiver 168 is coupled to the memory 112 via bus 213.

Each of the bus 211 and the bus 213 may include multiple communication paths via one or more wired or wireless connections, as is known in the art. The bus 211 and the bus 213 may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the bus 211 and the bus 213 may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.

When the logic used by the PCD 100 is implemented in software, as is shown in FIG. 2B, it should be noted that one or more of the startup logic 250, the management logic 260, the dynamic voltage and frequency scaling ("DVFS") interface logic 270, the applications in the application store 280, and portions of the file system 290 may be stored on any computer-readable medium for use by or in connection with any computer-related system or method.

As understood by one of ordinary skill in the art, the demand for processors that provide high performance and low power consumption has led to the use of various power management techniques, such as DVFS, in processor designs. DVFS enables trade-offs between power consumption and performance. The processors 110 and 126 (FIG. 1) may be designed to take advantage of DVFS by allowing the clock frequency of each processor to be adjusted with a corresponding adjustment in voltage. A reduction in operating voltage usually results in a proportional savings in power consumed.
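The power benefit described above follows from the conventional CMOS dynamic-power relation P = C * V^2 * f, a textbook model rather than a formula taken from this description. The sketch below is illustrative only; the capacitance and the two operating points are assumed values.

```python
# Illustrative sketch (not from the specification): the conventional CMOS
# dynamic-power model P = C * V^2 * f, showing why lowering voltage together
# with frequency (DVFS) saves more power than lowering frequency alone.
def dynamic_power(capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    """Approximate dynamic switching power in watts."""
    return capacitance_f * voltage_v ** 2 * freq_hz

nominal = dynamic_power(1e-9, 1.3, 600e6)  # assumed full-performance point
scaled = dynamic_power(1e-9, 1.0, 450e6)   # assumed reduced frequency/voltage pair
print(f"power saved: {100 * (1 - scaled / nominal):.0f}%")  # prints: power saved: 56%
```

Because voltage enters the model quadratically, a 25% frequency reduction paired with a voltage reduction cuts dynamic power by more than half in this sketch.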
One main issue for the DVFS enabled processors 110 and 126 is how to control the balance between performance and power savings.

In the context of this document, a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program and data for use by or in connection with a computer-related system or method. The various logic elements and data stores may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "computer-readable medium" can be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.

The computer-readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical).
Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.

In hardware embodiments, the startup logic 250, the management logic 260, and perhaps the DVFS interface logic 270 may be implemented with any or a combination of the following technologies, which are each well known in the art: discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.

The memory 112 is a non-volatile data storage device such as a flash memory or a solid-state memory device. Although depicted as a single device, the memory 112 may be a distributed memory device with separate data stores coupled to the digital signal processor and/or the core 210 (or additional processor cores) in the RF transceiver 168.

The startup logic 250 includes one or more executable instructions for selectively identifying, loading, and executing a select program for managing or controlling the performance of one or more of the available cores, such as the first core 222, the second core 224 through to the Nth core 230. A select program can be found in the program store 296 of the embedded file system 290 and is defined by a specific combination of a performance scaling algorithm 297 and a set of parameters 298.
The select program, when executed by one or more of the core processors in the CPU 110 and the core 210 in the RF transceiver 168, may operate in accordance with one or more signals provided by the monitor module 114 in combination with control signals provided by the one or more thermal policy manager module(s) 101 to scale the performance of the respective processor core. In this regard, the monitor module 114 may provide one or more indicators of events, processes, applications, resource status conditions, and elapsed time, as well as temperature as received from the thermal policy manager module 101.

The management logic 260 includes one or more executable instructions for terminating an operative performance scaling program on one or more of the respective processor cores, as well as for selectively identifying, loading, and executing a more suitable replacement program for managing or controlling the performance of one or more of the available cores. The management logic 260 is arranged to perform these functions at run time, or while the PCD 100 is powered and in use by an operator of the device. A replacement program can be found in the program store 296 of the embedded file system 290 and is defined by a specific combination of a performance scaling algorithm 297 and a set of parameters 298.

The replacement program, when executed by one or more of the core processors in the digital signal processor or the core 210 in the RF transceiver 168, may operate in accordance with one or more signals provided by the monitor module 114 or one or more signals provided on the respective control inputs of the various processor cores to scale the performance of the respective processor core.
In this regard, the monitor module 114 may provide one or more indicators of events, processes, applications, resource status conditions, elapsed time, temperature, etc., in response to control signals originating from the thermal policy manager 101.

The DVFS interface logic or interface logic 270 includes one or more executable instructions for presenting, managing, and interacting with external inputs to observe, configure, or otherwise update information stored in the embedded file system 290. In one embodiment, the DVFS interface logic 270 may operate in conjunction with manufacturer inputs received via the USB port 142. These inputs may include one or more programs to be deleted from or added to the program store 296. Alternatively, the inputs may include edits or changes to one or more of the programs in the program store 296. Moreover, the inputs may identify one or more changes to, or entire replacements of, one or both of the startup logic 250 and the management logic 260. By way of example, the inputs may include a change to the management logic 260 that instructs the PCD 100 to suspend all performance scaling in the RF transceiver 168 when the received signal power falls below an identified threshold. By way of further example, the inputs may include a change to the management logic 260 that instructs the PCD 100 to apply a desired program when the video codec 134 is active.

The DVFS interface logic 270 enables a manufacturer to controllably configure and adjust an end user's experience under defined operating conditions on the PCD 100. When the memory 112 is a flash memory, one or more of the startup logic 250, the management logic 260, the DVFS interface logic 270, the application programs in the application store 280, or the information in the embedded file system 290 can be edited, replaced, or otherwise modified.
In some embodiments, the DVFS interface logic 270 may permit an end user or operator of the PCD 100 to search, locate, modify, or replace the startup logic 250, the management logic 260, applications in the application store 280, and information in the embedded file system 290. The operator may use the resulting interface to make changes that will be implemented upon the next startup of the PCD 100. Alternatively, the operator may use the resulting interface to make changes that are implemented during run time.

The embedded file system 290 includes a hierarchically arranged DVFS store 292. In this regard, the file system 290 may include a reserved section of its total file system capacity for the storage of information for the configuration and management of the various parameters 298 and performance scaling algorithms 297 used by the PCD 100. As shown in FIG. 2B, the DVFS store 292 includes a core store 294, which includes a program store 296, which includes one or more DVFS programs. Each program is defined as a combination of a respective performance scaling algorithm and a set of parameters associated with the specific algorithm. As a further example of the hierarchical nature of the DVFS store 292, a particular member of a set of files may be located and identified by the path of \startup\core0\algorithm\parameterset. In this example, a program is identified by the algorithm in combination with the contents of the information stored in the parameter set.
For example, a conventional DVFS algorithm known as "classic" may be identified to manage performance scaling on core0 222 in accordance with the parameters sample rate, samples to increase, and samples to decrease as follows: \startup\core0\classic\SampleRate, with a value of 100, where the sample rate is in MHz; \startup\core0\classic\SamplesToIncrease, with a value of 2, where the samples to increase is an integer; and \startup\core0\classic\SamplesToDecrease, with a value of 1, where the samples to decrease is an integer. That is, the respective filenames define a parameter, and the value of the parameter is identified by the contents of the file.

The algorithm is defined by a periodic sampling of the CPU idle percentage and operates in accordance with a low threshold (% idle) and a high threshold (% idle). If a samples-to-increase threshold comparator indicates for two consecutive samples that performance should be increased, the DVFS algorithm increases performance in accordance with a predetermined clock level adjustment. Conversely, if a samples-to-decrease threshold comparator indicates for one consecutive sample that performance should be decreased, the DVFS algorithm decreases performance in accordance with the predetermined clock level (i.e., frequency) adjustment. As explained above, the processor or core operating voltage may be changed together with changes in the clock frequency.

Alternatively, or additionally, the DVFS store 292 may be arranged such that the search path starts from the most specific with respect to its application (i.e., the processor core, algorithm, and parameter value) and progresses to the least specific with respect to application. In an example embodiment, parameters are defined in the directories \core0, \coreAll, and \default in association with the "classic" performance scaling algorithm. For example, the path \core0\classic\SampleRate applies only to the classic algorithm operating on core0.
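For illustration only, the most-specific-first parameter search and the "classic" algorithm described above might be sketched as follows. The parameter names mirror the \startup\core0\classic\* files; the idle thresholds, the clock step, and the in-memory store layout are assumptions made for the sketch, not details from this description.

```python
# Illustrative sketch (assumed layout and thresholds, not from the
# specification): "classic" DVFS parameters resolved most-specific-first
# along the core0 -> coreAll -> default search path, then used to scale
# a core's clock from periodic CPU idle-percentage samples.
PARAM_STORE = {
    ("core0",   "classic", "SampleRate"): 100,       # applies to core0 only
    ("coreAll", "classic", "SamplesToIncrease"): 2,  # any core running classic
    ("default", "classic", "SamplesToDecrease"): 1,  # fallback; always present
}

def lookup(core: str, algorithm: str, name: str):
    """Return the first parameter found, searching most specific first."""
    for scope in (core, "coreAll", "default"):
        if (scope, algorithm, name) in PARAM_STORE:
            return PARAM_STORE[(scope, algorithm, name)]
    raise KeyError(f"no parameter file for {algorithm}/{name}")

def classic_step(idle_history, clock_mhz, low_idle=20.0, high_idle=70.0,
                 step_mhz=50, core="core0"):
    """One evaluation of the "classic" algorithm over recent idle samples."""
    up_n = lookup(core, "classic", "SamplesToIncrease")    # resolves to 2
    down_n = lookup(core, "classic", "SamplesToDecrease")  # resolves to 1
    # Busy (low idle) for two consecutive samples: raise the clock one step.
    if len(idle_history) >= up_n and all(s < low_idle for s in idle_history[-up_n:]):
        return clock_mhz + step_mhz
    # Idle (high idle) for one sample: lower the clock one step.
    if len(idle_history) >= down_n and all(s > high_idle for s in idle_history[-down_n:]):
        return clock_mhz - step_mhz
    return clock_mhz

print(classic_step([10.0, 5.0], 500))  # -> 550 (two consecutive busy samples)
print(classic_step([80.0], 550))       # -> 500 (one idle sample)
```

Note how the core0 entry overrides any coreAll or default entry for the same parameter, while parameters missing at the core0 level fall through to the broader scopes.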
This most specific application will override all others. The path \coreAll\classic\SampleRate applies to any processor core running the classic algorithm. This application is not as specific as the example path above, but it is more specific than \default\classic\SampleRate, which applies to any processor core running the classic algorithm when no more specific path exists. This default application is the least specific and is used only if no other suitable path exists in the DVFS store 292. The first parameter found will be the one used. The \default location will always have a valid parameter file. The architecture of the individual cores, the architecture of the one or more shared caches, and the mechanism(s) used to pass instructions between the cores, as well as the desired use cases for the PCD 100, are expected to dictate the nature of the various performance scaling algorithms 297 stored in the memory 112.

FIG. 2C is a first table 267 listing exemplary frequency values for three or more different DVFS algorithms that may be selected by the DVFS interface logic 270. These exemplary values demonstrate throttling, in which the activity of one or more processors 110 and/or cores is reduced in order to mitigate thermal loads. According to this exemplary first table 267, each core of the multicore CPU 110 may be assigned specific maximum clock frequency values depending upon the current DVFS algorithm being executed. For the first DVFS algorithm that is listed in the first row of the table 267, Core 0 may be assigned a maximum clock frequency of 600 MHz, while Core 1 may be assigned a maximum clock frequency of 650 MHz, and the Nth core may be assigned a maximum clock frequency of 720 MHz. For the second DVFS algorithm that is listed in the second row of the table 267, Core 0 may be assigned a maximum clock frequency of 550 MHz, while Core 1 is assigned a maximum clock frequency of 600 MHz, and the Nth core may be assigned a maximum clock frequency of 650 MHz.
For the third DVFS algorithm that is listed in the third row of the table 267, Core 0 may be assigned a maximum clock frequency of 450 MHz, while Core 1 is assigned a maximum clock frequency of 500 MHz, and the Nth core may be assigned a maximum clock frequency of 550 MHz. These limits on clock frequency may be selected by the thermal policy manager 101 depending upon the current thermal state of the PCD 100.

FIG. 2D is a second table 277 listing exemplary frequency and voltage pairs for three DVFS algorithms. This table 277, like the first table 267, demonstrates throttling of one or more processors 110 and/or corresponding cores. For the DVFS algorithm listed in the first row of the table 277, Core 0 may be assigned a maximum clock frequency of 600 MHz while its maximum voltage may be limited to 1.3 volts ("V"). Core 1 may be assigned a maximum clock frequency of 500 MHz and a corresponding maximum voltage of 2.0 V. Core N may be assigned a maximum clock frequency of 550 MHz and a corresponding maximum voltage of 2.0 V. For the second DVFS algorithm listed in the second row of the table 277, Core 0 may be assigned a maximum clock frequency of 550 MHz while the maximum voltage is assigned the value of 1.0 V. Core 1 may be assigned a maximum clock frequency of 500 MHz and a corresponding maximum voltage of 1.5 V. For the second row, Core N may be assigned a maximum clock frequency of 500 MHz and a corresponding maximum voltage of 1.9 V.

For the third row, Core 0 may be assigned a maximum clock frequency of 450 MHz while the maximum voltage is assigned the value of 0.9 V, while Core 1 may be assigned a maximum clock frequency of 350 MHz and a corresponding maximum voltage of 1.0 V. Core N may be assigned a maximum clock frequency of 400 MHz and a corresponding maximum voltage of 1.3 V. The thermal policy manager 101 may select the various pairs of frequencies and voltages enumerated in table 277 depending upon the current thermal state of the PCD 100.

FIG.
3 is an exemplary state diagram 300 that illustrates various thermal policy states 305, 310, 315, and 320 that are tracked by the thermal policy manager 101. While only four states are illustrated, one of ordinary skill in the art will recognize that other states beyond these four may be created. Similarly, one of ordinary skill in the art will recognize that fewer states may be employed without departing from the scope of the invention. Further, additional sub-states or sub-policies may be added to each state 305, 310, 315, and 320 as understood by one of ordinary skill in the art.

The first policy state 305 may comprise a "normal" thermal state in which the thermal policy manager 101 only monitors thermal sensors 157 in a routine or ordinary fashion. In this exemplary first and normal state 305, the PCD 100 is usually not in any danger or risk of experiencing an adverse thermal condition, such as reaching critical temperatures that may cause failure of any of the hardware and/or software components. In this exemplary state, the thermal sensors 157 may be detecting or tracking temperatures that are at 50°C or below. However, one of ordinary skill in the art will recognize that other temperature ranges may be established for the first and normal state 305 without departing from the scope of the invention.

The second policy state 310 may comprise a "quality of service" or "QoS" state in which the thermal policy manager 101 may increase the frequency at which thermal sensors 157 are polled or at which the thermal sensors 157 send their temperature status reports to the thermal policy manager 101. Increasing this frequency helps the thermal policy manager 101 compensate for situations in which one or more thermal sensors 157 are not in direct contact with a region which is exhibiting high temperatures.
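For illustration only, the state-dependent polling just described might be sketched as a simple mapping from policy state to sensor polling interval. The interval values below are purely illustrative assumptions; this description does not specify them.

```python
# Illustrative sketch (assumed interval values, not from the specification):
# hotter thermal policy states poll the thermal sensors 157 more frequently.
POLL_INTERVAL_MS = {"normal": 1000, "qos": 250, "severe": 100, "critical": 50}

def next_poll_interval_ms(state: str) -> int:
    """Return how long to wait before the next sensor poll for a given state."""
    return POLL_INTERVAL_MS[state]

print(next_poll_interval_ms("qos"))  # -> 250
```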
The frequency at which temperature readings are received may be adjusted to compensate for thermal constants of different materials that may exist between a high thermal region and a particular thermal sensor 157.

The exemplary second state 310 may be reached or entered into by the thermal policy manager 101 when a significant change of temperature has been detected in the first, normal state 305. The threshold or magnitude of the change in temperature (delta T) which triggers this QoS state 310 may be adjusted or tailored according to a particular PCD 100. Therefore, while a PCD 100 may be operating in the first, normal state 305, depending upon the magnitude of the change in temperature that is detected by one or more thermal sensors 157, the PCD 100 may leave the first, normal state 305 and enter into the second, QoS state 310 as tracked by the thermal policy manager 101.

For example, a PCD 100 may have a first maximum temperature reading from a given thermal sensor 157 of approximately 40°C, and a second reading from the same thermal sensor 157 may show a change in temperature of only 5°C, which takes the maximum temperature being detected to 45°C. While the maximum temperature being detected may be below the established threshold of 50°C for the first, normal state 305, the change in temperature by 5°C within a relatively short time frame may be significant enough for the thermal policy manager 101 to change the state to the second, QoS state 310.

In the second, QoS thermal state 310, the thermal policy manager 101 may request, or it may actually perform, one or more thermal mitigation techniques in order to reduce the thermal load and temperature of the PCD 100. In this particular second thermal state 310, the thermal policy manager 101 is designed to implement or request thermal mitigation techniques that may be barely perceivable by an operator and that may degrade the quality of service provided by the PCD 100 in only a minimal fashion.
The temperature range for this second, QoS thermal state 310 may comprise a range between about 50°C and about 80°C. One of ordinary skill in the art will recognize that other temperature ranges may be established for the second, QoS state 310 and are within the scope of the invention. Further, one of ordinary skill in the art will recognize that other sub-states or sub-policies may be created and used relative to the current set described. As noted previously, the second, QoS state 310 may be triggered based on the magnitude and/or location of the change in temperature and is not necessarily limited to the endpoints of a selected temperature range. Further details about this second, QoS thermal state 310 will be described below in connection with FIG. 4.

The third thermal state 315 may comprise a "severe" state in which the thermal policy manager 101 continues to monitor and/or receives interrupts from thermal sensors 157 while requesting and/or applying more aggressive thermal mitigation techniques relative to the second, QoS state 310 described above. This means that in this state the thermal policy manager 101 is less concerned about quality of service from the perspective of the operator.

In this third thermal state 315, the thermal policy manager 101 is more concerned about mitigating or reducing the thermal load in order to decrease the temperature of the PCD 100. The PCD 100 may have degradations in performance that are readily perceived or observed by an operator in this state 315. The third, severe thermal state 315 and its corresponding thermal mitigation techniques applied or triggered by the thermal policy manager 101 will be described in further detail below in connection with FIG. 4.
The temperature range for this third, severe thermal state 315 may comprise a range from about 80°C to about 100°C.

Similar to the first thermal state 305 and second thermal state 310 discussed above, this third, severe thermal state 315 may be initiated based upon the change in temperature detected by one or more thermal sensors 157 and is not necessarily limited to the temperature range established or mapped for this third thermal state 315. For example, as the arrows in this diagram illustrate, the thermal states may be initiated in sequence or out of sequence, depending upon the magnitude of the change in temperature (delta T) detected over a certain amount of time. This means that the PCD 100 may leave the first, normal thermal state 305 and enter into or initiate the third, severe thermal state 315 based on a change in temperature detected by one or more thermal sensors 157, and vice versa.

Similarly, the PCD 100 may be in the second, QoS thermal state 310 and enter into or initiate the fourth, critical state 320 based on a change in temperature over an amount of time detected by one or more thermal sensors 157, and vice versa. In this exemplary fourth, critical state 320, the thermal policy manager 101 applies or triggers as many, and as aggressive, thermal mitigation techniques as possible in order to avoid reaching one or more critical temperatures that may cause permanent damage to the electronics contained within the PCD 100.

This fourth, critical thermal state 320 may be similar to conventional techniques that are designed to eliminate functionality and operation of a PCD 100 in order to avoid critical temperatures. The fourth thermal state 320 may comprise a "critical" state in which the thermal policy manager 101 applies or triggers the shutting down of non-essential hardware and/or software.
The temperature range for this fourth thermal state may include temperatures of about 100°C and above. The fourth, critical thermal state 320 will be described in further detail below in connection with FIG. 4.

The thermal policy management system is not limited to the four thermal states 305, 310, 315, and 320 illustrated in FIG. 3. Depending upon a particular PCD 100, additional or fewer thermal states may be provided without departing from the scope of the invention. That is, one of ordinary skill in the art recognizes that additional thermal states may improve functionality and operation of a particular PCD 100, while in other situations fewer thermal states may be preferred for a particular PCD 100 that has its own unique hardware and/or software.

FIG. 4 is a diagram illustrating exemplary thermal mitigation techniques that may be applied or ordered by the thermal policy manager 101 and that are dependent upon the particular thermal state of a PCD 100. As noted previously, the first thermal state 305 may comprise a "normal" state in which the thermal policy manager 101, executed by the CPU 110 and partially by the ADC controller 103, may monitor, poll, or receive one or more status reports on temperature from one or more thermal sensors 157 as illustrated in FIG. 2A. In this first thermal state 305, a PCD 100 may not be at any risk of reaching a critical temperature that may harm one or more software and/or hardware components within the PCD 100. Usually, in this first thermal state, the thermal policy manager 101 has not applied or requested any thermal mitigation techniques, so the PCD 100 operates at its fullest potential and highest performance without regard to thermal loading. The temperature range for this first thermal state 305 may include temperatures of 50°C and below.
For this first thermal state 305, the thermal policy manager 101 may reside in the ADC controller 103, while the main thermal policy manager 101 for all other states may reside in or be executed by the CPU 110. In an alternate exemplary embodiment, the thermal policy manager 101 may reside only in the CPU 110.

In the second thermal state 310, also referred to as the QoS state 310, once it is initiated, the thermal policy manager 101 may begin more rapid monitoring, polling, and/or receiving of interrupts (relative to the first thermal state 305) from thermal sensors 157 regarding the current temperature of the PCD 100. In this exemplary second thermal state 310, the thermal policy manager 101 may initiate or request the monitor module 114 and/or operating system ("O/S") module 207 of FIG. 2A to start applying thermal mitigation techniques, but with the objective of maintaining high performance with little or no degradation of the quality of service as perceived by the operator of the PCD 100.

According to this exemplary second thermal state 310 illustrated in FIG. 4, the thermal policy manager 101 may request the monitor 114 and/or the O/S module 207 to initiate thermal mitigation techniques such as, but not limited to, (1) load scaling, (2) load dynamic scaling, and/or (3) spatial load shifting. Load scaling may comprise adjusting or "scaling" the maximum clock frequency allowed in the DVFS algorithm, such as the values provided in the first table 267 of FIG. 2C. Such an adjustment may limit the maximum heat dissipation. This thermal load mitigation technique may also involve adjusting the voltage to match the standard DVFS table used for a particular and unique PCD 100.

The thermal load mitigation technique of load dynamic scaling may comprise scaling one or all of the N application processor cores 222, 224, and 230.
This thermal load mitigation technique may comprise establishing the maximum clock frequency allowed for the DVFS algorithm of a particular core 222, 224, or 230. The DVFS algorithm will use a table of voltage/frequency pairs, such as the second table 277 illustrated in FIG. 2D, to scale processing capability.

One such dynamic scaling technique includes limiting the number of millions of instructions per second (MIPS) by limiting the maximum frequency allowed. In this way, the thermal policy manager 101 effectively limits the power consumption of the cores 222, 224, and 230 and limits the capability (MIPS) that is available. The thermal policy manager 101 may choose to limit the N cores 222, 224, 230 together, or it may select and choose which cores 222, 224, 230 get scaled back while allowing other cores 222, 224, 230 to operate in an unconstrained manner. The thermal policy manager 101, monitor module 114, and/or O/S module 207 may make their decisions on which cores 222, 224, 230 to control based on data received from the thermal sensors 157, software application requirements, and/or best-effort prediction. The temperature range for this second thermal state may include temperatures of about 50°C to about 80°C.

The thermal load mitigation technique of spatial load shifting comprises the activation and deactivation of cores within a multicore processor system. If N cores exist, each core may be loaded up with work, or performance may be maximized using up to N-1 cores; then, as a thermal sensor 157 indicates a heating problem, the location of an inactive core functioning as a cooling device may be shifted. Each core may effectively be cooled by letting it idle in a predetermined pattern or in a pattern dictated by thermal measurements. A MIPS hole is effectively moved around the cores in the course of several seconds to cool them.
In this way, several GHz of processing power may be made available to a PCD 100 while still cooling the silicon die by moving the load around. Further details of spatial load shifting will be described below in connection with FIGs. 8A-8B.

Referring now to the third thermal state 315 of FIG. 4, also known as the severe thermal state 315, the thermal policy manager 101 may start continuous monitoring, polling, or receiving of interrupts from thermal sensors 157 so that temperature is sensed more frequently compared to the second, lower thermal state 310. In this exemplary thermal state 315, the thermal policy manager 101 may apply, or request that the monitor module 114 and/or O/S module 207 apply, more aggressive thermal mitigation techniques and/or additional thermal mitigation techniques (relative to the second thermal state 310), with probable perceivable degradation of performance observed by an operator of the PCD 100.

According to this exemplary third thermal state 315, the thermal policy manager 101 may cause a reduction in power to one or more hardware devices such as amplifiers, processors, advanced receiver hardware, etc. The thermal policy manager 101 may also shift workloads among different hardware devices in a spatial manner in order to bring active devices off-line and inactive devices on-line. The thermal mitigation techniques of this third, severe thermal state 315 may be the same as those described above with respect to the second, quality of service thermal state 310. However, these same thermal mitigation techniques may be applied in a more aggressive manner.

For example, when adjusting DVFS parameters, the thermal policy manager 101 may request that these parameters be adjusted more significantly, such as providing for significantly lower voltages and/or frequencies compared to the second thermal state 310.
These lower voltages and/or frequencies may be lower than is recommended for supporting a particular application program, which may degrade performance.

Referring now to the fourth, critical state 320 of FIG. 4, the thermal policy manager 101 may start shutting down, or requesting the monitor 114 and/or O/S module 207 to start shutting down, all nonessential hardware and/or software modules. "Nonessential" hardware and/or software modules may be different for each type of particular PCD 100. According to one exemplary embodiment, all nonessential hardware and/or software modules may include all of those outside of an emergency 911 telephone call function and global positioning satellite ("GPS") functions. This means that the thermal policy manager 101 in this fourth, critical thermal state 320 may cause the shutdown of hardware and/or software modules that are outside of emergency 911 telephone calls and GPS functions. The thermal policy manager 101 may shut down modules in sequence and/or in parallel depending upon the critical temperatures being monitored by the thermal sensors 157, the locations of the thermal sensors 157, and the change in temperature being observed by the thermal policy manager 101. The temperature range for this fourth thermal state 320 may include temperatures of about 100°C and above.

FIG. 5 is a diagram illustrating an exemplary graph 500 of temperature versus time and the corresponding thermal policy states 305, 310, 315, and 320. At the first point 503 of the temperature plot or line 505, the thermal policy manager 101 may receive a first interrupt temperature reading of 40°C from one or more thermal sensors 157.
Since this first temperature reading of 40°C may be below the maximum temperature of 50°C set for the normal thermal state 305, the thermal policy manager 101 may remain in the first, normal thermal state 305.

At a second point 506 along the temperature line 505, the thermal policy manager 101 may receive a second interrupt temperature reading of 50°C. While 50°C may be within the selected temperature range for the first thermal state 305, if the change in temperature from the last temperature reading was significant, such as a large temperature change within a short period of time (like a 3°C change within five seconds), then such a change or jump in temperature may trigger the thermal policy manager 101 to leave the normal thermal state 305 and initiate the second, QoS thermal state 310.

Between the second point 506 and the third point 509 of the temperature line 505, the temperature of the PCD 100 was above 50°C, and the thermal policy manager 101 may have requested or activated one or more thermal mitigation techniques in order to lower the temperature of the PCD 100. At the third point 509 of the temperature line 505, the thermal policy manager 101 may change the thermal state of the PCD 100 from the second state 310 to the first, normal state 305.

At the fourth point 512, the thermal policy manager 101 may observe that the temperature trend is moving upward or, in other words, that the temperature line 505 has a positive slope or change in delta T. The thermal policy manager 101 may change the thermal state of the PCD 100 in view of this data from the first thermal state 305 to the second, QoS thermal state 310. In the second thermal state 310, the thermal policy manager 101 may request or activate one or more thermal mitigation techniques that should not significantly impact the quality of service provided by the PCD 100.
The second thermal state 310 may include a temperature range from about 50°C to about 80°C.

Moving along the temperature line 505 to the fifth point 515, which has a magnitude of about 80°C, the thermal policy manager 101 may initiate a change of thermal state from the second, QoS thermal state 310 to the third, severe thermal state 315. As noted previously, the temperature range for this third thermal state may comprise a range from about 80°C to about 100°C. In this third, severe thermal state 315, the thermal policy manager 101 may be requesting or activating a plurality of thermal mitigation techniques that may impact the quality of service and performance of the PCD 100.

The segment of the temperature line 505 between the fifth point 515 and the sixth point 518 reflects that the third, severe thermal state 315 has been unsuccessful in mitigating the temperature rise within the PCD 100. Therefore, at the sixth point 518, which may have a magnitude of approximately 100°C, the thermal policy manager 101 may enter into the fourth, critical state 320. In this fourth, critical state 320, the thermal policy manager 101 may activate or request that certain hardware and/or software components be shut down in order to alleviate the current thermal load. As noted previously, the thermal policy manager 101 may cause any hardware and/or software component outside of emergency 911 call functions and GPS functions to be shut down while in this fourth thermal state 320.

Moving along the temperature line 505 to the seventh point 521, the segment of the line 505 between the sixth point 518 and the seventh point 521 reflects that the critical thermal state 320 and severe thermal state 315 were successful in lowering the temperature of the PCD 100. As noted previously, one or more thermal states may be jumped or skipped depending upon the temperature measured by the thermal sensors 157 and observed by the thermal policy manager 101.
Further, when returning to lower thermal states, the sequence of thermal states followed by the thermal policy manager 101 may exhibit hysteresis.

FIGs. 6A-6B are logical flowcharts illustrating a method 600 for managing one or more thermal policies of a PCD 100. Method 600A of FIG. 6A starts with a first block 605 in which the thermal policy manager 101 may monitor temperature with internal and external thermal sensors 157 while in the first thermal state 305. This first block 605 generally corresponds with the first thermal state 305 illustrated in FIGs. 3-4. As noted previously, the thermal policy manager 101 may monitor, actively poll, and/or receive interrupts from one or more thermal sensors 157. In this particular thermal state, the thermal policy manager 101 does not apply any thermal mitigation techniques. The PCD 100 may perform at its optimal level without regard to any thermal loading conditions in this first thermal state 305.

Next, in decision block 610, the thermal policy manager 101 may determine if a temperature change (delta T) or a change in absolute temperature has been detected by one or more thermal sensors 157. If the inquiry to decision block 610 is negative, then the "NO" branch is followed back to block 605. If the inquiry to decision block 610 is positive, then the "YES" branch is followed to block 615 in which the thermal policy manager 101 may increase the frequency of the monitoring of the thermal sensors 157. In block 615, the thermal policy manager may actively poll the thermal sensors 157 more frequently, or it may request that the thermal sensors 157 send more frequent interrupts that provide temperature data. This increased monitoring of thermal sensors 157 may occur in the first, normal state 305, and it may also occur in the second, quality of service thermal state 310. Alternatively, block 615 may be moved altogether to after block 620.
In this way, the increased thermal monitoring of sensors 157 would occur only if the next thermal state, the QoS state, has been reached. As will be described below, the method is not limited to the specific sequence of each of the described embodiments, as understood by one of ordinary skill in the art.

Next, in decision block 620, the thermal policy manager 101 may determine if the next thermal state has been reached or achieved by the PCD 100. In this decision block 620, the thermal policy manager 101 may be determining if the temperature range assigned to the second thermal state 310 has been reached. Alternatively, the thermal policy manager in this decision block 620 may be determining if a significant change in temperature (delta T) over time has occurred since a last reading.

If the inquiry to decision block 620 is negative, then the "NO" branch is followed back to decision block 610. If the inquiry to decision block 620 is positive, then the "YES" branch is followed to routine or submethod 625. Routine or submethod 625 may comprise the second thermal state 310, also referred to as the QoS state 310, in which the thermal policy manager 101 may apply or request one or more thermal mitigation techniques described above in connection with FIG. 4. For example, the thermal policy manager 101 may request the monitor 114 and/or the O/S module 207 to initiate thermal mitigation techniques such as, but not limited to, (1) load scaling and/or (2) load dynamic scaling as described above.

Subsequently, in decision block 630, the thermal policy manager 101 may determine if the one or more thermal mitigation techniques of the second, QoS state 310 were successful and if the current temperature, as detected by the one or more thermal sensors 157, falls within the next lower thermal range for the first, normal state 305. If the inquiry to decision block 630 is positive, then the "YES" branch is followed back to block 605.
If the inquiry to decision block 630 is negative, then the "NO" branch is followed to decision block 635.

In decision block 635, the thermal policy manager 101 may determine if the PCD 100 has now entered into the third, severe thermal state 315 according to the temperature detected by the one or more thermal sensors 157. Alternatively, the thermal policy manager 101 may determine if the PCD 100 has entered into the third, severe thermal state 315 by determining if a significant change in temperature (delta T) has occurred.

If the inquiry to decision block 635 is negative, the "NO" branch is followed back to decision block 620. If the inquiry to decision block 635 is positive, then the "YES" branch is followed to submethod or subroutine 640.

In submethod or subroutine 640, the thermal policy manager 101 has determined that the PCD 100 has entered into the third, severe thermal state. The thermal policy manager 101 may then activate or request that one or more thermal mitigation techniques be applied. As noted previously, the thermal policy manager 101 in this third, severe thermal state 315 may start continuous monitoring, polling, or receiving of interrupts from thermal sensors 157 so that temperature is sensed more frequently compared to the second, lower thermal state 310.

In this exemplary third thermal state 315, the thermal policy manager 101 may apply, or request that the monitor module 114 and/or O/S module 207 apply, more aggressive thermal mitigation techniques and/or additional thermal mitigation techniques (relative to the second thermal state 310), with probable perceivable degradation of performance observed by an operator of the PCD 100.
According to this exemplary thermal state 315, the thermal policy manager 101 may cause a reduction in power to one or more hardware devices such as amplifiers, processors, advanced receiver hardware, etc. The thermal policy manager 101 may also shift workloads among different hardware devices in a spatial manner in order to bring active devices off-line and inactive devices on-line. The thermal mitigation techniques of this third, severe thermal state 315 may be the same as those described above with respect to the second, quality of service thermal state 310. However, these same thermal mitigation techniques may be applied in a more aggressive manner. For example, when adjusting DVFS parameters, the thermal policy manager 101 may request that these parameters be adjusted more significantly, such as providing for significantly lower voltages and/or frequencies compared to the second thermal state 310. These lower voltages and/or frequencies may be lower than is recommended for supporting a particular application program.

Next, in decision block 645, the thermal policy manager 101 may determine if the one or more thermal mitigation techniques applied in submethod or routine 640 were successful in preventing escalation of the temperature of the PCD 100. If the inquiry to decision block 645 is negative, then the "NO" branch is followed to step 655 of FIG. 6B. If the inquiry to decision block 645 is positive, then the "YES" branch is followed to step 650 in which the thermal policy manager 101 determines the current thermal state of the PCD 100 based on temperature readings provided by the one or more thermal sensors 157.

Referring now to FIG. 6B, this figure is a continuation of the flowchart illustrated in FIG. 6A. The method 600B of FIG.
6B starts with decision block 655 in which the thermal policy manager 101 may determine if the PCD 100 has entered into the fourth, critical thermal state 320 based on the temperature being detected by one or more thermal sensors 157. If the inquiry to decision block 655 is negative, then the "NO" branch is followed to step 660 in which the thermal policy manager 101 returns the PCD 100 to the third, severe thermal state 315, and the process returns to block 635 of FIG. 6A.

If the inquiry to decision block 655 is positive, then the "YES" branch is followed to submethod or routine 665 in which the thermal policy manager 101 activates or requests that one or more critical thermal mitigation techniques be activated. The thermal policy manager 101 in this fourth, critical thermal state 320 may cause the shutdown of hardware and/or software modules that are outside of emergency 911 telephone calls and GPS functions. The thermal policy manager 101 may shut down modules in sequence and/or in parallel depending upon the critical temperatures being monitored by the thermal sensors 157 and the change in temperature being observed by the thermal policy manager 101.

Subsequently, in decision block 670, the thermal policy manager 101 may determine if the thermal mitigation techniques applied in routine or submethod 665 were successful in preventing any escalation of the temperature of the PCD 100 as detected by the thermal sensors 157. If the inquiry to decision block 670 is negative, then the "NO" branch is followed back to routine or submethod 665.

If the inquiry to decision block 670 is positive, then the "YES" branch is followed to step 675 in which the thermal policy manager 101 determines the current thermal state of the PCD 100 based on temperature readings supplied by one or more thermal sensors 157.
Once the temperature readings are assessed by the thermal policy manager 101, the thermal policy manager 101 initiates (or returns to) the thermal state corresponding to the temperature ranges detected by the thermal sensors 157.

FIG. 7 is a logical flowchart illustrating sub-methods or subroutines 625, 640, and 665 for applying DVFS thermal mitigation techniques. Block 705 is the first step in the submethod or subroutine for applying DVFS thermal mitigation techniques. In this first block 705, the thermal policy manager 101 may determine the current thermal state based on temperature readings provided by thermal sensors 157. Once the current thermal state is determined by the thermal policy manager 101, the thermal policy manager 101 may then review the current DVFS settings in block 710. Next, in block 715, the thermal policy manager 101 may review the current workloads of one or more hardware and/or software modules.

Next, in block 720, the thermal policy manager 101 may adjust, or issue commands to adjust, the current DVFS settings, which may include voltage and/or frequency, in order to reduce the workload or spatially shift it, thereby mitigating thermal loading conditions in accordance with the current thermal state determined by the thermal policy manager 101.

For the second, QoS thermal state 310, in block 720, the thermal policy manager 101 may initiate or request the monitor module 114 and/or operating system ("O/S") module 207 of FIG. 2A to start applying thermal mitigation techniques, but with the objective of maintaining high performance with little or no perceptible degradation of the quality of service as perceived by the operator of the PCD 100. According to this exemplary second thermal state 310 illustrated in FIG. 4, the thermal policy manager 101 may request the monitor 114 and/or the O/S module 207 to initiate thermal mitigation techniques such as, but not limited to, (1) load scaling and/or (2) load dynamic scaling.
Load scaling may comprise adjusting or "scaling" the maximum clock frequency allowed in the DVFS algorithm.

For the third, severe thermal state 315, in block 720, the thermal policy manager 101 may start continuous monitoring, polling, or receiving of interrupts from thermal sensors 157 so that temperature is sensed more frequently compared to the second, lower thermal state 310. In this exemplary thermal state 315, the thermal policy manager 101 may apply, or request that the monitor module 114 and/or O/S module 207 apply, more aggressive thermal mitigation techniques and/or additional thermal mitigation techniques (relative to the second thermal state 310), with probable perceivable degradation of performance observed by an operator of the PCD 100. According to this exemplary thermal state 315, the thermal policy manager 101 may cause a reduction in power to one or more hardware devices such as amplifiers, processors, advanced receiver hardware, etc.

The thermal policy manager 101 may also shift workloads among different hardware devices in a spatial manner in order to bring active devices off-line and inactive devices on-line. The thermal mitigation techniques of this third, severe thermal state 315 may be the same as those described above with respect to the second, quality of service thermal state 310. However, these same thermal mitigation techniques may be applied in a more aggressive manner. For example, when adjusting DVFS parameters, the thermal policy manager 101 may request that these parameters be adjusted more significantly, such as providing for significantly lower voltages and/or frequencies compared to the second thermal state 310.
These lower voltages and/or frequencies may be lower than is recommended for supporting a particular application program.

For the fourth, critical thermal state 320, in block 720, this thermal state 320 may be similar to conventional techniques that are designed to eliminate functionality and operation of a PCD 100 in order to avoid critical temperatures. The fourth thermal state 320 may comprise a "critical" state in which the thermal policy manager 101 applies or triggers the shutting down of non-essential hardware and/or software. The temperature range for this fourth thermal state may include temperatures of about 100°C and above. The submethod 625, 640, or 665 then returns to an appropriate step in the thermal management method 600 depending upon the current thermal state of the PCD 100.

FIG. 8A is a schematic 800A for a four-core multicore processor 110 and different workloads that may be spatially managed with the multicore processor 110. While only four cores are illustrated, one of ordinary skill in the art recognizes that additional cores may be employed and are within the scope of the invention.

The four-core multicore processor 110 has a zeroth core 222, a first core 224, a second core 226, and a third core 228. The first workload scenario for the multicore processor 110 is demonstrated by multicore processor 110A, in which the zeroth core 222 has a workload of 70% (out of a 100% full work capacity/utilization for a particular core), while the first core 224 has a workload of 30%, the second core 226 has a workload of 50%, and the third core 228 has a workload of 10%.

If the thermal policy manager 101 enters into any one of the thermal states 310, 315, 320 described above in which thermal mitigation techniques are applied to the PCD 100, a spatial thermal load mitigation technique as illustrated in FIG. 8A may be implemented.
According to this spatial thermal load mitigation technique, the thermal policy manager 101, the monitor module 114, and/or the O/S module 207 may shift the workload of one core in a multicore processor 110 to one or more other cores.

In the exemplary embodiment illustrated in FIG. 8A, the workload of the zeroth core 222 may be shifted such that additional work is performed by the remaining three cores of the multicore processor 110. Multicore processor 110B illustrates such a shift, in which 20% of the workload of the zeroth core 222 and 40% of the workload of the second core 226 were transferred to the remaining two cores. The workload experienced by the zeroth core 222 was reduced to 50%, while the workload experienced by the second core 226 was reduced to 10%. Meanwhile, the workload of the first core 224 was increased to 70%, while the workload of the third core 228 was increased to 30%. One of ordinary skill in the art recognizes that other magnitudes and combinations of shifting workload and corresponding workload percentages are well within the scope of the invention.

The multicore processors 110C-110D provide a demonstration of an exemplary shift of a "hole," in which one or more cores may effectively be cooled by letting them idle in a predetermined pattern or in a pattern dictated by thermal measurements. A "hole," i.e., a core that is not being utilized, is effectively moved in MIPS around a group of cores over the course of several seconds to cool them. In the exemplary embodiment illustrated by multicore processor 110C of FIG. 8A, the zeroth core 222 and the first core 224 may have exemplary workloads of 80%, while the second core 226 and the third core 228 have no loads whatsoever.
In this scenario, if either or both of the zeroth core 222 and first core 224 reach the second thermal state 310, the third thermal state 315, or the fourth thermal state 320, then the thermal policy manager 101 may apply or request that a spatial thermal load mitigation technique be applied in which all of the workload of the two active cores 222, 224 is shifted to the two inactive cores 226, 228. The fourth processor 110D demonstrates such a shift, in which the zeroth core 222 and first core 224 no longer have any workloads while the second core 226 and third core 228 have assumed the previous workload that was managed by the zeroth core 222 and first core 224.

FIG. 8B is a logical flowchart illustrating a sub-method or subroutine 625, 640, 665 for applying spatial workload shifting thermal mitigation techniques. Block 805 is the first step in the submethod or subroutine 625, 640, 665 for applying spatial workload shifting thermal mitigation techniques. In this first block 805, the thermal policy manager 101 may determine the current thermal state based on temperature readings provided by thermal sensors 157. Once the current thermal state is determined by the thermal policy manager 101, the thermal policy manager 101, the monitor module 114, and/or the O/S module 207 may then review the current workload of the cores of a multicore processor 110 in block 810. As noted previously, the thermal policy manager 101 may be tasked with implementing one or more thermal mitigation techniques.

However, in an alternate exemplary embodiment, it is possible for the thermal policy manager 101 to only suggest that thermal mitigation techniques be applied, and the thermal policy manager 101 may allow the monitor module 114 and/or the O/S module 207 to decide how the thermal mitigation techniques are actually implemented.
For brevity, the remainder of this subroutine 625, 640, 665 will reference the embodiment in which the thermal policy manager 101 actually implements the thermal mitigation techniques. Next, in block 815, the thermal policy manager 101 may determine which cores are experiencing heavy workloads and which cores are experiencing little or no workload. In block 820, the thermal policy manager 101 may determine which processors are potentially responsible for contributing to or causing the thermal loading condition and the current thermal state. Subsequently, in block 825, the thermal policy manager 101 may adjust the spatial workload distribution among the cores of a multicore processor 110 to mitigate the thermal load in accordance with the current thermal state. Block 825 generally corresponds to the spatial shifting thermal mitigation technique illustrated in FIG. 8A. The submethod 625, 640, or 665 then returns to an appropriate step in the thermal management method 600 depending upon the current thermal state of the PCD 100.

Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel with (substantially simultaneously with) other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps.
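Blocks 805 through 825 of the subroutine can be sketched as follows. The workload thresholds and the "move half the load" heuristic are illustrative assumptions, not taken from the disclosure:

```python
def spatial_mitigation_subroutine(sensor_temps, core_loads, heavy=60, idle=10):
    """Sketch of subroutine 625/640/665 (FIG. 8B).
    Block 805: derive the thermal state from sensor readings.
    Blocks 810-820: review per-core workloads and classify cores.
    Block 825: shift load from heavily loaded cores toward idle ones."""
    # Block 805: take the hottest reading as the current thermal condition.
    current_temp = max(sensor_temps)
    # Blocks 810-820: identify heavily loaded and idle cores.
    heavy_cores = [i for i, w in enumerate(core_loads) if w >= heavy]
    idle_cores = [i for i, w in enumerate(core_loads) if w <= idle]
    # Block 825: move half of each heavy core's load to a paired idle core.
    loads = list(core_loads)
    for h, i in zip(heavy_cores, idle_cores):
        moved = loads[h] // 2
        loads[h] -= moved
        loads[i] += moved
    return current_temp, loads
```

For example, with two cores at 80% and two idle cores, the sketch spreads the work evenly, mirroring the rebalancing that FIG. 8A illustrates.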
These words are simply used to guide the reader through the description of the exemplary method. Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty, based on the flow charts and associated description in this specification. Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer-implemented processes is explained in more detail in the above description and in conjunction with the figures, which may illustrate various process flows.

In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims.

In the following, further embodiments are described to facilitate the understanding of the invention:

1.
A method for managing one or more thermal policies of a portable computing device, comprising:
monitoring temperature of the portable computing device with at least one of an internal thermal sensor and an external thermal sensor;
determining if a change in temperature has been detected by at least one thermal sensor;
if the change in temperature has been detected by a thermal sensor, then increasing a frequency at which temperature readings are detected;
determining if a current temperature of the portable computing device as detected by one or more of the thermal sensors falls within a first predetermined temperature range; and
if the current temperature of the portable computing device falls within the first predetermined temperature range, then initiating one or more first thermal mitigation techniques in order to reduce temperature of the portable computing device.

2. The method of embodiment 1, further comprising determining if the current temperature of the portable computing device as detected by one or more of the thermal sensors falls within a second predetermined temperature range.

3. The method of embodiment 2, wherein if the current temperature of the portable computing device falls within the second predetermined temperature range, then initiating one or more second thermal mitigation techniques that are more severe relative to the first thermal mitigation techniques.

4. The method of embodiment 1, wherein the one or more thermal mitigation techniques comprise a spatial workload shift among cores of a multicore processor.

5.
A method for managing one or more thermal policies of a portable computing device, comprising:
determining if the portable computing device has achieved a first predetermined thermal state;
if the portable computing device has achieved the first predetermined thermal state, then initiating one or more first thermal mitigation techniques in order to reduce temperature of the portable computing device;
determining if the portable computing device has achieved a second predetermined thermal state; and
if the portable computing device has achieved the second predetermined thermal state, then initiating one or more second thermal mitigation techniques in order to reduce temperature of the portable computing device, the second thermal mitigation techniques being more severe relative to the first thermal mitigation techniques.

6. The method of embodiment 5, further comprising monitoring temperature of the portable computing device with at least one of an internal thermal sensor and an external thermal sensor.

7. The method of embodiment 5, further comprising determining if a change in temperature has been detected by at least one thermal sensor.

8. The method of embodiment 7, further comprising, if the change in temperature has been detected by a thermal sensor, then increasing a frequency at which temperature readings are detected.

9. The method of embodiment 5, further comprising determining if one or more of the mitigation techniques has been successful in lowering temperature of the portable computing device.

10. The method of embodiment 7, further comprising positioning the thermal sensor adjacent to hardware and on a same surface with the hardware within the portable computing device, and assigning one or more thermal mitigation techniques to the hardware based on an association between the thermal sensor and the hardware.

11.
A computer system for managing one or more thermal policies of a portable computing device, the system comprising:
a processor operable to:
monitor temperature of the portable computing device with at least one of an internal thermal sensor and an external thermal sensor;
determine if a change in temperature has been detected by at least one thermal sensor;
increase a frequency at which temperature readings are detected if the change in temperature has been detected by a thermal sensor;
determine if a current temperature of the portable computing device as detected by one or more of the thermal sensors falls within a first predetermined temperature range; and
initiate one or more first thermal mitigation techniques in order to reduce temperature of the portable computing device if the current temperature of the portable computing device falls within the first predetermined temperature range.

12. The system of embodiment 11, wherein the processor is further operable to determine if the current temperature of the portable computing device as detected by one or more of the thermal sensors falls within a second predetermined temperature range.

13. The system of embodiment 12, wherein the processor is further operable to initiate one or more second thermal mitigation techniques that are more severe relative to the first thermal mitigation techniques if the current temperature of the portable computing device falls within the second predetermined temperature range.

14. The system of embodiment 11, wherein the one or more thermal mitigation techniques comprise a spatial workload shift among cores of a multicore processor.

15.
A computer system for managing one or more thermal policies of a portable computing device, the system comprising:
a processor operable to:
determine if the portable computing device has achieved a first predetermined thermal state;
initiate one or more first thermal mitigation techniques in order to reduce temperature of the portable computing device if the portable computing device has achieved the first predetermined thermal state;
determine if the portable computing device has achieved a second predetermined thermal state; and
initiate one or more second thermal mitigation techniques if the portable computing device has achieved the second predetermined thermal state in order to reduce temperature of the portable computing device, the second thermal mitigation techniques being more severe relative to the first thermal mitigation techniques.

16. The system of embodiment 15, wherein the processor is operable to monitor temperature of the portable computing device with at least one of an internal thermal sensor and an external thermal sensor.

17. The system of embodiment 15, wherein the processor is further operable to determine if a change in temperature has been detected by at least one thermal sensor.

18. The system of embodiment 17, wherein the processor is further operable to increase a frequency at which temperature readings are detected if the change in temperature has been detected by a thermal sensor.

19. The system of embodiment 15, wherein the processor is further operable to determine if one or more of the mitigation techniques has been successful in lowering temperature of the portable computing device.

20. The system of embodiment 17, wherein the processor is further operable to assign one or more thermal mitigation techniques to hardware based on an association between the thermal sensor and the hardware, the thermal sensor being positioned adjacent to the hardware and on a same surface with the hardware within the portable computing device.

21.
A computer system for managing one or more thermal policies of a portable computing device, the system comprising:
means for monitoring temperature of the portable computing device with at least one of an internal thermal sensor and an external thermal sensor;
means for determining if a change in temperature has been detected by at least one thermal sensor;
means for increasing a frequency at which temperature readings are detected if the change in temperature has been detected by a thermal sensor;
means for determining if a current temperature of the portable computing device as detected by one or more of the thermal sensors falls within a first predetermined temperature range; and
means for initiating one or more first thermal mitigation techniques in order to reduce temperature of the portable computing device if the current temperature of the portable computing device falls within the first predetermined temperature range.

22. The system of embodiment 21, further comprising means for determining if the current temperature of the portable computing device as detected by one or more of the thermal sensors falls within a second predetermined temperature range.

23. The system of embodiment 22, further comprising means for initiating one or more second thermal mitigation techniques that are more severe relative to the first thermal mitigation techniques if the current temperature of the portable computing device falls within the second predetermined temperature range.

24. The system of embodiment 21, wherein the one or more thermal mitigation techniques comprise a spatial workload shift among cores of a multicore processor.

25.
A computer system for managing one or more thermal policies of a portable computing device, the system comprising:
means for determining if the portable computing device has achieved a first predetermined thermal state;
means for initiating one or more first thermal mitigation techniques in order to reduce temperature of the portable computing device if the portable computing device has achieved the first predetermined thermal state;
means for determining if the portable computing device has achieved a second predetermined thermal state; and
means for initiating one or more second thermal mitigation techniques if the portable computing device has achieved the second predetermined thermal state in order to reduce temperature of the portable computing device, the second thermal mitigation techniques being more severe relative to the first thermal mitigation techniques.

26. The system of embodiment 25, further comprising means for monitoring temperature of the portable computing device with at least one of an internal thermal sensor and an external thermal sensor.

27. The system of embodiment 26, further comprising means for determining if a change in temperature has been detected by at least one thermal sensor.

28. The system of embodiment 27, further comprising means for increasing a frequency at which temperature readings are detected if the change in temperature has been detected by a thermal sensor.

29. The system of embodiment 25, further comprising means for determining if one or more of the mitigation techniques has been successful in lowering temperature of the portable computing device.

30. The system of embodiment 25, further comprising means for assigning one or more thermal mitigation techniques to hardware based on an association between the thermal sensor and the hardware, the thermal sensor being positioned adjacent to the hardware and on a same surface with the hardware within the portable computing device.

31.
A computer program product comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for managing thermal policies of a portable computing device, said method comprising:
monitoring temperature of the portable computing device with at least one of an internal thermal sensor and an external thermal sensor;
determining if a change in temperature has been detected by at least one thermal sensor;
if the change in temperature has been detected by a thermal sensor, then increasing a frequency at which temperature readings are detected;
determining if a current temperature of the portable computing device as detected by one or more of the thermal sensors falls within a first predetermined temperature range; and
if the current temperature of the portable computing device falls within the first predetermined temperature range, then initiating one or more first thermal mitigation techniques in order to reduce temperature of the portable computing device.

32. The computer program product of embodiment 31, wherein the program code implementing the method further comprises determining if the current temperature of the portable computing device as detected by one or more of the thermal sensors falls within a second predetermined temperature range.

33. The computer program product of embodiment 32, wherein the program code implementing the method further comprises initiating one or more second thermal mitigation techniques that are more severe relative to the first thermal mitigation techniques if the current temperature of the portable computing device falls within the second predetermined temperature range.

34. The computer program product of embodiment 31, wherein the one or more thermal mitigation techniques comprise a spatial workload shift among cores of a multicore processor.

35.
A computer program product comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for managing thermal policies of a portable computing device, said method comprising:
determining if the portable computing device has achieved a first predetermined thermal state;
if the portable computing device has achieved the first predetermined thermal state, then initiating one or more first thermal mitigation techniques in order to reduce temperature of the portable computing device;
determining if the portable computing device has achieved a second predetermined thermal state; and
if the portable computing device has achieved the second predetermined thermal state, then initiating one or more second thermal mitigation techniques in order to reduce temperature of the portable computing device, the second thermal mitigation techniques being more severe relative to the first thermal mitigation techniques.

36. The computer program product of embodiment 35, wherein the program code implementing the method further comprises monitoring temperature of the portable computing device with at least one of an internal thermal sensor and an external thermal sensor.

37. The computer program product of embodiment 35, wherein the program code implementing the method further comprises determining if a change in temperature has been detected by at least one thermal sensor.

38. The computer program product of embodiment 37, wherein the program code implementing the method further comprises increasing a frequency at which temperature readings are detected if the change in temperature has been detected by a thermal sensor.

39. The computer program product of embodiment 35, wherein the program code implementing the method further comprises determining if one or more of the mitigation techniques has been successful in lowering temperature of the portable computing device.

40.
The computer program product of embodiment 31, wherein the program code implementing the method further comprises: assigning one or more thermal mitigation techniques to hardware based on an association between the thermal sensor and the hardware, the thermal sensor being positioned adjacent to hardware and on a same surface with the hardware within the portable computing device.
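The staged behavior recited throughout these embodiments, with faster sampling when a temperature change is detected and progressively more severe mitigation as the temperature enters higher predetermined ranges, can be sketched as follows. The temperature thresholds and polling intervals are illustrative assumptions, not values from the disclosure:

```python
def next_action(temp, prev_temp, poll_s, range1=(40, 50), range2=(50, 60)):
    """Sketch of the staged thermal policy (illustrative thresholds):
    - if the temperature changed, sample more frequently;
    - in the first predetermined range, apply first mitigation;
    - in the second range, apply more severe second mitigation."""
    if temp != prev_temp:
        poll_s = max(poll_s / 2, 0.1)   # increase the reading frequency
    if range2[0] <= temp < range2[1]:
        action = "second (more severe) mitigation"
    elif range1[0] <= temp < range1[1]:
        action = "first mitigation"
    else:
        action = "none"
    return action, poll_s
```

A monitoring loop would call this on every sensor reading and hand the returned action to the thermal policy manager, e.g. to trigger a spatial workload shift as the first mitigation technique.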
Methods and apparatus relating to Priority Based Application Event Control (PAEC) to reduce application events are described. In one embodiment, PAEC may determine which applications (and their corresponding sub-system(s)) may cause a processor or platform to exit a low power consumption state. In an embodiment, PAEC may determine which applications (and their corresponding sub-system(s)) may resume operations after a processor or platform exits a low power consumption state. Other embodiments are also claimed and disclosed.
CLAIMS 1. An apparatus comprising: a processor; and logic to allow one or more of a plurality of applications to be executed based on policy information, corresponding to the plurality of applications, and after the processor is to exit a low power consumption state, wherein the policy information is to indicate which one of the plurality of applications is to be awakened after the processor exits the low power consumption state. 2. The apparatus of claim 1, wherein the logic is to allow one or more sub-systems, corresponding to the one or more of the plurality of applications, to be powered on based on the policy information and after the processor is to exit the low power consumption state. 3. The apparatus of claim 2, wherein the policy information is to indicate which of the one or more sub-systems corresponds to which of the one or more of the plurality of applications. 4. The apparatus of claim 2, wherein the policy information is to indicate which power state of the one or more sub-systems corresponds to which of the one or more of the plurality of applications. 5. The apparatus of claim 1, wherein the logic is to prioritize which of the one or more of the plurality of applications are allowed to wake the processor from the low power consumption state. 6. The apparatus of claim 1, wherein the plurality of applications comprise one or more platform-power-aware or one or more platform-power-unaware applications. 7. The apparatus of claim 1, wherein the processor comprises a plurality of processor cores. 8. The apparatus of claim 1, wherein one or more of a memory, the processor, and the logic are on a same integrated circuit device. 9. 
An apparatus comprising: a processor; and logic to allow one or more of a plurality of applications to cause the processor to exit from a low power consumption state based on policy information corresponding to the plurality of applications, wherein the policy information is to indicate which one of the plurality of applications is to be allowed to cause the processor to exit from the low power consumption state. 10. The apparatus of claim 9, wherein the logic is to allow one or more sub-systems, corresponding to the one or more of the plurality of applications, to cause the processor to exit the low power consumption state based on the policy information. 11. The apparatus of claim 10, wherein the policy information is to indicate which of the one or more sub-systems corresponds to which of the one or more of the plurality of applications. 12. The apparatus of claim 10, wherein the policy information is to indicate which power state of the one or more sub-systems corresponds to which of the one or more of the plurality of applications. 13. The apparatus of claim 9, wherein the logic is to prioritize which of the one or more of the plurality of applications are allowed to wake the processor from the low power consumption state. 14. The apparatus of claim 9, wherein the plurality of applications comprise one or more platform-power-aware or one or more platform-power-unaware applications. 15. The apparatus of claim 9, wherein the processor comprises a plurality of processor cores. 16. The apparatus of claim 9, wherein one or more of a memory, the processor, and the logic are on a same integrated circuit device. 17. 
A computer-readable medium to store instructions that when executed by a processor cause the processor to: allow one or more of a plurality of applications to be executed based on policy information corresponding to the plurality of applications, and after the processor exits a low power consumption state, wherein the policy information indicates which one of the plurality of applications is to be awakened after the processor exits the low power consumption state. 18. The computer-readable medium of claim 17, wherein the instructions cause the processor to allow one or more sub-systems, corresponding to the one or more of the plurality of applications, to be powered on based on the policy information and after the processor exits the low power consumption state. 19. The computer-readable medium of claim 18, wherein the policy information indicates which of the one or more sub-systems corresponds to which of the one or more of the plurality of applications. 20. The computer-readable medium of claim 18, wherein the policy information indicates which power state of the one or more sub-systems corresponds to which of the one or more of the plurality of applications. 21. The computer-readable medium of claim 17, wherein the instructions cause the processor to prioritize which of the one or more of the plurality of applications are allowed to wake the processor from the low power consumption state. 22. The computer-readable medium of claim 17, wherein the plurality of applications comprise one or more platform-power-aware or one or more platform-power-unaware applications. 23. The computer-readable medium of claim 17, wherein a memory, coupled to the processor, is to store an operating system software. 24.
A computer-readable medium to store instructions that when executed by a processor cause the processor to: allow one or more of a plurality of applications to cause the processor to exit from a low power consumption state based on policy information corresponding to the plurality of applications, wherein the policy information indicates which one of the plurality of applications is to be allowed to cause the processor to exit from the low power consumption state. 25. The computer-readable medium of claim 24, wherein the instructions are to cause the processor to allow one or more sub-systems, corresponding to the one or more of the plurality of applications, to cause the processor to exit the low power consumption state based on the policy information and before the processor exits the low power consumption state. 26. The computer-readable medium of claim 25, wherein the policy information indicates which of the one or more sub-systems corresponds to which of the one or more of the plurality of applications. 27. The computer-readable medium of claim 25, wherein the policy information indicates which power state of the one or more sub-systems corresponds to which of the one or more of the plurality of applications. 28. The computer-readable medium of claim 24, wherein the instructions are to cause the processor to prioritize which of the one or more of the plurality of applications are allowed to wake the processor from the low power consumption state. 29. The computer-readable medium of claim 24, wherein the plurality of applications comprise one or more platform-power-aware or one or more platform-power-unaware applications. 30. The computer-readable medium of claim 24, wherein a memory, coupled to the processor, is to store an operating system software.
PRIORITY BASED APPLICATION EVENT CONTROL (PAEC) TO REDUCE POWER CONSUMPTION

FIELD

The present disclosure generally relates to the field of electronics. More particularly, an embodiment of the invention relates to Priority Based Application Event Control (PAEC) to reduce power consumption in computing devices.

BACKGROUND

Generally, one of the highest power consuming components in a computing system is the processor. To reduce power consumption, some implementations may attempt to have the processor enter a sleep or standby mode as often and as long as possible. However, these attempts may be defeated by the occurrence of various events, e.g., triggered by other components in the system, which may force a processor to exit a lower power consumption state. In turn, the higher power consumption may also increase heat generation. Excessive heat may damage components of a computer system. Further, the higher power utilization may increase battery consumption, e.g., in mobile computing devices, which in turn reduces the amount of time a mobile device may be operated prior to recharging. The additional power consumption may additionally require usage of larger batteries that may weigh more. Heavier batteries reduce the portability or usability of a mobile computing device. Accordingly, overall system power consumption and utility may be directly related to how long a processor is maintained in a lower power consumption state.

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items. Figs. 1, 3, and 5-6 illustrate block diagrams of embodiments of computing systems, which may be utilized to implement various embodiments discussed herein. Fig.
2 illustrates a block diagram of portions of a processor core and other components of a computing system, according to an embodiment. Fig. 4 illustrates a flow diagram in accordance with some embodiments.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments of the invention may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments of the invention. Further, various aspects of embodiments of the invention may be performed using various means, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, reference to "logic" shall mean either hardware, software, firmware, or some combination thereof. Also, the use of "instruction" and "micro-operation" (uop) is interchangeable as discussed herein. Some of the embodiments discussed herein may utilize Priority Based Application Event Control (PAEC) to reduce the number of application events that may cause a processor to exit a low power consumption state. In one embodiment, PAEC may be utilized in mobile devices or any other type of computing device. In an embodiment, PAEC techniques may leverage hardware (e.g., SoC (System on Chip) or On-Die System Fabric (OSF)) to assign priorities to applications ("apps") and associate these application priorities or applications with platform subsystem states (modes), e.g., to control platform events generated by both platform-power-aware and/or platform-power-unaware applications based on priority and/or policy configuration, e.g., without compromising QOS (Quality Of Service) or user experience.
In one embodiment, PAEC provides fine grain power management by associating applications with the platform sub-system(s), e.g., to provide a mechanism to specify and/or prioritize which apps may wake the system or processor, which apps must/may run after system-wakeup or processor-wakeup, etc., without impacting the QOS requirements and/or user experience. In some embodiments, resumption of one or more applications (after a platform/system and/or a processor have entered a low power consumption state) may be restricted by the PAEC based on some policy (also referred to herein interchangeably as configuration) information or settings. This information may be adaptive and change during runtime in some embodiments. Furthermore, this policy information may include information regarding whether, in which order, when, and/or which of the one or more applications and/or their associated sub-system(s) are to be woken once the platform/system and/or the processor exits from the lower power consumption state. In an embodiment, the policy information may also indicate and/or prioritize which application and/or sub-system may wake the system. The techniques discussed herein may be used in any type of a computing system, such as the systems discussed with reference to Figs. 1-2 and 5-6. More particularly, Fig. 1 illustrates a block diagram of a computing system 100, according to an embodiment of the invention. The system 100 may include one or more processors 102-1 through 102-N (generally referred to herein as "processors 102" or "processor 102"). The processors 102 may communicate via an interconnection network or bus 104. Each processor may include various components, some of which are discussed only with reference to processor 102-1 for clarity. Accordingly, each of the remaining processors 102-2 through 102-N may include the same or similar components discussed with reference to the processor 102-1.
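The policy information described above might be organized as in the following sketch. The application names, field names, and priority scheme are hypothetical illustrations, not taken from the disclosure:

```python
# Hypothetical PAEC-style policy table: per-application priority,
# whether the app may wake the platform, and whether it may resume
# after a wake-up. Lower priority numbers rank higher in this sketch.
POLICY = {
    "voip":    {"priority": 0, "may_wake": True,  "resume_on_wake": True},
    "email":   {"priority": 1, "may_wake": True,  "resume_on_wake": True},
    "updater": {"priority": 2, "may_wake": False, "resume_on_wake": False},
}

def allowed_to_wake(app):
    """Gate a wake event: only applications the policy marks as
    wake-capable may bring the processor out of its low power state."""
    return POLICY.get(app, {}).get("may_wake", False)

def resume_order(apps):
    """Order the applications permitted to resume after wake-up,
    highest priority first."""
    runnable = [a for a in apps if POLICY.get(a, {}).get("resume_on_wake")]
    return sorted(runnable, key=lambda a: POLICY[a]["priority"])
```

Under this sketch, a background updater neither wakes the platform nor resumes automatically, while a latency-sensitive app is serviced first, matching the stated goal of limiting wake events without hurting QOS.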
In an embodiment, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as "cores 106" or more generally as "core 106"), a shared cache 108, a router 110, and/or a logic 120. The processor cores 106 may be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared and/or private caches (such as cache 108), buses or interconnections (such as a bus or interconnection network 112), memory controllers (such as those discussed with reference to Figs. 5-6), or other components. In one embodiment, the router 110 may be used to communicate between various components of the processor 102-1 and/or system 100. Moreover, the processor 102-1 may include more than one router 110. Furthermore, the multitude of routers 110 may be in communication to enable data routing between various components inside or outside of the processor 102-1. The shared cache 108 may store data (e.g., including instructions) that are utilized by one or more components of the processor 102-1, such as the cores 106. For example, the shared cache 108 may locally cache data stored in a memory 114 for faster access by components of the processor 102. In an embodiment, the cache 108 may include a mid-level cache (such as a level 2 (L2), a level 3 (L3), a level 4 (L4), or other levels of cache), a last level cache (LLC), and/or combinations thereof. Moreover, various components of the processor 102-1 may communicate with the shared cache 108 directly, through a bus (e.g., the bus 112), and/or a memory controller or hub. As shown in Fig. 1, in some embodiments, one or more of the cores 106 may include a level 1 (L1) cache 116-1 (generally referred to herein as "L1 cache 116").
In one embodiment, the PAEC logic 120 may reduce the number of application events that may cause a processor/platform to exit a low power consumption state and/or restrict resumption of operations by applications (and powering on of their corresponding sub-systems) after a processor/platform exits a low power consumption state. Logic 120 may assign priority to applications ("apps") that may be stored in memory 114 and may further associate the apps with platform sub-system states (modes), e.g., to control platform events generated by both platform-power-aware and/or platform-power-unaware applications based on application priority and/or policy configuration, e.g., without compromising QOS (Quality Of Service) or user experience. In some embodiments, operations performed by logic 120 may be controlled or configured via OS and/or software application(s) (e.g., that may be stored in the memory 114), e.g., per user or Original Equipment Manufacturer (OEM) (based on information from a User Interface (e.g., UI 314 of Fig. 3) in some embodiments). Additionally, information relating to the application priority and/or application policy configuration may be stored in any of the memories discussed herein, including, for example, memory 114 and/or caches 108/116, etc. Fig. 2 illustrates a block diagram of portions of a processor core 106 and other components of a computing system, according to an embodiment of the invention. In one embodiment, the arrows shown in Fig. 2 illustrate the flow direction of instructions through the core 106. One or more processor cores (such as the processor core 106) may be implemented on a single integrated circuit chip (or die) such as discussed with reference to Fig. 1. Moreover, the chip may include one or more shared and/or private caches (e.g., cache 108 of Fig. 1), interconnections (e.g., interconnections 104 and/or 112 of Fig. 1), control units, memory controllers, or other components. As illustrated in Fig.
2, the processor core 106 may include a fetch unit 202 to fetch instructions (including instructions with conditional branches) for execution by the core 106. The instructions may be fetched from any storage devices such as the memory 114 and/or the memory devices discussed with reference to Figs. 5-6. The core 106 may also include a decode unit 204 to decode the fetched instruction. For instance, the decode unit 204 may decode the fetched instruction into a plurality of uops (micro-operations). Additionally, the core 106 may include a schedule unit 206. The schedule unit 206 may perform various operations associated with storing decoded instructions (e.g., received from the decode unit 204) until the instructions are ready for dispatch, e.g., until all source values of a decoded instruction become available. In one embodiment, the schedule unit 206 may schedule and/or issue (or dispatch) decoded instructions to an execution unit 208 for execution. The execution unit 208 may execute the dispatched instructions after they are decoded (e.g., by the decode unit 204) and dispatched (e.g., by the schedule unit 206). In an embodiment, the execution unit 208 may include more than one execution unit. The execution unit 208 may also perform various arithmetic operations such as addition, subtraction, multiplication, and/or division, and may include one or more arithmetic logic units (ALUs). In an embodiment, a co-processor (not shown) may perform various arithmetic operations in conjunction with the execution unit 208. Further, the execution unit 208 may execute instructions out-of-order. Hence, the processor core 106 may be an out-of-order processor core in one embodiment. The core 106 may also include a retirement unit 210. The retirement unit 210 may retire executed instructions after they are committed.
In an embodiment, retirement of the executed instructions may result in processor state being committed from the execution of the instructions, physical registers used by the instructions being de-allocated, etc. The core 106 may also include a bus unit 214 to enable communication between components of the processor core 106 and other components (such as the components discussed with reference to Fig. 1) via one or more buses (e.g., buses 104 and/or 112). The core 106 may also include one or more registers 216 to store data accessed by various components of the core 106 (such as values related to assigned app priorities and/or sub-system state (mode) associations). Furthermore, even though Fig. 1 illustrates the PAEC logic 120 to be coupled to the core 106 via interconnect 112, in various embodiments the PAEC logic 120 may be located elsewhere, such as inside the core 106, coupled to the core via bus 104, etc. Moreover, the current generation of smart phone and netbook platforms may support granular power management via OSPM (Operating System Power Management), PMU (Power Management Unit), and SCU (System Controller Unit). The SCU, along with the Operating System, may provide the Always On Always Connected (AOAC) capability to the platform. Based on the OS power manager's guidance, the SCU may determine the correct power level for different sub-systems (including the CPU (Central Processing Unit) or processor) in the platform. External events, like a timer interrupt, an interrupt from the Communications (Comms) module, etc., may be forwarded by the SCU to the CPU, thereby waking up the CPU. Apart from subsystem interrupts, the CPU also may be woken up by applications (apps) due to timers or events to provide AOAC functionality. These wakes reduce the residency time of the CPU in the sleep or deep sleep state, resulting in additional power consumption.
Also, platform-power-unaware apps may be active, resulting in waking of the CPU and other sub-systems even though the power manager entity has put the platform into standby/sleep mode. In addition, applications may set timers and wake up the CPU periodically even though there is no change to a resource under consideration. Furthermore, some current platforms may support coalescing of external events, and wait/deliver the events (wakes) based on some wake configuration. Current implementations generally have no way to assign priority to the applications in the platform and associate these application priorities with different operating modes of the platform (such as browsing, video playback, etc.), and as a result, apps may be frozen/thawed - put into sleep/deep sleep/standby state, forced to be in a suspended state, or allowed to run. In addition, there is generally no existing mechanism to specify and prioritize which apps may wake the system from the Suspended state and also which apps must/may run once the platform wakes up, etc. For example, in some current systems, the SCU controls only the sub-system states and not the apps associated with them. Also, current methodologies generally fail to consider the QOS or user experience impact on applications/sub-systems that are forced into a sleep/standby state. Fig. 3 illustrates a block diagram of a system 300 in which PAEC techniques may be implemented, according to some embodiments. To provide the compute and storage capability, system 300 may include a host CPU (or GFX (Graphics)) 302 (such as the processors discussed with reference to Figs. 1-2 and 5-6), memory 304 (such as the memories discussed with reference to Figs. 1-2 and 5-6), and drives (e.g., as part of sub-systems 1 through X). Generally, the sub-systems shown (e.g., 1, 2, through X) may include any component in a computing system, such as the components discussed with reference to Figs.
1-2 and 4-6, that are capable of being power gated and/or capable of waking a computing system/platform and/or processor. Furthermore, system 300 may include a display controller 308 to provide display capabilities, a hardware Security Engine 310 to provide any necessary cryptographic operations and/or a tamper-proof execution environment, a PAEC component 312 implemented as an OS component to run inside the OS 313 (wherein PAEC 312 may be tightly integrated with the scheduler of the OS 313 and an OS power manager in one embodiment, and may have the ability to halt/freeze/thaw a currently-running process/program and resume it later in some embodiments), a PAEC UI (User Interface) 314 (which may be an application component in accordance with one embodiment) to provide the ability for an administrator or user to specify priorities and/or associate them with the modes of the sub-systems in the platform, a Secure Storage 316 to provide tamper-proof secure storage for the PAEC policy information configured by the user/administrator, and a SCU (System Controller Unit) and/or PMU (Power Management Unit) 318 to provide fine-grained platform power management support. Fig. 4 illustrates a flow diagram of a method for implementing PAEC, according to some embodiments. In an embodiment, Fig. 4 illustrates the operation of the PAEC logic 120, PAEC component 312, and/or PAEC UI 314 in accordance with some embodiments. Furthermore, the operations discussed with reference to Fig. 4 may be performed by one or more components of Figs. 1-3 and 5-6. Referring to Figs. 1-4, once PAEC functionality is enabled at 402 and/or the PAEC UI is invoked at 403 (e.g., by a user/OEM/OS/etc. and per some stored value such as a bit), the PAEC UI 314 may provide the user with the current policy settings stored in the secure storage 316 at 404. PAEC UI 314 may provide the user with option(s) to change the policy and/or priority settings at 406.
At 408, PAEC UI 314 may allow a user or administrator to assign priority to the applications and associate them with sub-system operating mode(s) (e.g., Browsing, Video playback, etc.), e.g., to update the policy settings. In an embodiment, priority may be assigned by an OEM, OS, or apps provider at 408. Furthermore, priority may be determined and assigned based on QOS API (Application Program Interface) requirements placed by the app during application registration, in an embodiment. Power-aware apps may use the QOS API to specify their QOS requirements, which the PAEC mechanism (e.g., items 120 or 312) uses, e.g., as a vector, to determine the priority of the apps. Based on the policy settings, PAEC determines the threshold priorities which allow wake events and builds a list of apps that are to be frozen after system/platform resume at 410. The threshold priority may also determine how long PAEC may defer events before it wakes up the CPU in an embodiment. At 412, when the platform is about to enter a (e.g., S0ix) platform low power state, all applications are frozen and the process execution halted (e.g., by PAEC mechanism 120 or 312), e.g., based on some priority scheme and policy settings. "S0ix" generally refers to improved idle power state(s) achieved by platform-level power management that is event driven (e.g., based on OS or software application input) instead of a traditional idle power state that is periodic or based on polled activity.
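The threshold step at 410-412 amounts to a partition: apps below the threshold priority go on the freeze list before the platform enters the low power state, while apps at or above it retain the ability to wake the CPU. A minimal sketch, with assumed names (the patent does not specify this data layout):

```python
def build_freeze_list(app_priorities, threshold):
    """Partition apps around a threshold priority (sketch of steps 410/412).

    app_priorities: dict mapping app name -> assigned priority (higher = more important).
    Returns (frozen, wake_allowed): apps below the threshold are frozen before
    the platform enters the low-power state; the remaining apps may wake it.
    """
    frozen = sorted(n for n, p in app_priorities.items() if p < threshold)
    wake_allowed = sorted(n for n, p in app_priorities.items() if p >= threshold)
    return frozen, wake_allowed
```

As the text notes, the same threshold can also parameterize how long events from the frozen group are deferred before the CPU is finally woken.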
In some embodiments, at least some of the power consumption states discussed herein may be in accordance with those defined under the Advanced Configuration and Power Interface (ACPI) specification, Revision 4.0a, April 5, 2010, including, for example, C0 which may indicate the processor is operating, C1 which may indicate the processor is not executing instructions but may return to an executing state almost instantaneously, C2 which may indicate the processor is to maintain all software-visible information but may take longer to return to a full executing state, C3 which may indicate the processor is asleep and does not need to keep its cache coherent, etc. In one embodiment, the PAEC mechanism (e.g., items 120 or 312) freezes application(s) based on the information obtained during application registration or invocation. In some embodiments, at 412, for exceptional apps that PAEC should not freeze, PAEC may be configured to assign the highest/lowest priority available to those applications to allow for their wake events to land in the CPU accordingly. PAEC may send a notification about these apps to the user (if configured), for example, via the PAEC UI 314, or log information about these apps. This allows the user to override the default settings in the future. At 414, based on the configuration settings, the PAEC mechanism (e.g., items 120 or 312) may allow or restrict selective apps to wake the system/processor or be run post resume (from a low power consumption state such as S0ix) to keep the corresponding sub-system in a low-power state and increase the CPU residency in the low power state. Also, the PAEC mechanism (e.g., items 120 or 312) may keep track of the wake events during platform (e.g., S0ix) low power consumption state(s) to provide feedback on the policy settings to the policy manager for fine tuning of the parameters. This allows PAEC to be adaptive, e.g., by keeping track of the wake events during runtime.
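The adaptive feedback at 414 can be sketched as a per-app wake counter whose totals feed back into the policy manager. This is an illustrative model only; the class and method names are assumptions, and a real implementation would record the events in platform firmware or the OS power manager:

```python
class WakeTracker:
    """Counts wake events per app during low-power residency (sketch of the
    adaptive feedback at 414; names are hypothetical)."""

    def __init__(self):
        self.counts = {}

    def record(self, app: str) -> None:
        """Log one wake event attributed to an app."""
        self.counts[app] = self.counts.get(app, 0) + 1

    def noisiest(self):
        """The app generating the most wakes -- a candidate for a lower
        priority when the policy manager fine-tunes the parameters."""
        if not self.counts:
            return None
        return max(self.counts, key=self.counts.get)
```

Feeding `noisiest()` (or the full count table) back into the policy settings is what makes the mechanism adaptive at runtime, as the paragraph above describes.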
In various embodiments, PAEC establishes a relationship between applications and sub-system power states to provide greater flexibility to OS/applications/Power Manager logic to reduce power in a very granular fashion. Also, PAEC may be adaptive and may not impact the QOS or user experience in the platform. In some embodiments, PAEC is configurable and may be integrated with other components, such as anti-malware programs, parental controls, etc., to restrict specific apps. PAEC may reduce wakes and keep a CPU in longer idle states. App developers may leverage the QOS API to improve user experience with enhanced power savings. In some embodiments, PAEC associates applications with platform sub-system operational modes and provides options to wake/run selective applications after resuming from low power states (such as wakes from S0ix states). In an embodiment, PAEC is capable of masking wake events from selective sub-systems/apps based on policy settings. Further, PAEC may coalesce and deliver low priority wakes at a later time frame. And, PAEC may freeze applications based on their priority settings and/or their associated sub-system status, e.g., to avoid sub-systems being turned on during low power (e.g., S0ix) states, without compromising user experience or QOS. There may also be more user control of the configuration "knobs" on a per-application basis (for security optimization, etc.). And, PAEC may provide the ability to securely store the policy settings on the secure storage. In some embodiments, PAEC may provide several advantages, including one or more of the following: (1) An ability to assign priorities to the applications in the platform and associate the applications with the sub-system operating modes. Priorities may be assigned by an apps provider/system administrator/service provider or based on user configurable policy settings as well.
Moreover, an embodiment allows apps to be categorized based on their priority, for example, to provide easier app management. (2) PAEC may be configured to halt/freeze apps that are masked out in an overlay region to avoid their events. (3) PAEC provides a mechanism to wake up selective applications and their associated sub-systems based on configuration settings. PAEC provides fine granular control from an OS/app perspective. (4) PAEC may be adaptive - e.g., it may track the wakes during the suspended state and provide feedback to fine tune the parameters. (5) An increase in the residency of the CPU and other sub-systems in the deepest low power state. (6) An ability to store the user configuration policy in tamper-proof, secure storage. Fig. 5 illustrates a block diagram of a computing system 500 in accordance with an embodiment of the invention. The computing system 500 may include one or more central processing unit(s) (CPUs) 502 or processors that communicate via an interconnection network (or bus) 504. The processors 502 may include a general purpose processor, a network processor (that processes data communicated over a computer network 503), or other types of a processor (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC)). Moreover, the processors 502 may have a single or multiple core design. The processors 502 with a multiple core design may integrate different types of processor cores on the same integrated circuit (IC) die. Also, the processors 502 with a multiple core design may be implemented as symmetrical or asymmetrical multiprocessors. In an embodiment, one or more of the processors 502 may be the same or similar to the processors 102 of Fig. 1. For example, one or more of the processors 502 may include the PAEC logic 120 discussed with reference to Figs. 1-4. Also, the operations discussed with reference to Figs. 1-4 may be performed by one or more components of the system 500.
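Before turning to the example systems of Figs. 5-6, the coalescing behavior summarized above — deferring low-priority wakes and delivering them in a batch at a later time frame — can be sketched as follows. This is an illustrative model only; in the embodiments described, deferral is handled by platform hardware/firmware, and the names here are assumptions:

```python
class WakeCoalescer:
    """Defers low-priority wake events so the CPU services several wakes per
    exit from the low-power state (illustrative sketch, not the patent's code)."""

    def __init__(self, threshold: int):
        self.threshold = threshold  # events below this priority are deferred
        self.pending = []

    def submit(self, priority: int, event: str) -> list:
        """A high-priority event wakes the CPU now and flushes the backlog;
        a low-priority event is queued and nothing is delivered yet."""
        if priority >= self.threshold:
            return [event] + self.flush()
        self.pending.append((priority, event))
        return []

    def flush(self) -> list:
        """Deliver all deferred events, highest priority first."""
        batch = [e for _, e in sorted(self.pending, key=lambda pe: -pe[0])]
        self.pending.clear()
        return batch
```

Batching the deferred events behind a single high-priority wake is what increases CPU residency in the idle state: one exit from low power services the whole backlog instead of each event forcing its own wake.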
A chipset 506 may also communicate with the interconnection network 504. The chipset 506 may include a memory control hub (MCH) 508. The MCH 508 may include a memory controller 510 that communicates with a memory 512 (which may be the same or similar to the memory 114 of Fig. 1). The memory 512 may store data, including sequences of instructions, that may be executed by the CPU 502, or any other device included in the computing system 500. For example, the memory 512 may store the PAEC 312, OS 313, and/or PAEC UI 314 discussed with reference to Figs. 3-4. In one embodiment of the invention, the memory 512 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Nonvolatile memory may also be utilized such as a hard disk. Additional devices may communicate via the interconnection network 504, such as multiple CPUs and/or multiple system memories. The MCH 508 may also include a graphics interface 514 that communicates with a display device 516. In one embodiment of the invention, the graphics interface 514 may communicate with the display device 516 via an accelerated graphics port (AGP). In an embodiment of the invention, the display 516 (such as a flat panel display) may communicate with the graphics interface 514 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display 516. The display signals produced by the display device may pass through various control devices before being interpreted by and subsequently displayed on the display 516. A hub interface 518 may allow the MCH 508 and an input/output control hub (ICH) 520 to communicate. The ICH 520 may provide an interface to I/O device(s) that communicate with the computing system 500.
The ICH 520 may communicate with a bus 522 through a peripheral bridge (or controller) 524, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 524 may provide a data path between the CPU 502 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses may communicate with the ICH 520, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 520 may include, in various embodiments of the invention, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, touch screen, camera, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or other devices. The bus 522 may communicate with an audio device 526, one or more disk drive(s) 528, and a network interface device 530 (which is in communication with the computer network 503). Other devices may communicate via the bus 522. Also, various components (such as the network interface device 530) may communicate with the MCH 508 in some embodiments of the invention. In addition, the processor 502 and the MCH 508 may be combined to form a single chip. Furthermore, the graphics accelerator 516 may be included within the MCH 508 in other embodiments of the invention. Furthermore, the computing system 500 may include volatile and/or nonvolatile memory (or storage). For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 528), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media that are capable of storing electronic data (e.g., including instructions). Fig.
6 illustrates a computing system 600 that is arranged in a point-to-point (PtP) configuration, according to an embodiment of the invention. In particular, Fig. 6 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The operations discussed with reference to Figs. 1-5 may be performed by one or more components of the system 600. As illustrated in Fig. 6, the system 600 may include several processors, of which only two, processors 602 and 604, are shown for clarity. The processors 602 and 604 may each include a local memory controller hub (MCH) 606 and 608 to enable communication with memories 610 and 612. The memories 610 and/or 612 may store various data such as those discussed with reference to the memory 512 of Fig. 5. In an embodiment, the processors 602 and 604 may be one of the processors 502 discussed with reference to Fig. 5. The processors 602 and 604 may exchange data via a point-to-point (PtP) interface 614 using PtP interface circuits 616 and 618, respectively. Also, the processors 602 and 604 may each exchange data with a chipset 620 via individual PtP interfaces 622 and 624 using point-to-point interface circuits 626, 628, 630, and 632. The chipset 620 may further exchange data with a graphics circuit 634 via a graphics interface 636, e.g., using a PtP interface circuit 637. At least one embodiment of the invention may be provided within the processors 602 and 604. For example, the PAEC logic 120 of Figs. 1-4 may be located within the processors 602 and 604. Other embodiments of the invention, however, may exist in other circuits, logic units, or devices within the system 600 of Fig. 6. Furthermore, other embodiments of the invention may be distributed throughout several circuits, logic units, or devices illustrated in Fig. 6. The chipset 620 may communicate with a bus 640 using a PtP interface circuit 641.
The bus 640 may communicate with one or more devices, such as a bus bridge 642 and I/O devices 643. Via a bus 644, the bus bridge 642 may communicate with other devices such as a keyboard/mouse/touchscreen/camera 645, communication devices 646 (such as modems, network interface devices, or other communication devices that may communicate with the computer network 503), audio I/O device 647, and/or a data storage device 648. The data storage device 648 may store code 649 that may be executed by the processors 602 and/or 604. In various embodiments of the invention, the operations discussed herein, e.g., with reference to Figs. 1-6, may be implemented as hardware (e.g., logic circuitry), software, firmware, or combinations thereof, which may be provided as a computer program product, e.g., including a (e.g., non-transitory) machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. The machine-readable medium may include a storage device such as those discussed with respect to Figs. 1-6. Additionally, such computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a bus, a modem, or a network connection). Reference in the specification to "one embodiment," "an embodiment," or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment(s) may be included in at least an implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not be all referring to the same embodiment. Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used.
In some embodiments of the invention, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other. Thus, although embodiments of the invention have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.
Some exemplary embodiments of a multi-chip module (MCM) power quad flat no-lead (PQFN) semiconductor package utilizing a leadframe for electrical interconnections have been disclosed. One exemplary embodiment comprises a PQFN semiconductor package comprising a leadframe, a driver integrated circuit (IC) coupled to the leadframe, a plurality of vertical conduction power devices coupled to the leadframe, and a plurality of wirebonds providing electrical interconnects, including at least one wirebond from a top surface electrode of one of the plurality of vertical conduction power devices to a portion of the leadframe, wherein the portion of the leadframe is electrically connected to a bottom surface electrode of another of the plurality of vertical conduction power devices. In this manner, efficient multi-chip circuit interconnections can be provided in a PQFN package using low cost leadframes.
1. A power quad flat no-lead (PQFN) semiconductor package comprising: a leadframe comprising a plurality of die pads; a driver integrated circuit (IC) coupled to a first die pad of the leadframe; a plurality of vertical conduction power devices including a first set of vertical conduction power devices and a second set of vertical conduction power devices, the first set of vertical conduction power devices being coupled to a second die pad of the leadframe, and the second set of vertical conduction power devices being individually coupled to respective die pads of the leadframe; and a plurality of wirebonds providing electrical interconnection among the driver IC, the plurality of vertical conduction power devices, and a plurality of outer leads of the leadframe, wherein a top surface electrode of one of the first set of vertical conduction power devices is electrically connected to a plating on a portion of the leadframe, the portion of the leadframe serving as a routing feature electrically connected to another plating under a bottom surface electrode of one of the second set of vertical conduction power devices.

2. The PQFN semiconductor package of claim 1, wherein the package is configured as a full bridge power device.

3. The PQFN semiconductor package of claim 1, wherein the leadframe is selectively plated with silver to enhance adhesion.

4. The PQFN semiconductor package of claim 1, wherein the first set of vertical conduction power devices is located adjacent a first edge of the package, and wherein the second set of vertical conduction power devices is located adjacent a second edge of the package.

5. The PQFN semiconductor package of claim 1, wherein the plurality of vertical conduction power devices are six (6) in number, the first set of vertical conduction power devices being three (3) in number and the second set of vertical conduction power devices being three (3) in number.

6. The PQFN semiconductor package of claim 1, wherein the plurality of vertical conduction power devices comprise power MOSFETs.

7. The PQFN semiconductor package of claim 1, wherein the plurality of vertical conduction power devices comprise IGBTs.

8. The PQFN semiconductor package of claim 1, wherein the package has a thickness of 0.9 mm or less.

9. The PQFN semiconductor package of claim 1, wherein the package has a footprint of 12 mm by 12 mm or less.

10. The PQFN semiconductor package of claim 1, wherein the plurality of wirebonds comprise ball bonds in a bond stitch-on-ball (BSOB) configuration connected to power electrodes of the plurality of vertical conduction power devices.

11. A power quad flat no-lead (PQFN) semiconductor package comprising: a leadframe; a driver integrated circuit (IC) coupled to the leadframe; a plurality of vertical conduction power devices coupled to the leadframe; and a plurality of wirebonds providing electrical interconnection among the driver IC, the plurality of vertical conduction power devices, and a plurality of outer leads of the leadframe, the plurality of wirebonds including a first wirebond from a top surface electrode of one of the plurality of vertical conduction power devices to a plating on a portion of the leadframe, wherein the portion of the leadframe serves as a routing feature electrically connected to another plating under a bottom surface electrode of another of the plurality of vertical conduction power devices.

12. The PQFN semiconductor package of claim 11, wherein the package is configured as a full bridge power device.

13. The PQFN semiconductor package of claim 11, wherein the leadframe is selectively plated with silver to enhance adhesion.

14. The PQFN semiconductor package of claim 11, wherein the vertical conduction power devices are divided into a first set on a single die pad located near a first edge of the package and a second set on separate die pads located adjacent a second edge of the package, the first set comprising the one of the plurality of vertical conduction power devices, and the second set comprising the another of the plurality of vertical conduction power devices.

15. The PQFN semiconductor package of claim 11, wherein the plurality of vertical conduction power devices are six (6) in number.

16. The PQFN semiconductor package of claim 11, wherein the plurality of vertical conduction power devices comprise power MOSFETs.

17. The PQFN semiconductor package of claim 11, wherein the plurality of vertical conduction power devices comprise IGBTs.

18. The PQFN semiconductor package of claim 11, wherein the package has a thickness of 0.9 mm or less.

19. The PQFN semiconductor package of claim 11, wherein the package has a footprint of 12 mm by 12 mm or less.

20. The PQFN semiconductor package of claim 11, wherein the first wirebond comprises a ball bond in a bond stitch-on-ball (BSOB) configuration.
Multi-Chip Module (MCM) Power Quad Flat No-Lead (PQFN) Semiconductor Package Utilizing a Leadframe for Electrical Interconnection

Background of the invention

This application claims the benefit of and priority to the pending provisional application entitled "Low Cost Leadframe Based High Power Density Full Bridge Power Device," Serial No. 61/459,527, filed on Dec. 13, 2010. The disclosure of this pending provisional application is hereby incorporated by reference in its entirety herein.

1. Field of the invention

The present invention relates generally to semiconductor devices. More specifically, the present invention relates to multi-chip packaging of semiconductor devices.

2. Background art

A package that combines several semiconductor components into a single package can simplify circuit design, reduce cost, and provide higher efficiency and improved performance by keeping related and interdependent circuit components close together. Compared to the use of discrete components, such integrated multi-chip device packages promote application integration and higher electrical and thermal performance. This trend toward higher circuit integration has led to the development and use of power quad flat no-lead (PQFN) packages, which can include multi-chip modules (MCMs) with larger form factors such as 12 mm by 12 mm. By exposing a large surface area die pad on the bottom surface of the PQFN package, performance is optimized for high power density circuit applications requiring efficient heat dissipation. One advantage of the PQFN package is low cost manufacturing, because a simple, low cost lead frame can be used as the substrate material rather than an expensive multilayer substrate. However, due to this single layer configuration, electrical wiring and routing present a particular challenge, especially for the larger and more complex multi-chip modules supported by a 12 mm by 12 mm form factor.
A package design that directly interconnects power devices such as power MOSFETs and IGBTs through a multilayer substrate cannot be implemented directly using a simple single layer lead frame. Because most of the top surface electrical interconnections must be wire bonded, the wire layout must be carefully designed to prevent wire shorts. Although increasing the package thickness reduces the risk of wire shorts, it increases the risk of package cracking, which is undesirable for maintaining package reliability. Therefore, a unique, cost-effective, and reliable solution is needed to support the efficient design and operation of an MCM PQFN package.

Summary of the invention

A multi-chip module (MCM) power quad flat no-lead (PQFN) semiconductor package that utilizes a leadframe for electrical interconnection, substantially as shown in and/or described in connection with at least one of the figures, and as set forth more completely in the claims.

Brief description of the drawings

FIG. 1A illustrates a top plan view of a semiconductor package in accordance with an embodiment of the present invention. FIG. 1B illustrates a top plan view of a semiconductor package including wire bonds in accordance with an embodiment of the present invention. FIG. 1C illustrates a bottom plan view of a semiconductor package in accordance with an embodiment of the present invention. FIG. 2 illustrates a cross-sectional view of a portion of a semiconductor package in accordance with an embodiment of the present invention.

Detailed description of the invention

The present application is directed to a multi-chip module (MCM) power quad flat no-lead (PQFN) semiconductor package that utilizes a leadframe for electrical interconnection. The following description contains specific information pertaining to the implementation of the present invention. One skilled in the art will recognize that the present invention may be implemented in a manner different from that specifically discussed in the present application.
Moreover, some of the specific details of the invention are not discussed so as not to obscure the invention. The specific details not described in the present application are within the knowledge of a person of ordinary skill in the art. The drawings in the present application and their accompanying detailed description are directed to merely exemplary embodiments of the invention. Other embodiments of the invention that utilize the principles of the invention are not specifically described in the present application and are not specifically illustrated by the present drawings. FIG. 1A illustrates a top plan view of a semiconductor package in accordance with an embodiment of the present invention. In the present example, the semiconductor package may comprise a 12 mm by 12 mm PQFN package (i.e., having a footprint of 12 mm by 12 mm) including 27 numbered external leads, or external leads 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, and 27. However, alternative embodiments may utilize different package sizes and may include a different number of external leads, as required by the particular application. As shown in FIG. 1A, a driver integrated circuit (IC), or driver IC 130, is located at the center of the package. Driver IC 130 may comprise a high voltage integrated circuit (HVIC) driver adapted to drive six power devices in a full bridge configuration, such as a "5th generation" HVIC available from International Rectifier. Accordingly, driver IC 130 may be connected to gate electrodes 141a, 141b, 141c, 141d, 141e, and 141f of respective vertical conduction power devices 140a, 140b, 140c, 140d, 140e, and 140f, which may comprise, for example, power metal oxide semiconductor field effect transistors (power MOSFETs), such as fast reverse epitaxial diode field effect transistors (FREDFETs), or insulated gate bipolar transistors (IGBTs).
For example, vertical conduction power devices 140a through 140c may comprise MOSFET devices forming the high side FETs of a full bridge power device, while vertical conduction power devices 140d through 140f may comprise MOSFET devices forming the low side FETs of the full bridge power device. For the sake of clarity, the wire bonds that provide the connections between driver IC 130 and vertical conduction power devices 140a through 140f are omitted from FIG. 1A. Moreover, while the figures illustrate a package providing a full bridge power device, alternative embodiments may provide other packaged device configurations, depending on the requirements of the particular application. Leadframe 160 may comprise a material having high thermal and electrical conductivity, such as copper (Cu) alloy C194 available from Olin. The large area bottom surface of leadframe 160 can be exposed for optimal conductivity and heat dissipation, as further shown and discussed in conjunction with FIG. 1C. The top surface of leadframe 160 can also be selectively plated with a material to enhance adhesion to the device chips and wires. For example, platings 150a, 150b, 150c, 150d, 150e, 150f, and 150g may comprise a silver (Ag) plating selectively applied to leadframe 160, which may be provided by a company such as QPL Limited. Mold compound 165 may comprise a low flexural modulus mold compound such as CEL9220ZHF10 (v79). As shown in FIG. 1A, vertical conduction power devices 140a through 140c all share the same die pad located near the top edge of the package and are coupled to leadframe 160 through plating 150a. The bottom drain electrodes of the high side MOSFETs are therefore all connected together on the same die pad. On the other hand, vertical conduction power devices 140d through 140f, comprising the low side MOSFETs, are each placed on separate die pads near the right edge of the package.
Solder or conductive adhesive, such as the silver filled QMI 529HT available from Henkel Corporation, may be used to join the bottom surfaces of vertical conduction power devices 140a through 140c to plating 150a, vertical conduction power device 140d to plating 150d, vertical conduction power device 140e to plating 150c, vertical conduction power device 140f to plating 150b, and driver IC 130 to plating 150g. Driver IC 130 and vertical conduction power devices 140a through 140f are thus arranged in the package in a manner optimized for electrical conductivity. To complete the full bridge power circuit of FIG. 1A, source electrode 142a needs to be connected to the drain electrode of vertical conduction power device 140d, source electrode 142b needs to be connected to the drain electrode of vertical conduction power device 140e, source electrode 142c needs to be connected to the drain electrode of vertical conduction power device 140f, and source electrodes 142d, 142e, and 142f need to be connected together. However, directly routing wires to provide the necessary connections can result in jumper bonds and potential wire shorts. In addition, because the package is intended for high power applications, the long wire lengths required may adversely affect electrical and thermal performance. Accordingly, turning to FIG. 1B, FIG. 1B illustrates a top plan view of a semiconductor package including wire bonds in accordance with an embodiment of the present invention. As shown in FIG. 1B, thin wires are utilized for gate connections, current sensing, and other input/output (I/O) functions, as represented by wire bonds 170b. These may comprise, for example, 1.3 mil diameter G1 type gold (Au) wires. Thicker wires are utilized for power connections, as represented by wire bonds 170a.
These may comprise, for example, 2.0 mil diameter copper (Cu) wires, such as the LD wire available from Kulicke & Soffa. Thicker wires such as wire bond 170a can be attached using a bond stitch on ball (BSOB) bonding method. As shown in FIG. 1B, multiple wire bonds, for example two wire bonds, can be arranged in parallel to provide additional current handling capability. The required connections are therefore provided by the wire bonds and leadframe 160, as shown in FIG. 1B, completing the circuit of FIG. 1A and the routing to external leads 1 through 27. Gate electrodes 141a through 141f are each directly connected to driver IC 130 using gold wire bonds. Because vertical conduction power devices 140c and 140f are already in close proximity, direct wire bonding using a pair of copper wires can be used between source electrode 142c and plating 150b. However, routing through leadframe 160 may be advantageous for connections between devices that are farther apart. Because leadframe 160 can comprise a highly conductive material, such as a copper alloy, leadframe 160 can provide a more efficient conduction path than direct wire routing. In addition, problems such as the risk of wire shorts due to jumper bonds are avoided. For example, to connect source electrode 142b to the drain electrode of vertical conduction power device 140e, a pair of thick copper wires is bonded between the top of source electrode 142b and the top of plating 150e. This connection is shown in greater detail in conjunction with the discussion of FIG. 2, which presents a cross-sectional view along line 102. Leadframe 160 underneath plating 150e then continues to plating 150c, completing the connection to the drain electrode of vertical conduction power device 140e.
In a similar manner, source electrode 142a is bonded to plating 150f by a pair of thick copper wires; plating 150f is then connected through leadframe 160 to plating 150d, which is attached to the drain electrode of vertical conduction power device 140d. Thus, the electrical connections necessary to complete the package are achieved by using leadframe 160 as a routing device, advantageously avoiding jumper bonds. Moving to FIG. 1C, FIG. 1C illustrates a bottom plan view of a semiconductor package in accordance with an embodiment of the present invention. Flipping over the package shown in FIG. 1B reveals the layout shown in FIG. 1C, in which the exposed portions of the leadframe are visible. Thus, for example, leadframe portion 160a may correspond to the outline of plating 150a shown in FIG. 1B, and leadframe portion 160b may correspond to the outline of plating 150e shown in FIG. 1B. A large area of the package leadframe is therefore exposed at the bottom for effective heat dissipation and conduction. The exposed surface area may also be plated, for example with tin (Sn). The PQFN package can be used to advantage by designing a printed circuit board (PCB) with a matching land pattern accordingly. Turning now to FIG. 2, FIG. 2 illustrates a cross-sectional view of a portion of a semiconductor package in accordance with an embodiment of the present invention. More specifically, the cross-sectional view corresponds to a section along line 102 in FIG. 1B. As shown in FIG. 2, leadframe portions 260a and 260b correspond to leadframe portions 160a and 160b in FIG. 1C, vertical conduction device 240b corresponds to vertical conduction power device 140b in FIG. 1B, and source electrode 242b corresponds to source electrode 142b in FIG. 1B. Plating 250a corresponds to plating 150a in FIG. 1B, and plating 250e corresponds to plating 150e in FIG.
1B, and mold compound 265 corresponds to mold compound 165 in FIG. 1B. It should be noted that FIG. 2 is not necessarily drawn to scale. As shown in FIG. 2, drain electrode 243b of vertical conduction device 240b is coupled to leadframe portion 260a through conductive adhesive 235 and plating 250a. As previously discussed, conductive adhesive 235 may comprise a silver filled adhesive such as QMI 529HT. Source electrode 242b of vertical conduction device 240b is in turn connected to leadframe portion 260b through wire bond 270a and plating 250e. Wire bond 270a may comprise a 2.0 mil diameter copper (Cu) wire bonded in a BSOB manner. As previously noted, multiple wire bonds may be provided for additional current handling; this is not visible in FIG. 2 because the pair of wire bonds in FIG. 1B are parallel to each other. After die attach and wire bonding, the package may be sealed using mold compound 265. To provide resilience against package cracking, the height (or thickness) of the package defined by mold compound 265 may be kept thin, such as 0.9 mm or less. The cross section shown in FIG. 2 thus illustrates the electrical connection provided by wire bond 270a, which connects to source electrode 142b and plating 150e shown in FIG. 1B. The portion of leadframe 160 in FIG. 1B corresponding to leadframe portion 260b in FIG. 2 continues to the right to connect to plating 150c, thereby completing the connection to the drain of vertical conduction power device 140e. A similar connection scheme is applied to connect source electrode 142a to the drain of vertical conduction power device 140d. Thus, a multi-chip module (MCM) power quad flat no-lead (PQFN) semiconductor package that utilizes a leadframe for electrical interconnection has been described.
According to the present invention, even a complex package having multiple power devices can be integrated by utilizing a low cost lead frame for effective electrical interconnection. Compared to conventional packaging techniques, the innovative package of the present invention allows for a compact form factor, improved electrical and thermal conductivity, enhanced reliability, and cost effective manufacturing. From the above description of the invention, it is manifest that various techniques can be used for implementing the concepts of the present invention without departing from its scope. Moreover, while the invention has been described with specific reference to certain embodiments, a person of ordinary skill in the art would recognize that changes can be made in form and detail without departing from the spirit and the scope of the invention. As such, the described embodiments are to be considered in all respects as illustrative and not restrictive. It should also be understood that the invention is not limited to the particular embodiments described herein, but is capable of many rearrangements, modifications, and substitutions without departing from the scope of the invention.
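The routing scheme described above, in which the lead frame itself serves as a wiring layer so that each high side source reaches the corresponding low side drain without jumper bonds, can be illustrated as a simple connectivity check. The sketch below is an illustration only: it models the electrodes, platings, and lead frame portions named in the description as graph nodes, and the wire bonds, die attach joints, and lead frame continuity as edges; the graph itself is inferred from the text, not part of the package.

```python
from collections import defaultdict, deque

# Edges follow the connections described for FIG. 1B; names use the
# reference numerals from the text (e.g., source_142b is source electrode 142b).
edges = [
    ("source_142b", "plating_150e"),   # pair of thick Cu wires (BSOB)
    ("plating_150e", "plating_150c"),  # lead frame 160 used as routing device
    ("plating_150c", "drain_140e"),    # die attach of device 140e
    ("source_142a", "plating_150f"),   # pair of thick Cu wires
    ("plating_150f", "plating_150d"),  # lead frame 160 used as routing device
    ("plating_150d", "drain_140d"),    # die attach of device 140d
    ("source_142c", "plating_150b"),   # direct wire bond (devices adjacent)
    ("plating_150b", "drain_140f"),    # die attach of device 140f
]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def connected(start, goal):
    """Breadth-first search over the bond/lead-frame connectivity graph."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

# Each high side source must reach the matching low side drain.
for src, drn in [("source_142a", "drain_140d"),
                 ("source_142b", "drain_140e"),
                 ("source_142c", "drain_140f")]:
    print(src, "->", drn, connected(src, drn))
```

A check of this kind would confirm that the lead frame routing completes every required source-to-drain path while the three paths remain electrically separate, which is the property the wire layout must preserve.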
Technologies are provided in example embodiments for analyzing an encrypted network flow. The technologies include monitoring the encrypted network flow between a first node and a second node, the network flow initiated from the first node; duplicating the encrypted network flow to form a copy of the encrypted network flow; decrypting the copy of the encrypted network flow using a shared secret, the shared secret associated with the first node and the second node; and scanning the network flow copy for targeted data.
WHAT IS CLAIMED IS: 1. At least one machine accessible storage medium having instructions stored thereon for analyzing an encrypted network flow, the instructions, when executed on a machine, causing the machine to: monitor the encrypted network flow between a first node and a second node, the network flow initiated from the first node; duplicate the encrypted network flow to form a copy of the encrypted network flow; decrypt the copy of the encrypted network flow using a shared secret, the shared secret associated with the first node and the second node; and scan the network flow copy for targeted data. 2. The machine accessible storage medium of claim 1, further comprising instructions that, when executed on the machine, cause the machine to: extract the shared secret from the first node. 3. The machine accessible storage medium of claim 2, further comprising instructions that, when executed on the machine, cause the machine to: load a shared library into an application on the first node, wherein the application is accessing an encryption protocol session, and wherein the shared library allows access to the encryption protocol session through the application; and identify the shared secret in the encryption protocol session. 4. The machine accessible storage medium of claim 2, further comprising instructions that, when executed on the machine, cause the machine to: monitor a network flow at a network layer; identify an initiation of a handshake of an encryption protocol session; responsive to identifying the initiation, open a memory space of a process initiating the encryption protocol session; and identify the shared secret in the encryption protocol session within the memory space of the process. 5. The machine accessible storage medium of claim 1, further comprising instructions that, when executed on the machine, cause the machine to: delay the encrypted network flow; and forward the encrypted network flow. 6.
The machine accessible storage medium of claim 1, further comprising instructions that, when executed on the machine, cause the machine to: responsive to identifying targeted data in the network flow copy, terminate the encrypted network flow. 7. The machine accessible storage medium of any of claims 1-4 and 6, further comprising instructions that, when executed on the machine, cause the machine to: responsive to identifying targeted data in the network flow copy, decrypt the encrypted network flow using the shared secret; modify the unencrypted network flow to remove the targeted data; encrypt a modified network flow using the shared secret; and forward the modified network flow. 8. The machine accessible storage medium of any of claims 1-6, wherein the shared secret is at least one of a master secret, a pre-master secret, and a session context. 9. The machine accessible storage medium of any of claims 1-6, further comprising instructions that, when executed on the machine, cause the machine to: limit a number of encryption methods used to encrypt a network flow between the first node and the second node. 10. A method for analyzing an encrypted network flow, comprising: monitoring the encrypted network flow between a first node and a second node, the network flow initiated from the first node; duplicating the encrypted network flow to form a copy of the encrypted network flow; decrypting the copy of the encrypted network flow using a shared secret, the shared secret associated with the first node and the second node; and scanning the network flow copy for targeted data. 11. The method of claim 10, further comprising: extracting the shared secret from the first node. 12. The method of claim 10, further comprising: responsive to identifying targeted data in the network flow copy, terminating the encrypted network flow. 13.
The method of any of claims 10-12, further comprising: responsive to identifying targeted data in the network flow copy, decrypting the encrypted network flow using the shared secret; modifying the unencrypted network flow to remove the targeted data; encrypting a modified network flow using the shared secret; and forwarding the modified network flow. 14. An apparatus, comprising: a security module configured to: monitor an encrypted network flow between a first node and a second node, the network flow initiated from the first node; duplicate the encrypted network flow to form a copy of the encrypted network flow; decrypt the copy of the encrypted network flow using a shared secret, the shared secret associated with the first node and the second node; and scan the network flow copy for targeted data. 15. The apparatus of claim 14, further comprising: an extraction module configured to extract the shared secret from the first node. 16. The apparatus of claim 15, wherein the extraction module being configured to extract the shared secret from the first node comprises the extraction module being configured to: load a shared library into an application on the first node, wherein the application is accessing an encryption protocol session, and wherein the shared library allows access to the encryption protocol session through the application; and identify the shared secret in the encryption protocol session. 17. The apparatus of claim 15, wherein the extraction module being configured to extract the shared secret from the first node comprises the extraction module being configured to: monitor a network flow at a network layer; identify an initiation of a handshake of an encryption protocol session; responsive to identifying the initiation, open a memory space of a process initiating the encryption protocol session; and identify the shared secret in the encryption protocol session within the memory space of the process. 18.
The apparatus of claim 14, wherein the security module is further configured to: delay the encrypted network flow; and forward the encrypted network flow. 19. The apparatus of claim 14, wherein the security module is further configured to: responsive to identifying targeted data in the network flow copy, terminate the encrypted network flow. 20. The apparatus of any of claims 14-17 and 19, wherein the security module is further configured to: responsive to identifying targeted data in the network flow copy, decrypt the encrypted network flow using the shared secret; modify the unencrypted network flow to remove the targeted data; encrypt a modified network flow using the shared secret; and forward the modified network flow. 21. The apparatus of any of claims 14-19, wherein the shared secret is at least one of a master secret, a pre-master secret, and a session context. 22. The apparatus of any of claims 14-19, wherein the security module is further configured to: limit a number of encryption methods used to encrypt a network flow between the first node and the second node. 23. At least one machine accessible storage medium having instructions stored thereon for extracting a shared secret from a first node, the instructions, when executed on a machine, causing the machine to: monitor a network flow at a network layer; identify an initiation of a handshake of an encryption protocol session; responsive to identifying the initiation, open a memory space of a process initiating the encryption protocol session; and identify the shared secret in the encryption protocol session within the memory space of the process. 24. The at least one machine accessible storage medium of claim 23, further comprising instructions that, when executed on the machine, cause the machine to: extract the shared secret from the memory space of the process. 25.
The at least one machine accessible storage medium of any of claims 23-24, further comprising instructions that, when executed on the machine, cause the machine to: send the shared secret to a security module.
ENCRYPTED DATA INSPECTION IN A NETWORK ENVIRONMENT TECHNICAL FIELD This disclosure relates in general to the field of network security and, more particularly, to inspecting encrypted data in a network environment. BACKGROUND The field of network security has become increasingly important in today's society. The Internet has enabled interconnection of different computer networks all over the world. However, the Internet has also presented many opportunities for malicious operators to exploit these networks. Certain types of malicious software (e.g., bots) can be configured to receive commands from a remote operator once the software has infected a host computer. The software can be instructed to perform any number of malicious actions, such as sending out spam or malicious emails from the host computer, stealing sensitive information from a business or individual associated with the host computer, propagating to other host computers, and/or assisting with distributed denial of service attacks. In addition, the malicious operator can sell or otherwise give access to other malicious operators, thereby escalating the exploitation of the host computers. Thus, the ability to effectively protect and maintain stable computers and systems continues to present significant challenges for component manufacturers, system designers, and network operators. Enterprise environments deploy numerous network management tools, including firewalls, network intrusion detection/prevention (NIDS/NIPS) systems, traffic shapers, and other systems. A number of these systems rely on inspection of network traffic in order to provide a wide array of services, including the detection/prevention of malware propagation, ensuring corporate intellectual property is not leaked outside well defined enterprise boundaries, as well as general auditing and network management functions. Network traffic may also be encrypted using protocols such as Secure Sockets Layer (SSL) / Transport Layer Security (TLS). 
BRIEF DESCRIPTION OF THE DRAWINGS To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which: FIGURE 1 is a simplified block diagram of a network environment in which a firewall may intercept a network flow in accordance with an embodiment; FIGURE 2 is an example illustration of a network environment 200 in accordance with an embodiment; FIGURE 3 is an illustration of a network environment with SSL/TLS handshake communications in accordance with an embodiment; FIGURE 4 is a block diagram of a network environment 400 for SSL/TLS in accordance with an advantageous embodiment; FIGURE 5 is an illustration of a security module as a proxy in accordance with an illustrative embodiment; FIGURE 6 is an illustration of a data diagram in accordance with an embodiment; FIGURE 7 is a simplified flowchart illustrating a process for extracting a shared secret using a shared library in accordance with an embodiment; FIGURE 8 is a simplified flowchart illustrating a process for extracting a shared secret from a memory space in accordance with an embodiment; FIGURE 9 is a simplified flowchart illustrating a process for analyzing an encrypted network flow in accordance with an embodiment; FIGURE 10 illustrates a memory coupled to a processor in accordance with an embodiment; and FIGURE 11 illustrates a computing system that is arranged in a point-to-point (PtP) configuration according to an embodiment. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Turning to FIGURE 1, FIGURE 1 is a simplified block diagram of a network environment in which a firewall may intercept a network flow in accordance with an embodiment.
In the embodiment illustrated in FIGURE 1, network environment 100 can include Internet 102, client 104, a firewall 106, a policy server 108, a mail server 110, and a web server 112. In general, client 104 may be any type of termination node in a network connection, including but not limited to a desktop computer, a server, a laptop, a mobile device, a mobile telephone, or any other type of device that can receive or establish a connection with another node, such as mail server 110 or web server 112. Firewall 106 may control communications between client 104 and other nodes attached to Internet 102 or another network, such as by blocking unauthorized access while permitting authorized communications. In some instances, firewall 106 may be coupled to or integrated with an intrusion prevention system, network access control device, web gateway, email gateway, mobile device, or any other type of gateway between Internet 102 and client 104. Moreover, the location of firewall 106 in the routing topology close to client 104 is arbitrary. Policy server 108 may be coupled to or integrated with firewall 106, and may be used to manage client 104 and to administer and distribute network policies. Thus, in this example embodiment, client 104 may communicate with servers attached to Internet 102, such as mail server 110 or web server 112, by establishing a connection through firewall 106 if permitted by policies implemented in firewall 106 and managed by policy server 108. Each of the elements of FIGURE 1 may couple to one another through simple interfaces or through any other suitable connection (wired or wireless), which provides a viable pathway for network communications. Additionally, any one or more of these elements may be combined or removed from the architecture based on particular configuration needs.
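The gating role that firewall 106 plays for client traffic can be sketched as a minimal policy check. This is an illustration only: the rule format, the port-based matching, and the default-deny behavior are assumptions for the sketch, not details taken from this disclosure.

```python
# First-match policy evaluation with a default deny rule — an illustrative
# stand-in for the kind of policy a policy server might distribute to a firewall.
POLICIES = [
    {"action": "allow", "dst_port": 443},   # permit HTTPS to web servers
    {"action": "allow", "dst_port": 25},    # permit SMTP to the mail server
    {"action": "block", "dst_port": None},  # default: block everything else
]

def check(dst_port):
    """Return the action of the first rule matching the destination port;
    a rule with dst_port of None matches any port."""
    for rule in POLICIES:
        if rule["dst_port"] in (None, dst_port):
            return rule["action"]
    return "block"

print(check(443))   # allow
print(check(6667))  # block
```

In a deployment like the one described, the policy list itself would be administered and distributed by the policy server, while the firewall evaluates each new connection against it.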
Network environment 100 may include a configuration capable of transmission control protocol/Internet protocol (TCP/IP) communications for the transmission or reception of packets in a network. Network environment 100 may also operate in conjunction with a user datagram protocol/IP (UDP/IP) or any other suitable protocol where appropriate and based on particular needs. For purposes of illustrating the techniques for providing network security in example embodiments, it is important to understand the activities occurring within a given network. The following foundational information may be viewed as a basis from which the present disclosure may be properly explained. Such information is offered earnestly for purposes of explanation only and, accordingly, should not be construed in any way to limit the broad scope of the present disclosure and its potential applications. Typical network environments used in organizations and by individuals include the ability to communicate electronically with other networks using the Internet, for example, to access web pages hosted on servers connected to the Internet, to send or receive electronic mail (i.e., email) messages, or to exchange files. However, malicious users continue to develop new tactics for using the Internet to spread malware and to gain access to confidential information. Malware generally includes any software designed to access and/or control a computer without the informed consent of the computer owner, and is most commonly used as a label for any hostile, intrusive, or annoying software such as a computer virus, bot, spyware, adware, etc. Once compromised, malware may subvert a host and use it for malicious activity, such as spamming or information theft. Malware also typically includes one or more propagation vectors that enable it to spread within an organization's network or across other networks to other organizations or individuals. 
Common propagation vectors include exploiting known vulnerabilities on hosts within the local network and sending emails having a malicious program attached or providing malicious links within the emails. For purposes of illustrating some example techniques of a security module and an extraction module, it is important to understand a man-in-the-middle (MITM) technique. One or more embodiments recognize and take into account that some embodiments for screening SSL (or TLS) traffic in security devices use MITM techniques: the security device terminates the SSL connection using a certificate that spoofs the destination, then proxies the data to the destination over a second SSL connection. The user can see this spoofing, and either explicitly ignores it for each connection or configures the machine to trust the security device so that the warning goes away. MITM is expensive for the security device to implement, because it needs to decrypt and re-encrypt all traffic. Also, MITM requires the security device to perform expensive public-key cryptography operations on each connection being screened. An additional problem with MITM is that the user does not get a true SSL authentication of the target web site (server). This is a key benefit of SSL security, but the user only knows that the security device was reached, not which web site was really accessed. This deficiency can be exploited by attackers who use phishing emails to direct users to sites that look like trusted sites, but are really out to exploit them. Additionally, the different embodiments of this disclosure recognize and take into account a situation where a trusted client is communicating with an untrusted server: the network device terminates and re-establishes an SSL/TLS session between the two communicating endpoints. This is also often referred to as a break-make connection.
The trusted client is provisioned with a certificate of the network device/domain and accepts this in the secure session setup process, even though it is communicating with an endpoint beyond the network appliance (e.g., a banking website). In practice, this session is terminated at the network appliance, which instantiates a second, separate session to the ultimate endpoint on behalf of the client. This mechanism allows the network appliance to get visibility into the TLS traffic, as it is a 'man-in-the-middle' for the secure communication channel. This approach results in a burden on the network appliance, as it needs to proxy connections for every client/session and hence needs to manage resources for all of these proxy connections. This situation adds significant overhead to the network appliance. Also, the different embodiments of this disclosure recognize and take into account another situation, where an untrusted client is communicating with a trusted server: the network appliance gets access (in some out-of-band manner) to the trusted server's certificate, including the public/private key pair (e.g., RSA keys) used for authenticating the SSL/TLS session. Because of the SSL/TLS operation, where the client sends a pre-master secret to the server, encrypted with the public key of the server, the network appliance is able to capture and decrypt this information en route and snoop on the SSL/TLS handshake. This allows the network appliance to independently compute the SSL/TLS session keys and thereafter decrypt the encrypted communication between the two endpoints. However, this situation relies upon ownership of the server private key, and does not apply in the common situation of an organization that seeks to protect multiple users with client machines that are connecting to multiple servers on the Internet by provisioning a security device, such as an intrusion prevention system or firewall.
The different embodiments of this disclosure recognize and take into account that enterprises have a pressing need to scan SSL/TLS traffic for malware inspection and data loss protection; that MITM techniques are already in use; that MITM fakes both authentication and encryption; that the user sees a forged certificate, so trust is compromised; and that there is an annoyance factor when a user either sees a warning message for every connection or never knows whether trust is real. One or more embodiments of this disclosure provide a novel approach that simplifies visibility into encrypted network streams and alleviates large overheads on the network devices. FIGURE 2 is an example illustration of a network environment 200 in accordance with an embodiment. In an aspect of this disclosure, network environment 200 includes a client 202, a firewall 204, and a server 206. Network environment 200 may be one example of network environment 100 as shown in FIGURE 1. In an embodiment, network environment 200 may include an encryption protocol session 208 that operates between client 202, firewall 204, and server 206. Encryption protocol session 208 may further include network flow 210, targeted data 211, and shared secret 212. Server 206 may further include certificate of authority 214. Firewall 204 may further include security module 220, which in turn may include a network flow copy 222 and an unencrypted network flow 224. Client 202 may further include trust list 216, extraction module 230, shared library 232, and application 234. In an embodiment of this disclosure, server 206 includes certificate of authority 214. Certificate of authority 214 may be an entity that issues digital certificates. The digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows client 202 to rely upon signatures or assertions made by the private key that corresponds to the certified public key.
In this model of trust relationships, certificate of authority 214 represents a trusted third party that is relied upon by both server 206 and client 202. On client 202, trust list 216 may be maintained. Trust list 216 may include the digital certificates that client 202 trusts. In one or more embodiments, encryption protocol session 208 operates between client 202, firewall 204, and server 206. Encryption protocol session 208 includes a network flow 210. Network flow 210 is an encrypted flow of data that operates in both directions between client 202 and server 206. Firewall 204 may intercept network flow 210 for inspection and analysis. In an embodiment, the protocols used for encryption protocol session 208 (secure communications) may be transport layer security (TLS) or its predecessor, secure sockets layer (SSL). These protocols are cryptographic protocols that provide communication security over the Internet. These protocols may also be used interchangeably in this disclosure. TLS and SSL encrypt the segments of network connections above the transport layer, using asymmetric cryptography for key exchange, symmetric cryptography for confidentiality, and message authentication codes for message integrity. Client 202 and server 206 may also maintain a shared secret 212 (e.g., a password, key, etc.) for authentication of data in network flow 210. Shared secret 212 may be configured during encryption protocol session 208. Shared secret 212 may be a value that is shared, and known, between client 202 and server 206. In an embodiment, for example, shared secret 212 may be a master secret or session keys as used in SSL/TLS. Session keys may be part of a session context, which may include an initialization vector, the crypto algorithm being used, etc., as well as the session key itself. A session context may contain the cryptographic information necessary to de-capsulate the payload (e.g.
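The session context described above can be pictured as a small record carried alongside the session keys. The following is a minimal, hypothetical sketch; the field names and example values are illustrative only and are not taken from any real TLS implementation:

```python
from dataclasses import dataclass

# Hypothetical container for a session context as described above; field
# names and sizes are illustrative, not from a real TLS library.
@dataclass(frozen=True)
class SessionContext:
    cipher: str          # negotiated crypto algorithm, e.g. "AES-128-CBC"
    session_key: bytes   # symmetric session key shared by client and server
    mac_key: bytes       # key for the message authentication codes
    iv: bytes            # initialization vector for the next record

ctx = SessionContext("AES-128-CBC", b"\x00" * 16, b"\x11" * 20, b"\x22" * 16)
```

Because both endpoints hold the same context, a security module given a copy of it can de-capsulate the payload exactly as the endpoints do.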
encryption/integrity/compression algorithms, associated keys, key sizes, initialization vectors, etc.). In contrast, a public/private asymmetric key structure is not shared between client 202 and server 206, because each party has different keys. Extraction module 230 is configured to extract shared secret 212 from client 202. In particular, extraction module 230 may extract the master secret, pre-master secret, hash-based message authentication code (HMAC), and/or session keys. Extraction module 230 may be loaded onto client 202, or in other embodiments, may be a separate module with access to client 202. In an embodiment, extraction module 230 may load shared library 232 into application 234. This allows extraction module 230 access to encryption protocol session 208 through application 234 to identify shared secret 212. Shared library 232 may be a shared library or shared object, i.e., a file that is intended to be shared by executable files and other shared object files. Shared library 232 may be, for example, a dynamic link library (DLL). Application 234 may be a process that is communicating with server 206 through encryption protocol session 208. Application 234 may be, for example, a web browser. In another embodiment, extraction module 230 may be configured to monitor network flow 210 at a network layer and detect the progress of a network handshake, such as the SSL initial handshake, and so determine the point in time when memory space 231 of application 234 may contain the shared secret 212 for the encrypted connection being negotiated. Extraction module 230 may be configured to open the memory space 231 of the process running the application 234, for example, by using debugging system calls to access the process memory of a target process on the same computer system in Microsoft® Windows® or Linux®. Extraction module 230 may also be configured to search memory space 231 to identify shared secret 212.
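The memory search described above can be sketched in a few lines. This is a deliberately simplified model, assuming a hypothetical 4-byte tag preceding a 48-byte master secret; a real extraction module would instead match the actual in-memory structure layouts of the crypto library:

```python
# Hypothetical sketch: scan a captured memory buffer for key material.
# The tag and layout are invented for illustration only.
TAG = b"ssl3"  # hypothetical marker preceding a 48-byte master secret

def find_master_secrets(memory: bytes) -> list[bytes]:
    secrets, start = [], 0
    while (i := memory.find(TAG, start)) != -1:
        secrets.append(memory[i + len(TAG): i + len(TAG) + 48])
        start = i + 1
    return secrets

# Synthetic "process memory": padding, then a tagged 48-byte secret.
secret = bytes(range(48))
memory = b"\x90" * 100 + TAG + secret + b"\x90" * 50
assert find_master_secrets(memory) == [secret]
```

Signature-based scanning of this kind only works once the handshake has progressed far enough for the secret to be resident, which is why the module first watches for the handshake at the network layer.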
Extraction module 230 is configured to send shared secret 212 to security module 220. The path of transmission to security module 220 may also be a secured channel. With shared secret 212, security module 220 may be able to decrypt network flow 210 using the same encryption/decryption process that client 202 and server 206 are using. Security module 220 may operate in different modes of operation. In one embodiment, security module 220 may be configured to copy network flow 210 to create network flow copy 222. Network flow copy 222 may then be decrypted, without affecting network flow 210, to create unencrypted network flow 224. In some embodiments, security module 220 may delay network flow 210 to wait for shared secret 212 from extraction module 230, to have time to decrypt network flow copy 222, to modify network flow 210, to inspect unencrypted network flow 224 for security issues, or for any other suitable reason. In other embodiments, security module 220 does not delay network flow 210 and may only copy network flow 210. In an embodiment, security module 220 may be configured to scan network flow 210 and/or network flow copy 222 (once decrypted, as unencrypted network flow 224) for targeted data 211. Targeted data 211 may contain data that security module 220 is looking for, such as hostile, intrusive, or annoying software, for example a computer virus, bot, spyware, or adware. Targeted data 211 may be malware. In operational terminology, and in one particular embodiment, an illustration of a TLS or SSL connection may begin as follows: during a negotiation phase, client 202 sends a message specifying the highest TLS protocol version it supports, a random number, a list of suggested cipher suites, and suggested compression methods. A cipher suite is a named combination of authentication, encryption, and message authentication code (MAC) algorithms used to negotiate the security settings for a network connection using the TLS or SSL network protocols.
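The copy-and-decrypt behavior described above can be illustrated with any symmetric cipher, since the security module holds the same key as the endpoints. The sketch below uses RC4 (historically a common SSL cipher) purely as a stand-in for the negotiated algorithm; the key and payload are illustrative:

```python
# Minimal sketch of passive decryption: the security module decrypts a
# *copy* of the flow with the extracted shared secret, leaving the
# original ciphertext untouched. RC4 stands in for the negotiated cipher.
def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

shared_secret = b"session-key"                # as extracted from the client
flow = rc4(shared_secret, b"GET /account HTTP/1.1")   # client -> server
flow_copy = bytes(flow)                       # security module's copy
assert rc4(shared_secret, flow_copy) == b"GET /account HTTP/1.1"
assert flow == flow_copy                      # original flow is unmodified
```

Because RC4 is symmetric, applying it a second time with the same key decrypts, which is exactly the property that lets the security module read the copy without terminating the connection.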
Also, if client 202 is attempting to perform a resumed handshake, it may send a session ID. In response, server 206 responds with a message containing the chosen protocol version, another random number, a selected cipher suite, and a selected compression method from the choices offered by the client. To confirm or allow a resumed session, server 206 may send the same session ID. To start a new session, server 206 may send a new session ID. Also, client 202 may respond with another message, which may contain a pre-master secret, a public key, or nothing. The pre-master secret is encrypted using the public key of the server certificate. Client 202 and server 206 then use the random numbers and the pre-master secret to compute a common secret, called the "master secret". All other key data for this connection is derived from this master secret. The master secret may be used to make session keys for each communication session between client 202 and server 206. The pre-master secret, master secret, and session keys are all examples of shared secret 212. One or more embodiments provide extraction module 230, also referred to as a trusted agent, on client 202 that monitors SSL/TLS connections and is able to intercept certain, well defined, application programming interfaces (APIs) to directly extract the master secret, pre-master secret, and/or the session key. Extraction module 230 on client 202 may perform the extraction of shared secret 212. This information is securely shared with security module 220, a trusted and authorized network appliance, via a secure out-of-band (OOB) channel. In other embodiments, the information is shared via a non-secure channel. This allows security module 220 to decrypt encryption protocol session 208, the SSL/TLS communication, and get visibility into network flow 210.
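The master-secret computation described above is specified for TLS 1.2 in RFC 5246: the pre-master secret and the two random values are fed through a pseudorandom function (PRF) built from HMAC. A stdlib-only sketch of the SHA-256 variant, with illustrative input values, is:

```python
import hmac, hashlib

# Sketch of the TLS 1.2 PRF (RFC 5246, P_SHA256) deriving the 48-byte
# master secret from the pre-master secret and the Hello random values.
def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()        # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def master_secret(pre_master: bytes, client_random: bytes,
                  server_random: bytes) -> bytes:
    seed = b"master secret" + client_random + server_random
    return p_sha256(pre_master, seed, 48)

# Illustrative inputs: a 48-byte pre-master secret and two 32-byte randoms.
ms = master_secret(b"\x03\x03" + b"\x00" * 46, b"\xaa" * 32, b"\xbb" * 32)
assert len(ms) == 48
```

Anyone holding the same three inputs derives the same master secret, which is why capturing them (or the master secret itself) is sufficient for the security module to compute the session keys.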
In operational terminology, and in particular one embodiment, extraction module 230 (special software on client 202, the user workstation) searches out shared secret 212, the SSL key, as each encryption protocol session 208 is established. Discovery protocols then transmit shared secret 212 securely to security module 220. Client 202 establishes the SSL connection end-to-end, with full authentication of the target site, but security module 220 can still scan the connection to protect client 202. In addition, shared secret 212 may be shared with security module 220 only after a public-key handshake occurs, so security module 220 can decrypt the session using a single symmetric decryption per data item. This process is faster than MITM. One or more embodiments of this disclosure (1) preserve end-to-end authentication and can be used for passive connections to the network, (2) alleviate the overhead on a security module of storing state for every single connection, where a second, independent SSL/TLS connection must otherwise be constructed in order to get visibility into the encrypted traffic streams, and (3) are compatible with the use of client-side certificates in SSL (not supported in MITM). The embodiments of this disclosure provide a client-based approach to extracting the SSL/TLS master secret and/or session keys and sharing these with authorized security modules over a separate secure channel. The embodiments also enable scanning of the network flow without removing the ability of the client to perform end-to-end authentication, and in a way that is very efficient for the security devices (IPS, firewall, security module) to implement. The embodiments of this disclosure provide a system to decrypt encryption protocol sessions (SSL/TLS sessions) without compromising client trust.
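A widely deployed analogue of this secret-sharing step is the NSS key log format (written, for example, by browsers via the SSLKEYLOGFILE environment variable, or by Python's ssl.SSLContext.keylog_filename), in which an endpoint exports one line per session pairing the client random with the master secret. A sketch, with illustrative values:

```python
# Write and parse a CLIENT_RANDOM line in the NSS key log format, one
# concrete way an endpoint can export a per-session secret for an
# authorized inspection device. Values here are illustrative.
def keylog_line(client_random: bytes, master_secret: bytes) -> str:
    return f"CLIENT_RANDOM {client_random.hex()} {master_secret.hex()}"

def parse_keylog_line(line: str) -> tuple[bytes, bytes]:
    label, cr, ms = line.split()
    assert label == "CLIENT_RANDOM"
    return bytes.fromhex(cr), bytes.fromhex(ms)

line = keylog_line(b"\xaa" * 32, b"\xbb" * 48)
assert parse_keylog_line(line) == (b"\xaa" * 32, b"\xbb" * 48)
```

The client random serves as the session identifier, letting the inspection device match each exported master secret to the handshake it observed on the wire.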
The embodiments provide: the SSL handshake is passed on without change; the original certificate and original CA trust are preserved; the extraction module shares the session key with the security module; the key is a short-lived credential that affects only this session; decryption can be faster than MITM; decryption can support SSL mutual authentication (client-side and server authentication); and passive-mode inspection of traffic can be supported. The embodiments also provide that the security device can be used in a proxy environment, where the proxy may need to modify the SSL plaintext; authentication and trust are still end-to-end; the connection starts in "inspection mode", where all data is passed through; if the proxy needs to change plaintext (e.g., modifying a URL, removing an attachment), the connection switches to "proxy mode". In one or more of the embodiments, a crypto state is divided between host and server. The client decrypt state is copied to become the initial server encrypt state. The server decrypt state is copied to become the initial client encrypt state. The security device both decrypts and re-encrypts SSL data, using the separate states. SSL plaintext can be modified in between these steps. The crypto states within the proxy diverge between the received state and the re-encrypt state once the proxy modifies plaintext. Once re-encryption starts, it continues until the connection terminates. The security module may use SSL key information to decrypt/verify SSL session information and inspect SSL packets further for malware detection or data loss protection. One or more embodiments of this disclosure provide for modifying the SSL/TLS handshake in order to change the SSL parameters that can be negotiated. In such embodiments, the initialization vector (IV) must also be derived to allow the modification of the SSL/TLS FINISH handshake message. This may be accomplished by using the master secret as the shared secret.
In another embodiment, the IV may be extracted directly by the extraction module and shared with the security module, in the same manner as sharing the SSL/TLS session key. In an embodiment, a handshake may be rewritten as follows. For the ServerHello/ClientHello, by: A) limiting the list of the cipher suites in the Hello to an approved list; B) changing the list of cipher suites in the ClientHello to an approved list; C) changing the selected cipher suite in the ServerHello to one in an approved list; D) changing the random data from a client/server to a more secure source; and E) not allowing session resumption by removing the session ID from the ClientHello. For the ClientCertificate, by: A) supplying one or more client certificates; B) replacing one or more client certificates; and C) removing one or more client certificates. For the ClientKeyExchange, by changing the random data from a client/server to a more secure source. In one example implementation, client 202 and/or firewall 204 are network elements, which are meant to encompass network appliances, servers, routers, switches, gateways, bridges, load balancers, processors, modules, or any other suitable device, component, element, or object operable to exchange information in a network environment. Network elements may include any suitable hardware, software, components, modules, or objects that facilitate the operations thereof, as well as suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information. However, client 202 may be distinguished from other network elements, as it tends to serve as a terminal point for a network connection, in contrast to a gateway or router that tends to serve as an intermediate point in a network connection.
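The cipher-suite limiting steps above can be sketched as a simple filter over the offered suite list. The numeric IDs are real TLS cipher-suite code points, but the approved set is an illustrative policy choice:

```python
# Sketch of restricting the ClientHello cipher-suite list to an approved
# list, as in steps (A)/(B) above. The approved set is illustrative.
APPROVED = {0x009C, 0x009D, 0xC02F, 0xC030}   # AES-GCM suites

def filter_cipher_suites(offered: list[int]) -> list[int]:
    kept = [cs for cs in offered if cs in APPROVED]
    if not kept:
        raise ValueError("no approved cipher suite offered")
    return kept

# Offered list includes an RC4 suite (0x0005) and an AES-CBC suite (0x002F),
# which the policy strips out.
offered = [0x0005, 0x009C, 0xC030, 0x002F]
assert filter_cipher_suites(offered) == [0x009C, 0xC030]
```

Note that rewriting the ClientHello this way changes bytes that are hashed into the FINISHED message, which is why the surrounding text requires the ability to recompute or modify that message as well.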
Client 202 may also be representative of wireless network nodes, such as a smartphone, or other similar telecommunications devices. With regard to the internal structure associated with network environment 200, each of client 202 and/or firewall 204 can include memory elements for storing information to be used in the operations outlined herein. Each of client 202 and/or firewall 204 may keep information in any suitable memory element (e.g., random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), application specific integrated circuit (ASIC), etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any of the memory items discussed herein (e.g., memory elements 250 and 252) should be construed as being encompassed within the broad term 'memory element.' The information being used, tracked, sent, or received by client 202 and/or firewall 204 could be provided in any database, register, queue, table, cache, control list, or other storage structure, all of which can be referenced at any suitable timeframe. Any such storage options may be included within the broad term 'memory element' as used herein. In certain example implementations, the functions outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an ASIC, digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc.), which may be inclusive of non-transitory media. In some of these instances, memory elements can store data used for the operations described herein. This includes the memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein.
In one example implementation, client 202 and/or firewall 204 may include software modules (e.g., extraction module 230 and/or security module 220) to achieve, or to foster, operations as outlined herein. In other embodiments, such operations may be carried out by hardware, implemented externally to these elements, or included in some other network device to achieve the intended functionality. Alternatively, these elements may include software (or reciprocating software) that can coordinate in order to achieve the operations, as outlined herein. In still other embodiments, one or all of these devices may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof. Additionally, each of client 202 and/or firewall 204 may include a processor 260 and 262, respectively, that can execute software or an algorithm to perform activities as discussed herein. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In one example, the processors could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an EPROM, an EEPROM) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof. Any of the potential processing elements, modules, and machines described herein should be construed as being encompassed within the broad term 'processor.' FIGURE 3 is an illustration of a network environment with SSL/TLS handshake communications in accordance with an embodiment.
Network environment 300 includes client 302, firewall 304, and server 306. Furthermore, client 302 includes extraction module 308, firewall 304 includes security module 310 to perform security inspection 316, and server 306 includes server certificate 312. Server certificate 312 may be one example of certificate of authority 214 in FIGURE 2. Server certificate 312 may be passed through to client 302. Client 302 may store server certificate 312 in real certificate authority trust 314. Real certificate authority trust 314 may be one example of trust list 216 in FIGURE 2. Network environment 300 also includes messages 320-336. Messages 320-336 may be messages included as part of a handshake for an SSL/TLS session. An SSL/TLS session may be one example of encryption protocol session 208 in FIGURE 2. Messages 320 and 322 may be initial messages that include a ClientHello and a ServerHello. Firewall 304 may allow message 320 to pass through. Even though messages 320 and 322 are labeled separately, they contain the same information. Messages 324 and 326 may be server 306 sending client 302 server certificate 312. Firewall 304 also passes through these messages. Even though messages 324 and 326 are labeled separately, they contain the same information. By passing through server certificate 312, client 302 can confirm that communications are coming from server 306. Messages 328 and 330 are the finishing messages for negotiation. Even though messages 328 and 330 are labeled separately, they contain the same information. In other embodiments, if security module 310 wants to select the cipher suite, security module 310 may alter messages 328 and/or 330. In this case, they may not be the same. Message 332 may be when extraction module 308 sends a shared secret to security module 310. Message 332 may also be a secure message. Messages 334 and 336 may represent the network flow. These messages show the data that is passed through firewall 304.
In one or more embodiments, firewall 304 allows these messages to pass through; in other embodiments firewall 304 may sit between messages 334 and 336. In the latter situation, firewall 304 may delay, terminate, or modify messages 334 and 336. FIGURE 4 is a block diagram of a network environment 400 for SSL/TLS in accordance with an advantageous embodiment. Network environment 400 includes client 402, firewall 404, and server 406. Furthermore, client 402 includes extraction module 408 and firewall 404 includes security module 410. Client 402 may be one example of client 202 as shown in FIGURE 2. Firewall 404 may be one example of firewall 204 as shown in FIGURE 2. Server 406 may be one example of server 206 as shown in FIGURE 2. Client 402 further includes application 412 and operating system (OS) 414. Application 412 may be a process that initiates an encryption protocol session with server 406. Application 412 may be loaded into operating system 414, which handles the actual transmission of data to server 406. Extraction module 408 may extract a shared secret from operating system 414 and/or application 412. Extraction module 408 performs key sharing to send the shared secret (SSL session key or master secret) to security module 410. This allows security module 410 to perform decryption 416 on the network flow between operating system 414 and server 406. The network flow may be SSL/TLS encrypted traffic. FIGURE 5 is an illustration of a security module as a proxy in accordance with an illustrative embodiment. A network environment 502 may include a network flow 504, a security module 506, a client decrypt state 508, a server decrypt state 510, a client encrypt state 512, and a server encrypt state 514. Security module 506 in part (a) of FIGURE 5 may be a proxy in inspection mode. When in inspection mode, security module 506 is copying and decrypting network flow 504.
Security module 506 uses client decrypt state 508 to decrypt network flow 504 coming from a client and server decrypt state 510 to decrypt network flow 504 coming from a server. In part (b) of FIGURE 5, security module 506 is transitioning into proxy mode. Security module 506 may take client decrypt state 508 to create server encrypt state 514 and take server decrypt state 510 to create client encrypt state 512. In addition to decrypting as in part (a) in inspection mode, security module 506 can also encrypt in proxy mode in part (c). In part (c), security module 506 is in between network flow 504. During proxy mode, security module 506 may pass through, decrypt/encrypt, terminate, and/or modify network flow 504. To modify network flow 504, security module 506 may decrypt as before in part (a), but then use client encrypt state 512 and/or server encrypt state 514 to also re-encrypt network flow 504. In an embodiment, once security module 506 begins modifying network flow 504, security module 506 may encrypt/decrypt the rest of network flow 504 for the rest of the encryption protocol (SSL/TLS) session. FIGURE 6 is an illustration of a data diagram in accordance with an embodiment. Data diagram 600 shows typical SSL/TLS data structures. Data diagram 600 includes data structures 602-616. The extraction module may inspect a target application's memory and locate the related data structures 602-616 with API hooks or signature-based scanning. The client may be protected by other security services to protect sensitive SSL key information before it is sent to a security module via a secure OOB channel. In an embodiment, a decrypt path might be Wininet.dll, including CFSM::RunWorkItem, CFSM::Run, CFSM::SecureReceive, ICSecureSocket::RECEIVE_FSM, ICSecureSocket::DecryptData, and ICSecureSocket::DecryptData. Then, Sspicli.dll, which includes DecryptMessage and LsaUnsealMessage.
Then Schannel.dll, which includes SpUnsealMessage, SslUnsealMessageStream, TlsDecryptHandler, and TlsDecryptMessage. Then, Ncrypt.dll, which includes SslDecryptPacket, SPSslDecryptPacket, and TlsDecryptPacket. Then, Bcrypt.dll, which includes BCryptDecrypt. Then, Bcryptprimitives.dll, which includes MSCryptDecrypt, MSBlockDecrypt, and AescbcDecrypt. In an embodiment, a function may be the DecryptMessage() function. This function may be used as follows: SECURITY_STATUS SEC_ENTRY DecryptMessage( _In_ PCtxtHandle phContext, _Inout_ PSecBufferDesc pMessage, _In_ ULONG MessageSeqNo, _Out_ PULONG pfQOP ). CtxtHandle may carry additional context information; LSA_SEC_HANDLE may be accessed through CtxtHandle { void *p_vtable; LSA_SEC_HANDLE usercontext; ... }. With LSA_SEC_HANDLE, NCRYPT_KEY_HANDLE may be accessed. CSslContext includes the cipher ID, ReadKey, WriteKey, and SessionID. With NCRYPT_KEY_HANDLE, BCRYPT_KEY_HANDLE may be accessed. SSL_KEY_HANDLE may include the hmac_key and bcrypt_key handles. With BCRYPT_KEY_HANDLE, the shared secret (session key) may be obtained. MSCRYPT_SYMMKEY_HANDLE includes the session key and the round key. FIGURE 7 is a simplified flowchart illustrating a process for extracting a shared secret using a shared library in accordance with an embodiment. A flow 700 may be a process that operates during and/or before an encryption protocol session. At 710, an extraction module loads the shared library into an application. At 720, the extraction module identifies any cryptographic structures in the application. At 730, the extraction module hooks the cryptographic structures with extraction functions. Responsive to the cryptographic structures being called, at 740, the extraction module executes the extraction functions to identify the shared secret. At 750, the extraction module extracts the shared secret. After 750, the extraction module may securely transmit the shared secret to a security module.
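The hooking step of flow 700 can be illustrated in a language-neutral way: the extraction function wraps the application's decrypt routine, records the key it is called with, and then defers to the original so behavior is unchanged. All names below are illustrative (a real extraction module would hook the SCHANNEL entry points listed above):

```python
# Sketch of flow 700's hook: wrap a crypto function, capture the secret
# (steps 740-750), and preserve the original behavior. Names are invented.
captured_secrets = []

def hook(original):
    def extraction_wrapper(key, ciphertext):
        captured_secrets.append(key)          # extract the shared secret
        return original(key, ciphertext)      # call the original function
    return extraction_wrapper

def app_decrypt(key, ciphertext):             # stand-in crypto function
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(ciphertext))

app_decrypt = hook(app_decrypt)               # step 730: install the hook
app_decrypt(b"session-key", b"\x01\x02")      # application keeps working
assert captured_secrets == [b"session-key"]
```

The essential property, as in the flowchart, is that the application observes identical return values, so the session proceeds normally while the secret is siphoned off.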
In operational terms, and specifically in one embodiment, an extraction module injects a DLL into the process space of the application (e.g., a web browser). To do this, the extraction module: uses GetProcAddress to find the LoadLibrary function in Kernel32.dll; places the string with the path to the injected DLL into the web browser process space via VirtualAllocEx and WriteProcessMemory; and invokes CreateRemoteThread to launch a thread in the web browser process space using LoadLibrary as the thread method, passing the string allocated above as the only argument. Then, the injected DLL: pre-loads SCHANNEL.DLL into the process space; finds the base of the crypto data structures; and hooks the crypto functions of SCHANNEL.DLL with custom functions. Next, when the application requests crypto functions, the hooked functions are called, which then: call the original SCHANNEL functions; inspect the data structures found above for the crypto key material (they may also find the master secret); and return the values returned by the original SCHANNEL functions. FIGURE 8 is a simplified flowchart illustrating a process for extracting a shared secret from a memory space in accordance with an embodiment. A flow 800 may be a process that operates during and/or before an encryption protocol session. At 810, an extraction module monitors a network flow at a network layer. The extraction module is searching for an initiation of a handshake for the encryption protocol session. At 820, the extraction module identifies the initiation of the handshake of the encryption protocol session. Responsive to identifying the initiation, at 830, the extraction module opens the memory space of a process initiating the encryption protocol session. In one or more embodiments, the process may be an application. At 840, the extraction module identifies a shared secret in the encryption protocol session within the memory space of the process.
At 850, the extraction module extracts the shared secret. After 850, the extraction module may transmit the shared secret to a security module. In operational terms, and in particular in one embodiment, an extraction module may hook into TCP streams looking for ClientHello messages. When a ClientHello message is found, and until the key material for the session is found, all TLS messages are inspected. The SessionID is extracted from the ServerHello message. Before and after each packet is processed by the inspected process, the process is queried via EnumProcessModules and ReadProcessMemory to: find whether SCHANNEL.DLL is loaded; find the base of the crypto data structures in SCHANNEL.DLL; and find the key material for the session ID found in the ServerHello message (this may also recover the pre-master and/or master secret). Once the key material is found, the key material is sent to a security module. This process may be extended to find the key material in the libraries referenced in connection with FIGURE 7, including those that are statically linked, by searching the process space for the proper data structures. FIGURE 9 is a simplified flowchart illustrating a process for analyzing an encrypted network flow in accordance with an embodiment. A flow 900 may be a process that operates during an encryption protocol session. At 902, a security module monitors the encrypted network flow between a first node and a second node, the network flow initiated from the first node. In an embodiment, the first node may be a client and the second node may be a server. The encrypted network flow travels both ways between the first node and the second node. At 904, the security module duplicates the encrypted network flow to form a copy of the encrypted network flow. At 906, the security module decrypts the copy of the encrypted network flow using a shared secret. The shared secret is associated with the first node and the second node. Both the first node and the second node know the shared secret.
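The handshake inspection used in flow 800 can be sketched with a toy parser. This sketch assumes a well-formed, synthetic TLS record; a real implementation must validate lengths and handle fragmentation, and the constants below are the standard TLS record and handshake type codes.

```python
# Sketch of the TLS-stream inspection of FIGURE 8: detect a handshake
# record, check for ServerHello (handshake type 2), and pull out the
# SessionID that later keys the search for session key material.
import struct

HANDSHAKE = 0x16       # TLS record content type: handshake
SERVER_HELLO = 0x02    # TLS handshake type: ServerHello

def extract_session_id(record: bytes):
    """Return the SessionID from a TLS ServerHello record, else None."""
    if len(record) < 5 or record[0] != HANDSHAKE:
        return None                      # not a handshake record
    body = record[5:]                    # skip 5-byte record header
    if not body or body[0] != SERVER_HELLO:
        return None                      # not a ServerHello
    # handshake header: type(1) + length(3), then version(2) + random(32)
    sid_len = body[4 + 2 + 32]
    sid_off = 4 + 2 + 32 + 1
    return body[sid_off:sid_off + sid_len]

# Build a minimal, synthetic ServerHello to exercise the parser.
session_id = bytes(range(8))
hs = bytes([SERVER_HELLO]) + (2 + 32 + 1 + 8).to_bytes(3, "big") \
     + b"\x03\x03" + b"\x00" * 32 + bytes([8]) + session_id
record = bytes([HANDSHAKE, 0x03, 0x03]) + struct.pack(">H", len(hs)) + hs
```

Once the SessionID is in hand, the embodiment searches the process memory for the matching key-material structure, which this sketch does not model.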
In an embodiment, the first node provides the shared secret. By knowing the shared secret, the security module can decrypt the network flow without interfering with the network flow. At 908, the security module scans the network flow copy for targeted data. Targeted data may be data that is targeted by the client, user, security module, firewall, security software, policy server, or other entity. Additionally, in one or more embodiments, an extraction module may extract the shared secret from the first node before 902. Additionally, in one or more embodiments, the security module may delay the encrypted network flow and forward the encrypted network flow as part of monitoring at 902. In that embodiment, the security module would delay forwarding to give time to scan the copy of the network flow. Responsive to identifying targeted data in the network flow copy, the security module may terminate the encrypted network flow. FIGURE 10 also illustrates a memory 1002 coupled to processor 1000 in accordance with an embodiment. Memory 1002 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. The memory 1002 may include code 1004, which may be one or more instructions, to be executed by processor 1000. Processor 1000 follows a program sequence of instructions indicated by code 1004. Each instruction enters a front-end logic 1006 and is processed by one or more decoders 1008. The decoder may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals that reflect the original code instruction. Front-end logic 1006 also includes register renaming logic 1010 and scheduling logic 1012, which generally allocate resources and queue the operation corresponding to the convert instruction for execution. 
Processor 1000 is shown including execution logic 1014 having a set of execution units 1016-1 through 1016-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. Execution logic 1014 performs the operations specified by code instructions. After completion of execution of the operations specified by the code instructions, back-end logic 1018 retires the instructions of code 1004. In one embodiment, processor 1000 allows out-of-order execution but requires in-order retirement of instructions. Retirement logic 1020 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, processor 1000 is transformed during execution of code 1004, at least in terms of the output generated by the decoder, hardware registers and tables utilized by register renaming logic 1010, and any registers (not shown) modified by execution logic 1014. Although not illustrated in FIGURE 10, a processing element may include other elements on a chip with processor 1000. For example, a processing element may include memory control logic along with processor 1000. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches. FIGURE 11 illustrates a computing system 1100 that is arranged in a point-to-point (PtP) configuration according to an embodiment. In particular, FIGURE 11 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. As illustrated in FIGURE 11, system 1100 may include several processors, of which only two, processors 1102 and 1104, are shown for clarity.
Processors 1102 and 1104 may each include a set of cores 1103 and 1105 to execute multiple processes of a program. Processors 1102 and 1104 may also each include integrated memory controller logic (MC) 1106 and 1108 to communicate with memories 1110 and 1112. The memories 1110 and/or 1112 may store various data such as those discussed herein. In alternative embodiments, memory controller logic 1106 and 1108 may be discrete logic separate from processors 1102 and 1104. Processors 1102 and 1104 may be any type of processor such as those discussed with reference to processor 102 of FIGURE 1. Processors 1102 and 1104 may exchange data via a point-to-point (PtP) interface 1114 using point-to-point interface circuits 1116 and 1118, respectively. Processors 1102 and 1104 may each exchange data with a chipset 1120 via individual point-to-point interfaces 1122 and 1124 using point-to-point interface circuits 1126, 1128, 1130, and 1132. Chipset 1120 may also exchange data with a high-performance graphics circuit 1134 via a high-performance graphics interface 1136, using an interface circuit 1137, which could be a PtP interface circuit. In alternative embodiments, any or all of the PtP links illustrated in FIGURE 11 could be implemented as a multi-drop bus rather than a PtP link. At least one embodiment, as disclosed herein, may be provided within the processors 1102 and 1104. Other embodiments, however, may exist in other circuits, logic units, or devices within the system 1100 of FIGURE 11. Furthermore, other embodiments may be distributed throughout several circuits, logic units, or devices illustrated in FIGURE 11. Chipset 1120 may be in communication with a bus 1140 via an interface circuit 1141. Bus 1140 may have one or more devices that communicate over it, such as a bus bridge 1142 and I/O devices 1143.
Via a bus 1144, bus bridge 1142 may be in communication with other devices such as a keyboard/mouse 1145 (or other input device such as a touch screen, for example), communication devices 1146 (such as modems, network interface devices, or other types of communication devices that may communicate through a computer network), audio I/O device 1147, and/or a data storage device 1148. Data storage device 1148 may store code 1149 that may be executed by processors 1102 and/or 1104. In alternative embodiments, any portions of the bus architectures could be implemented with one or more PtP links. The computer systems depicted in FIGURES 10 and 11 are schematic illustrations of embodiments of computing systems that may be utilized to implement various embodiments discussed herein. It will be appreciated that various components of the systems depicted in FIGURES 10 and 11 may be combined in a system-on-a-chip (SoC) architecture or in any other suitable configuration. For example, embodiments disclosed herein can be incorporated into systems such as, for example, mobile devices such as smart cellular telephones, tablet computers, personal digital assistants, portable gaming devices, etc. It will be appreciated that these mobile devices may be provided with SoC architectures in at least some embodiments. Note that in certain example implementations, the security module and extraction module functions outlined herein may be implemented by logic encoded in one or more tangible media (e.g., embedded logic provided in an application specific integrated circuit (ASIC), digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc.). In some of these instances, a memory element can store data used for the operations described herein.
This includes the memory element being able to store software, logic, code, or processor instructions that are executed to carry out the activities described in this Specification. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, the processor could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., FPGA, EPROM, EEPROM) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof. In one example implementation, the security module and extraction module may include software in order to achieve the security activities outlined herein. The security module and extraction module can include memory elements for storing information to be used in achieving the security activities, as discussed herein. Additionally, the security module and extraction module may include a processor that can execute software or an algorithm to perform the security activities, as disclosed in this Specification. These devices may further keep information in any suitable memory element (random access memory (RAM), ROM, EPROM, EEPROM, ASIC, etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Additionally, the security module and extraction module can be software, hardware, firmware or a combination thereof. Any of the memory items discussed herein (e.g., databases, tables, trees, caches, etc.) should be construed as being encompassed within the broad term 'memory element.' 
Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term 'processor.' Note that with the example provided above, as well as numerous other examples provided herein, interaction might be described in terms of two, three, or four elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of elements. It should be appreciated that the security module and extraction module (and their teachings) are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the security module and extraction module as potentially applied to a myriad of other architectures. It is also important to note that the operations in the preceding flow diagrams illustrate only some of the possible scenarios and patterns that may be executed by, or within, a security module and extraction module. Some of these operations may be deleted or removed where appropriate, or may be modified or changed considerably without departing from the scope of the present disclosure. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. A security module and an extraction module provide substantial flexibility in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure. 
Although the present disclosure has been described in detail with reference to particular arrangements and configurations, these example configurations and arrangements may be changed significantly without departing from the scope of the present disclosure. The following examples pertain to embodiments in accordance with this Specification. One or more embodiments may provide a method for analyzing an encrypted network flow. The method may include: monitoring the encrypted network flow between a first node and a second node, the network flow initiated from the first node; duplicating the encrypted network flow to form a copy of the encrypted network flow; decrypting the copy of the encrypted network flow using a shared secret, the shared secret associated with the first node and the second node; and scanning the network flow copy for targeted data. An example of an embodiment further comprises extracting the shared secret from the first node. An example of an embodiment further comprises delaying the encrypted network flow; and forwarding the encrypted network flow. An example of an embodiment further comprises, responsive to identifying targeted data in the network flow copy, terminating the encrypted network flow. An example of an embodiment further comprises, responsive to identifying targeted data in the network flow copy, decrypting the encrypted network flow before forwarding using the shared secret; modifying the unencrypted network flow to remove the targeted data; and encrypting a modified network flow using the shared secret; and forwarding the modified network flow. 
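The enumerated embodiments can be sketched end to end in Python. This is a hedged illustration, not the disclosed implementation: a repeating-XOR stream stands in for the negotiated session cipher, and every name below is hypothetical.

```python
# Sketch of the claimed flow: duplicate the encrypted flow, decrypt
# only the COPY with the extracted shared secret, scan for targeted
# data, and either terminate the flow or strip the targeted data,
# re-encrypt, and forward the modified flow.
import itertools

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for the session cipher (symmetric, self-inverse)."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def analyze_flow(encrypted_flow, shared_secret, targeted,
                 mode="terminate"):
    copy = bytes(encrypted_flow)                 # duplicate the flow
    plaintext = xor_stream(copy, shared_secret)  # decrypt the copy
    if targeted not in plaintext:                # scan for targeted data
        return ("forward", encrypted_flow)       # flow untouched
    if mode == "terminate":
        return ("terminate", b"")
    # Modify embodiment: remove targeted data, re-encrypt, forward.
    modified = plaintext.replace(targeted, b"")
    return ("forward", xor_stream(modified, shared_secret))

secret = b"k3y"
flow = xor_stream(b"GET /exfil?card=1234", secret)
verdict, out = analyze_flow(flow, secret, b"card=1234", mode="modify")
# xor_stream(out, secret) now reads b"GET /exfil?"
```

Because only the copy is decrypted, a clean flow is forwarded byte-for-byte unchanged, matching the requirement that scanning not interfere with the original network flow.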
An example of an embodiment further comprises, wherein extracting the shared secret from the first node comprises: loading a shared library into an application on the first node, wherein the application is accessing the encrypted protocol session, and wherein the shared library allows access to an encryption protocol session through the application; and identifying the shared secret in the encryption protocol session. An example of an embodiment further comprises, wherein extracting the shared secret from the first node comprises: monitoring a network flow at a network layer; identifying an initiation of a handshake of an encryption protocol session; responsive to identifying the initiation, opening a memory space of a process initiating the encrypted protocol session; and identifying the shared secret in the encryption protocol session within the memory space of the process. An example of an embodiment comprises that the shared secret is at least one of a master secret, a pre-master secret, or a session context. As used herein, the phrase "at least one of" may mean any one of, or any combination of, the items in the list. For example, at least one of A, B, and C could mean A, B, or C, or any combination thereof. An example of an embodiment further comprises limiting a number of encryption methods used to encrypt a network flow between the first node and the second node. One or more embodiments provide a method, apparatus, and/or machine accessible storage medium for extracting a shared secret from a first node. The method includes loading a shared library into an application on the first node, wherein the application is accessing the encrypted protocol session, and wherein the shared library allows access to an encryption protocol session through the application; and identifying the shared secret in the encryption protocol session. One or more embodiments provide a method, apparatus, and/or machine accessible storage medium for extracting a shared secret from a first node.
The method includes monitoring a network flow at a network layer; identifying an initiation of a handshake of an encryption protocol session; responsive to identifying the initiation, opening a memory space of a process initiating the encrypted protocol session; and identifying the shared secret in the encryption protocol session within the memory space of the process. An example of an embodiment further comprises extracting the shared secret from the memory space of the process. An example of an embodiment further comprises sending the shared secret to a security module. One or more embodiments provide an apparatus. The apparatus comprising a security module configured to monitor the encrypted network flow between a first node and a second node, the network flow initiated from the first node; duplicate the encrypted network flow to form a copy of the encrypted network flow; decrypt the copy of the encrypted network flow using a shared secret, the shared secret associated with the first node and the second node; and scan the network flow copy for targeted data. An example of an embodiment further comprises an extraction module configured to extract the shared secret from the first node. An example of an embodiment further comprises, wherein the security module is further configured to: delay the encrypted network flow; and forward the encrypted network flow. An example of an embodiment further comprises, wherein the security module is further configured to: responsive to identifying targeted data in the network flow copy, terminate the encrypted network flow. An example of an embodiment further comprises, wherein the security module is further configured to: responsive to identifying targeted data in the network flow copy, decrypt the encrypted network flow before forwarding using the shared secret; modify the unencrypted network flow to remove the targeted data; encrypt a modified network flow using the shared secret; and forward the modified network flow. 
An example of an embodiment further comprises, wherein the extraction module being configured to extract the shared secret from the first node comprises the extraction module being configured to: load a shared library into an application on the first node, wherein the application is accessing the encrypted protocol session, and wherein the shared library allows access to an encryption protocol session through the application; and identify the shared secret in the encryption protocol session. An example of an embodiment further comprises, wherein the extraction module being configured to extract the shared secret from the first node comprises the extraction module being configured to: monitor a network flow at a network layer; identify an initiation of a handshake of an encryption protocol session; responsive to identifying the initiation, open a memory space of a process initiating the encrypted protocol session; and identify the shared secret in the encryption protocol session within the memory space of the process. An example of an embodiment further comprises, wherein the shared secret is at least one of a master secret, pre-master secret, session context. An example of an embodiment further comprises, wherein the security module is further configured to: limit a number of encryption methods used to encrypt a network flow between the first node and the second node. 
One or more embodiments provide at least one machine accessible storage medium having instructions stored thereon for analyzing an encrypted network flow, the instructions when executed on a machine, cause the machine to: monitor the encrypted network flow between a first node and a second node, the network flow initiated from the first node; duplicate the encrypted network flow to form a copy of the encrypted network flow; decrypt the copy of the encrypted network flow using a shared secret, the shared secret associated with the first node and the second node; and scan the network flow copy for targeted data. An example of an embodiment further comprises instructions, when executed on the machine, cause the machine to: extract the shared secret from the first node. An example of an embodiment further comprises instructions, when executed on the machine, cause the machine to: delay the encrypted network flow; and forward the encrypted network flow. An example of an embodiment further comprises instructions, when executed on the machine, cause the machine to: responsive to identifying targeted data in the network flow copy, terminate the encrypted network flow. An example of an embodiment further comprises instructions, when executed on the machine, cause the machine to: responsive to identifying targeted data in the network flow copy, decrypt the encrypted network flow before forwarding using the shared secret; modify the unencrypted network flow to remove the targeted data; and encrypt a modified network flow using the shared secret; and forward the modified network flow. 
An example of an embodiment further comprises, wherein the instructions, when executed on the machine, cause the machine to extract the shared secret from the first node, further comprises instructions, when executed on the machine, cause the machine to: load a shared library into an application on the first node, wherein the application is accessing the encrypted protocol session, and wherein the shared library allows access to an encryption protocol session through the application; and identify the shared secret in the encryption protocol session. An example of an embodiment further comprises, wherein the instructions, when executed on the machine, cause the machine to extract the shared secret from the first node, further comprises instructions, when executed on the machine, cause the machine to: monitor a network flow at a network layer; identify an initiation of a handshake of an encryption protocol session; responsive to identifying the initiation, open a memory space of a process initiating the encrypted protocol session; and identify the shared secret in the encryption protocol session within the memory space of the process. An example of an embodiment further comprises, wherein the shared secret is at least one of a master secret, pre-master secret, session context. An example of an embodiment further comprises instructions, when executed on the machine, cause the machine to limit a number of encryption methods used to encrypt a network flow between the first node and the second node.
Systems, apparatuses, and methods for routing interrupts on a coherency probe network are disclosed. A computing system includes a plurality of processing nodes, a coherency probe network, and one or more control units. The coherency probe network carries coherency probe messages between coherent agents. Interrupts that are detected by a control unit are converted into messages that are compatible with coherency probe messages and then routed to a target destination via the coherency probe network. Interrupts are generated with a first encoding while coherency probe messages have a second encoding. Cache subsystems determine whether a message received via the coherency probe network is an interrupt message or a coherency probe message based on an encoding embedded in the received message. Interrupt messages are routed to interrupt controller(s) while coherency probe messages are processed in accordance with a coherency probe action field embedded in the message.
1. A system comprising: one or more processing nodes; a control unit; and a coherency probe network configured to transmit coherency probe messages between the one or more processing nodes and the control unit; wherein the control unit is configured to: responsive to detecting an interrupt, generate an interrupt message compatible with coherency probe messages; and send the interrupt message on a path to a target via the coherency probe network.
2. The system of claim 1, wherein the control unit is further configured to: generate a first encoding for coherency probe messages; embed the first encoding in a given field of coherency probe messages sent on the coherency probe network; generate a second encoding for interrupt messages, wherein the second encoding is different from the first encoding; and embed the second encoding in a given field of interrupt messages sent on the coherency probe network.
3. The system of claim 1, wherein the interrupt message includes an encoding in a response field indicating that no response is required to be sent.
4. The system of claim 1, further comprising one or more cache subsystems, wherein each cache subsystem is configured to determine, based on the embedded encoding, whether a received message is a coherency probe message or an interrupt message.
5. The system of claim 4, wherein each cache subsystem is further configured to: responsive to determining that a received message is an interrupt message, broadcast the interrupt message to a plurality of processor cores of a corresponding node.
6. The system of claim 1, wherein fields of the interrupt message are aligned to match fields of a coherency probe message.
7. The system of claim 1, wherein the control unit is further configured to encode a coherency probe action field of the interrupt message with an interrupt delivery indicator.
8. A method comprising: generating, by a control unit, an interrupt message compatible with coherency probe messages responsive to detecting an interrupt; and sending the interrupt message on a path to a target via a coherency probe network, wherein the coherency probe network is configured to carry coherency probes between one or more processing nodes and the control unit.
9. The method of claim 8, further comprising: generating a first encoding for coherency probe messages; embedding the first encoding in a given field of coherency probe messages sent on the coherency probe network; generating a second encoding for interrupt messages, wherein the second encoding is different from the first encoding; and embedding the second encoding in a given field of interrupt messages sent on the coherency probe network.
10. The method of claim 8, wherein the interrupt message includes an encoding in a response field indicating that no response is required to be sent.
11. The method of claim 8, further comprising determining, by a cache subsystem, based on the embedded encoding, whether a received message is a coherency probe message or an interrupt message.
12. The method of claim 11, further comprising: broadcasting, by the cache subsystem, the interrupt message to a plurality of processor cores of a corresponding node responsive to determining that the message is an interrupt message.
13. The method of claim 8, wherein fields of the interrupt message are aligned to match fields of a coherency probe message.
14. The method of claim 8, further comprising: encoding a coherency probe action field of the interrupt message with an interrupt delivery indicator.
15. An apparatus comprising: a plurality of processor cores; and a cache subsystem; wherein the apparatus is configured to: responsive to detecting an interrupt, generate an interrupt message compatible with coherency probe messages; and send the interrupt message on a path to a target device via a coherency probe network, wherein the coherency probe network is configured to carry coherency probes between the apparatus and one or more coherent agents.
16. The apparatus of claim 15, wherein the apparatus is further configured to: generate a first encoding for coherency probe messages; embed the first encoding in a given field of coherency probe messages sent on the coherency probe network; generate a second encoding for interrupt messages, wherein the second encoding is different from the first encoding; and embed the second encoding in a given field of interrupt messages sent on the coherency probe network.
17. The apparatus of claim 15, wherein the interrupt message includes an encoding in a response field indicating that no response is required to be sent.
18. The apparatus of claim 15, wherein the apparatus is further configured to determine, based on the embedded encoding, whether a received message is a coherency probe message or an interrupt message.
19. The apparatus of claim 18, wherein the apparatus is further configured to: responsive to determining that a received message is an interrupt message, broadcast the interrupt message to a plurality of processor cores of a corresponding node.
20. The apparatus of claim 15, wherein fields of the interrupt message are aligned to match fields of a coherency probe message.
Probe-Based Interrupt Delivery

Background

Generally, an interrupt or exception is an event that changes execution from the currently executing instruction stream to another instruction stream. Interrupts are usually generated by the processor or by a device coupled to the processor. A typical interrupt handling mechanism changes the program control flow of the interrupted processor to an interrupt handler. Based on the programming of the interrupt controller or the type of interrupt being delivered, input/output (I/O)-to-CPU and CPU-to-CPU interrupts typically need to be deliverable to any CPU thread in the computing system. Historically, sideband wires have been used to deliver interrupts to the cores. A sideband wire is a dedicated per-core wire used to convey the interrupt type and interrupt vector to each core. However, as the number of cores increases, sideband wires become difficult to scale, resulting in a very large number of wires dedicated to interrupt delivery.

Brief Description of the Drawings

The advantages of the methods and mechanisms described herein may be better understood by referring to the following description in conjunction with the accompanying drawings, in which:
Figure 1 is a block diagram of one implementation of a computing system.
Figure 2 is a block diagram of another implementation of a computing system.
Figure 3 is a block diagram of one implementation of a core complex.
Figure 4 shows examples of coherency probe messages and interrupt messages in accordance with various implementations.
Figure 5 is a generalized flow diagram illustrating one implementation of a method for generating a message to be sent over a coherency probe network.
Figure 6 is a generalized flow diagram illustrating one implementation of a method for determining whether a message is a coherency probe message or an interrupt message.
Figure 7 is a generalized flow diagram illustrating one implementation of a method for generating an interrupt message.
Figure 8 is a generalized flow diagram illustrating one implementation of a method for processing received messages at a cache subsystem.

Detailed Description

In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one of ordinary skill in the art should recognize that the various implementations may be practiced without these specific details. In some instances, well-known structures, components, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein. It will be appreciated that, for simplicity and clarity of illustration, the elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements.

Various systems, apparatuses, and methods for routing interrupts on a coherency probe network are disclosed herein. In one implementation, a computing system includes at least a plurality of processing nodes, a coherency probe network, and one or more control units. The coherency probe network carries coherency probe messages between coherent agents. An interrupt detected by a control unit is converted into a message compatible with coherency probe messages and then routed to a target destination via the coherency probe network. An interrupt message is generated with a first encoding, while a coherency probe message has a second encoding. A cache subsystem determines whether a message received via the coherency probe network is an interrupt message or a coherency probe message based on the encoding embedded in the received message.
An interrupt message is routed to the interrupt controller, while a coherent probe message is processed according to the coherent probe action field embedded in the message.

Referring now to FIG. 1, a block diagram of one implementation of a computing system 100 is shown. In one implementation, the computing system 100 includes at least core complexes 105A to 105N, an input/output (I/O) interface 120, a bus 125, one or more memory controllers 130, and a network interface 135. In other implementations, the computing system 100 includes other components and/or is arranged differently. In one implementation, each core complex 105A to 105N includes one or more general-purpose processors, such as central processing units (CPUs). It is noted that a "core complex" is also referred to herein as a "processing node" or "CPU". In some implementations, one or more of the core complexes 105A to 105N include a data-parallel processor with a highly parallel architecture. Examples of data-parallel processors include graphics processing units (GPUs), digital signal processors (DSPs), and so forth. In various implementations, each processor core within the core complexes 105A to 105N includes an interrupt controller and a cache subsystem with one or more levels of cache. In one implementation, each core complex 105A to 105N includes a cache (e.g., a level-three (L3) cache) shared among multiple processor cores.

The one or more memory controllers 130 represent any number and type of memory controllers accessible by the core complexes 105A to 105N. The one or more memory controllers 130 are coupled to any number and type of memory devices (not shown). For example, the types of memory in the one or more memory devices coupled to the one or more memory controllers 130 can include dynamic random-access memory (DRAM), static random-access memory (SRAM), NAND flash memory, NOR flash memory, ferroelectric random-access memory (FeRAM), and others.
The I/O interface 120 represents any number and type of I/O interfaces (for example, a Peripheral Component Interconnect (PCI) bus, PCI-Extended (PCI-X), PCI Express (PCIE) bus, Gigabit Ethernet (GBE) bus, or Universal Serial Bus (USB)). Various types of peripheral devices can be coupled to the I/O interface 120. Such peripheral devices include (but are not limited to) displays, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth.

In various implementations, the computing system 100 is a computer, laptop, mobile device, game console, server, streaming device, wearable device, or any of various other types of computing systems or devices. It is noted that the number of components of the computing system 100 varies from implementation to implementation. For example, in other implementations, there are more or fewer of each component than the number shown in FIG. 1. It is also noted that in other implementations, the computing system 100 includes other components not shown in FIG. 1. Additionally, in other implementations, the computing system 100 is structured in other ways than shown in FIG. 1.

Turning now to FIG. 2, a block diagram of another implementation of a computing system 200 is shown. In one implementation, the system 200 includes a control unit 210, a coherent probe network 215, an interrupt controller 220, devices 225A to 225N, and nodes 230A to 230D. In one implementation, the control unit 210 is located within a coherence unit. In other implementations, the control unit 210 is part of any of various other types of components. Alternatively, in a further implementation, the control unit 210 is a standalone component.
The devices 225A to 225N represent any number and type of peripheral or input/output (I/O) devices connected to the control unit 210 via the interrupt controller 220.

In one implementation, the system 200 is a system on chip (SoC). In other implementations, the system 200 is any of various other types of computing systems. Nodes 230A to 230D represent any number and type of processing nodes. Each node 230A to 230D includes any number of processor cores 245A to 245N, 250A to 250N, 255A to 255N, and 260A to 260N, respectively. Although four nodes 230A to 230D are shown in the system 200 of FIG. 2, this is for illustrative purposes only. It should be understood that the number of nodes included in the system 200 varies according to the implementation. In other implementations, the system 200 includes other components and/or is organized in other suitable ways.

In one implementation, the system 200 enforces a memory coherence protocol to ensure that a processor core or device does not access data that is simultaneously being modified by another core or device. To comply with the memory coherence protocol, the cores and devices of the system 200 exchange coherence messages (for example, coherent probe messages and probe responses) over the coherent probe network 215. Accordingly, the coherent probe network 215 is designed to carry coherent probe messages and probe responses between the coherent agents of the system 200. A coherent probe message is a message that seeks the coherence state of data associated with a particular memory location. A probe response is typically sent back to the coherent agent that generated the coherent probe message. A probe response indicates the coherence state of the referenced data, transfers data in response to the probe, or provides other information in response to the probe. Conventionally, a coherent probe network carries only coherent probe messages and probe responses.
In the system 200, however, the coherent probe network 215 also carries interrupts for one or more of the nodes 230A to 230D. This allows interrupts to benefit from a dedicated low-latency network that spans the components of the system 200 and scales to any number of threads.

In various implementations, each of the devices 225A to 225N can generate an interrupt by asserting an interrupt signal that is detected by the interrupt controller 220. In response to detecting the interrupt signal, the interrupt controller 220 generates an interrupt message with information such as a destination identifier, a delivery mode, an interrupt vector, or other suitable information. The interrupt controller 220 then conveys the interrupt message to the control unit 210. In one implementation, the control unit 210 converts the interrupt message into a coherent probe message with a special code, and the control unit 210 then transmits this specially coded message over the coherent probe network 215 to one or more targets.

To facilitate the transmission of interrupts over the coherent probe network 215, the control unit 210 includes logic for generating, receiving, processing, and forwarding interrupts. This logic also handles the normal processing of coherent probe messages. In one implementation, when the control unit 210 detects or receives an interrupt, the control unit 210 generates an interrupt message that is compatible with the format of a coherent probe message. Generating the interrupt message in a compatible format allows the coherent probe network 215 to carry the interrupt message in a manner similar to a coherent probe message. Although the interrupt message is compatible with coherent probe messages, the interrupt message includes an embedded code that allows other components to distinguish the interrupt message from a coherent probe message.
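As a sketch of this sender-side conversion, the following hypothetical Python model packs an interrupt into a probe-compatible message. The reserved code and the idea of carrying the interrupt type and target in repurposed address bits follow the scheme described in this document, but all field names, widths, and values here are illustrative assumptions:

```python
# Hypothetical packing of an interrupt message into the coherent
# probe message format. Field widths and code values are assumptions.

ADDR_BITS = 48            # assumed width of the address field (Y bits)
TARGET_BITS = 16          # assumed width of the target subfield (X bits, X < Y)
INTERRUPT_DELIVERY = 0xF  # reserved code distinguishing interrupts

def make_interrupt_message(int_type: int, target: int) -> dict:
    """Build an interrupt message whose fields align with a probe's."""
    assert 0 <= target < (1 << TARGET_BITS)
    assert 0 <= int_type < (1 << (ADDR_BITS - TARGET_BITS))
    return {
        "action_field": INTERRUPT_DELIVERY,                  # embedded code
        "address_field": (int_type << TARGET_BITS) | target,  # repurposed bits
        "response_field": None,                               # no response needed
    }

def unpack_interrupt(message: dict) -> tuple:
    """Recover (interrupt type, target) from the repurposed address field."""
    addr = message["address_field"]
    return addr >> TARGET_BITS, addr & ((1 << TARGET_BITS) - 1)
```

Because the packed message has the same field layout as a coherent probe, the network can route it without any interrupt-specific handling.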
After the interrupt message is generated in the coherence-compatible format, the control unit 210 transmits the interrupt message over the coherent probe network 215 to the one or more nodes 230A to 230D targeted by the interrupt. In one implementation, the control unit 210 broadcasts the interrupt message over the coherent probe network 215 to all nodes 230A to 230D. In another implementation, the control unit 210 sends the interrupt message over the coherent probe network 215 only to the node or nodes targeted by the interrupt message.

In one implementation, the coherent probe network 215 is connected to the cache subsystems 240A to 240D in nodes 230A to 230D, respectively. Each cache subsystem 240A to 240D includes any number of cache levels. For example, in one implementation, each cache subsystem 240A to 240D includes a level-three (L3) cache and a level-two (L2) cache. In this implementation, each core includes a local level-one (L1) cache. In other implementations, each cache subsystem 240A to 240D includes other cache levels. When a given cache subsystem 240A to 240D receives a message via the coherent probe network 215, the given cache subsystem 240A to 240D determines whether the message is an interrupt message or a coherent probe message. If the message is an interrupt message, the given cache subsystem 240A to 240D sends the interrupt message to one or more interrupt controllers in the corresponding core or cores. As shown in the system 200, the nodes 230A to 230D include interrupt controllers 247A to 247N, 252A to 252N, 257A to 257N, and 262A to 262N within the cores 245A to 245N, 250A to 250N, 255A to 255N, and 260A to 260N, respectively. In one implementation, in response to receiving an interrupt message, a given cache subsystem 240A to 240D broadcasts the interrupt message to all cores in the corresponding node.
In another implementation, in response to receiving the interrupt message, a given cache subsystem 240A to 240D sends the interrupt message only to those cores targeted by the interrupt message. The interrupt controller or controllers in those cores then examine the interrupt message and generate an interrupt to send to the target core.

Referring now to FIG. 3, a block diagram of one implementation of a core complex 300 is shown. In one implementation, the core complex 300 includes four processor cores 310A to 310D. In other implementations, the core complex 300 includes other numbers of processor cores. It is noted that a "core complex" may also be referred to herein as a "processing node", "node", or "CPU". In one implementation, the components of the core complex 300 are included in the core complexes 105A to 105N (of FIG. 1).

Each processor core 310A to 310D includes a cache subsystem for storing data and instructions retrieved from a memory subsystem (not shown). For example, in one implementation, each core 310A to 310D includes a corresponding level-one (L1) cache 315A to 315D. Each processor core 310A to 310D also includes or is coupled to a corresponding level-two (L2) cache 320A to 320D. Additionally, in one implementation, the core complex 300 includes a level-three (L3) cache 330 shared by the processor cores 310A to 310D. It is noted that in other implementations, the core complex 300 may include other types of cache subsystems with other numbers of caches and/or other configurations of cache levels.

The L3 cache 330 is coupled to the bus/fabric via the coherent probe network 340. The L3 cache 330 receives both coherent probe and interrupt messages via the coherent probe network 340 and forwards them to the L2 caches 320A to 320D.
In one implementation, the L3 cache 330 broadcasts received coherent probe and interrupt messages to all L2 caches 320A to 320D. In another implementation, the L3 cache 330 forwards a received coherent probe or interrupt message only to those L2 caches 320A to 320D targeted by the message. In this implementation, the L3 cache 330 includes logic to examine coherent probe and interrupt messages to determine their targets. Once a message is received from the L3 cache 330, the L2 caches 320A to 320D examine the message to determine whether it is an interrupt or a coherent probe. The L2 caches 320A to 320D forward interrupt messages to the interrupt controllers 317A to 317D, respectively, for processing, and process coherent probes according to the embedded coherent probe action field.

Turning now to FIG. 4, examples of encoding coherent probe messages and interrupt messages in a mixed message format are shown. Table 400 shows examples of message types that can be sent using the mixed message format. The leftmost column of table 400 indicates the message type 410, and two different types of messages are shown in table 400: a coherent probe message 410A and an interrupt message 410B. In other implementations, other numbers of different types of messages are encoded in the mixed message format. Using a mixed message format allows the interrupt message 410B to be formatted in a manner similar to the coherent probe message 410A. Accordingly, the fields of the interrupt message 410B, or in some cases combinations of fields, are aligned to match the fields of the coherent probe message 410A. The mixed message format includes any number of fields, with the number of fields varying according to the implementation.
As shown in table 400, the mixed message format includes a coherent probe action field 415, an address field 420, a response field 425, and any number of other fields.

The first entry of table 400 shows an example of the coherent probe message 410A. For the coherent probe message 410A, the field 415 is encoded with a coherent probe action indicator 415A. The coherent probe action indicator 415A can be set equal to any of various values depending on the probe action type. For the interrupt message 410B, the field 415 is encoded with an interrupt delivery indicator 415B to indicate that the message is an interrupt. In one implementation, control logic in the cache subsystem (e.g., cache subsystem 240A of FIG. 2) examines field 415 to determine whether a received message is a coherent probe message or an interrupt message.

Field 420 specifies the address of the memory location targeted by the coherent probe message 410A. For the interrupt message 410B, field 420 stores an interrupt type indicator 420B in a first subset of its bits and a target indicator 420C in a second subset of its bits. In other words, the address field 420 is repurposed to hold both the interrupt type indicator 420B and the target indicator 420C of the interrupt message 410B. This is possible because the combination of the interrupt type indicator 420B and the target indicator 420C is the same size as the address field 420A. The interrupt type indicator 420B stores the type of interrupt conveyed by the interrupt message 410B, and the target indicator 420C specifies the target of the interrupt message 410B.

Field 425 specifies the type of response that should be generated after the message is processed. For the coherent probe message 410A, field 425 is encoded with any of various response indicators 425A, which specify the type of response to be sent back to the source.
For the interrupt message 410B, the response field 425 is encoded with a no-response indicator 425B to indicate that no response needs to be sent back to the source. In other implementations, the mixed message format includes other fields. For example, in another implementation, the mixed message format includes an interrupt vector field for storing the memory location of the interrupt handler. Other types of fields are also possible and contemplated for the mixed message format.

Referring now to FIG. 5, one implementation of a method 500 for generating a message for transmission over a coherent probe network is shown. For discussion purposes, the steps in this implementation and those of FIGS. 6 to 8 are shown in sequential order. However, it is noted that in various implementations of the described methods, one or more of the described elements are performed concurrently, in a different order than shown, or omitted entirely. Other additional elements are also performed as desired. Any of the various systems or devices described herein is configured to implement the method 500.

Control logic in the fabric interconnect receives a message in the mixed message format (block 505). In response to receiving the message in the mixed message format, the control logic determines whether the message is a coherent probe message or an interrupt message (block 510). One example of determining whether a message is a coherent probe message or an interrupt message is described in the discussion of method 600 of FIG. 6. If the received message is an interrupt message (the "yes" branch of conditional block 515), the control logic retrieves the target field from the interrupt message, where the target field is a subset of the address field of the mixed message format (block 520).
In other words, if the length of the address field is Y bits, the length of the target field is X bits, where X is less than Y, and where both X and Y are positive integers. An example in which the target field is a subset of the address field is shown in table 400 of FIG. 4. Next, the control unit routes the interrupt message via the coherent probe network to the one or more devices specified in the target field (block 525). If the received message is a coherent probe message (the "no" branch of conditional block 515), the control logic retrieves the address field from the coherent probe message (block 530). Next, the control logic forwards the coherent probe message via the coherent probe network to the one or more devices corresponding to the address specified in the address field (block 535). After blocks 525 and 535, the method 500 ends.

Turning now to FIG. 6, one implementation of a method 600 for determining whether a message is a coherent probe message or an interrupt message is shown. Control logic receives a message via the coherent probe network (block 605). In response to receiving the message, the control logic retrieves the coherent probe action field from the received message (block 610). If the coherent probe action field is encoded with the interrupt delivery indicator (the "yes" branch of conditional block 615), the control logic treats the received message as an interrupt message (block 620). If the coherent probe action field is encoded with a coherent probe action indicator (the "no" branch of conditional block 615), the control logic treats the received message as a coherent probe message (block 625). In other words, if the coherent probe action field of the message is encoded with any value other than the interrupt delivery indicator, the control logic treats the received message as a coherent probe message. After blocks 620 and 625, the method 600 ends.

Referring now to FIG. 7, one implementation of a method 700 for generating an interrupt message is shown. Control logic receives an interrupt (block 705). Depending on the implementation, the control logic is located in a cache subsystem, a coherence point, or another location within the computing system. In response to receiving the interrupt, the control logic generates an interrupt message compatible with a coherent probe message, with the fields of the generated interrupt message aligned with the fields of a coherent probe message (block 710). The control logic then forwards the interrupt message to its target destination via the coherent probe network (block 715). After block 715, the method 700 ends.

Turning now to FIG. 8, one implementation of a method 800 for processing received messages at a cache subsystem is shown. Control logic in the cache subsystem receives a message via the coherent probe network (block 805). In one implementation, the control logic is part of the L2 cache. In other implementations, the control logic is located at other levels of the cache subsystem. In response to receiving the message, the control logic determines whether the message is a coherent probe message or an interrupt message (block 810). One example of determining whether a message is a coherent probe message or an interrupt message is described in method 600 of FIG. 6.

If the message is an interrupt message (the "yes" branch of conditional block 815), the control logic retrieves the target field from the message (block 820). The control logic then routes the interrupt message to the interrupt controller or controllers of the processor core or cores targeted by the interrupt (block 825). Alternatively, in another implementation, the control logic broadcasts the interrupt message to the interrupt controllers of all processor cores in the node.
If the message is a coherent probe message (the "no" branch of conditional block 815), the control logic retrieves the coherent probe action field and address field from the message (block 830). Next, the control logic processes the coherent probe message according to the probe action specified in the coherent probe action field (block 835). After blocks 825 and 835, the method 800 ends.

In various implementations, program instructions of a software application are used to implement the methods and/or mechanisms described herein. For example, program instructions executable by a general-purpose or special-purpose processor are contemplated. In various implementations, such program instructions are represented by a high-level programming language. In other implementations, the program instructions are compiled from a high-level programming language into a binary, intermediate, or other form. Alternatively, program instructions are written that describe the behavior or design of hardware. Such program instructions are represented by a high-level programming language such as C. Alternatively, a hardware design language (HDL) such as Verilog is used. In various implementations, the program instructions are stored on any of a variety of non-transitory computer-readable storage media. During use, a computing system can access the storage medium to provide the program instructions to the computing system for program execution. Generally speaking, such a computing system includes at least one or more memories and one or more processors configured to execute program instructions.

It should be emphasized that the above-described implementations are only non-limiting examples of implementations. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
Embodiments disclosed herein include forksheet transistor devices having a dielectric or conductive ridge. For example, an integrated circuit structure includes a dielectric ridge. A first transistor device includes a first vertical stack of semiconductor channels spaced apart from a first edge of the dielectric ridge. A second transistor device includes a second vertical stack of semiconductor channels spaced apart from a second edge of the dielectric ridge. An N-type gate structure is on the first vertical stack of semiconductor channels, with a portion of the N-type gate structure laterally between, and in contact with, the first edge of the dielectric ridge and the first vertical stack of semiconductor channels. A P-type gate structure is on the second vertical stack of semiconductor channels, with a portion of the P-type gate structure laterally between, and in contact with, the second edge of the dielectric ridge and the second vertical stack of semiconductor channels.
1. An integrated circuit structure, comprising:
a dielectric ridge;
a first transistor device including a first vertical stack of semiconductor channels spaced apart from a first edge of the dielectric ridge;
a second transistor device including a second vertical stack of semiconductor channels spaced apart from a second edge of the dielectric ridge;
an N-type gate structure on the first vertical stack of semiconductor channels, a portion of the N-type gate structure laterally between, and in contact with, the first edge of the dielectric ridge and the first vertical stack of semiconductor channels; and
a P-type gate structure on the second vertical stack of semiconductor channels, a portion of the P-type gate structure laterally between, and in contact with, the second edge of the dielectric ridge and the second vertical stack of semiconductor channels.
2. The integrated circuit structure of claim 1, wherein the first and second vertical stacks of semiconductor channels are first and second stacks of nanoribbons or nanowires.
3. The integrated circuit structure of claim 1 or 2, wherein the total number of semiconductor channels in the first vertical stack of semiconductor channels is the same as the total number of semiconductor channels in the second vertical stack of semiconductor channels.
4. The integrated circuit structure of claim 1 or 2, wherein the total number of semiconductor channels in the first vertical stack of semiconductor channels is different from the total number of semiconductor channels in the second vertical stack of semiconductor channels.
5. An integrated circuit structure, comprising:
a conductive ridge;
first and second dielectric spacers along first and second edges of the conductive ridge, respectively;
a first transistor device including a first vertical stack of semiconductor channels adjacent to the first dielectric spacer along the first edge of the conductive ridge; and
a second transistor device including a second vertical stack of semiconductor channels adjacent to the second dielectric spacer along the second edge of the conductive ridge.
6. The integrated circuit structure of claim 5, wherein the first transistor device is a P-type device and the second transistor device is an N-type device.
7. The integrated circuit structure of claim 5 or 6, wherein the first and second vertical stacks of semiconductor channels are first and second stacks of nanoribbons or nanowires.
8. The integrated circuit structure of claim 5 or 6, wherein the total number of semiconductor channels in the first vertical stack of semiconductor channels is the same as the total number of semiconductor channels in the second vertical stack of semiconductor channels.
9. The integrated circuit structure of claim 5 or 6, wherein the total number of semiconductor channels in the first vertical stack of semiconductor channels is different from the total number of semiconductor channels in the second vertical stack of semiconductor channels.
10. The integrated circuit structure of claim 5 or 6, further comprising:
a first gate structure on the first vertical stack of semiconductor channels, the first gate structure including a first gate electrode and a first gate dielectric; and
a second gate structure on the second vertical stack of semiconductor channels, the second gate structure including a second gate electrode and a second gate dielectric.
11. A computing device, comprising:
a board; and
a component coupled to the board, the component including an integrated circuit structure comprising:
a dielectric ridge;
a first transistor device including a first vertical stack of semiconductor channels spaced apart from a first edge of the dielectric ridge;
a second transistor device including a second vertical stack of semiconductor channels spaced apart from a second edge of the dielectric ridge;
an N-type gate structure on the first vertical stack of semiconductor channels, a portion of the N-type gate structure laterally between, and in contact with, the first edge of the dielectric ridge and the first vertical stack of semiconductor channels; and
a P-type gate structure on the second vertical stack of semiconductor channels, a portion of the P-type gate structure laterally between, and in contact with, the second edge of the dielectric ridge and the second vertical stack of semiconductor channels.
12. The computing device of claim 11, further comprising:
a memory coupled to the board.
13. The computing device of claim 11 or 12, further comprising:
a communication chip coupled to the board.
14. The computing device of claim 11 or 12, further comprising:
a camera coupled to the board.
15. The computing device of claim 11 or 12, wherein the component is a packaged integrated circuit die.
16. A computing device, comprising:
a board; and
a component coupled to the board, the component including an integrated circuit structure comprising:
a conductive ridge;
first and second dielectric spacers along first and second edges of the conductive ridge, respectively;
a first transistor device including a first vertical stack of semiconductor channels adjacent to the first dielectric spacer along the first edge of the conductive ridge; and
a second transistor device including a second vertical stack of semiconductor channels adjacent to the second dielectric spacer along the second edge of the conductive ridge.
17. The computing device of claim 16, further comprising:
a memory coupled to the board.
18. The computing device of claim 16 or 17, further comprising:
a communication chip coupled to the board.
19. The computing device of claim 16 or 17, further comprising:
a camera coupled to the board.
20. The computing device of claim 16 or 17, wherein the component is a packaged integrated circuit die.
Forksheet transistors with dielectric or conductive ridges

Technical Field

Embodiments of the present disclosure relate to integrated circuit structures and, more particularly, to forksheet transistors for use in integrated circuits.

Background

Scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry over the past few decades. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows an increased number of memory or logic devices to be incorporated on a chip, lending to the fabrication of products with increased capabilities. The drive for ever-more capability, however, is not without issue. The necessity to optimize the performance of each device becomes increasingly significant.

In the manufacture of integrated circuit devices, multi-gate transistors, such as tri-gate transistors, have become more prevalent as device dimensions continue to scale down. In conventional processes, tri-gate transistors are generally fabricated on either bulk silicon substrates or silicon-on-insulator substrates. In some instances, bulk silicon substrates are preferred due to their lower cost and because they enable a less complicated tri-gate fabrication process. In another aspect, maintaining mobility improvement and short-channel control as microelectronic device dimensions scale below 10 nanometers (nm) provides a challenge in device fabrication. Nanowires used to fabricate devices provide improved short-channel control.

Scaling multi-gate and nanowire transistors has not been without consequence, however. As the dimensions of these fundamental building blocks of microelectronic circuitry are reduced, and as the sheer number of fundamental building blocks fabricated in a given region is increased, the constraints on the lithographic processes used to pattern these building blocks have become overwhelming.
In particular, there may be a trade-off between the minimum dimension (critical dimension) of features patterned in a semiconductor stack and the spacing between those features.Description of drawings1A is a perspective view illustration of a fork-chip transistor according to an embodiment.FIG. 1B is a cross-sectional illustration of an interdigitated chip transistor across a semiconductor channel, according to an embodiment.2 illustrates a cross-sectional view of an integrated circuit structure including an interdigitated chip transistor with a dielectric ridge in accordance with an embodiment of the present disclosure.3 illustrates a cross-sectional view of another integrated circuit structure including an interdigitated chip transistor with a dielectric ridge in accordance with another embodiment of the present disclosure.4 illustrates cross-sectional views representing various operations in a method for fabricating an integrated circuit structure including an interdigitated chip transistor having a dielectric ridge, in accordance with an embodiment of the present disclosure.5 illustrates cross-sectional views representing various operations in a method for fabricating an integrated circuit structure including an interdigitated chip transistor having a dielectric ridge, in accordance with an embodiment of the present disclosure.6 illustrates cross-sectional views representing various operations in a method for fabricating an integrated circuit structure including a fork chip transistor having conductive ridges, in accordance with an embodiment of the present disclosure.7 illustrates plan views representing various architectures of integrated circuit structures including interdigitated chip transistors with conductive ridges in accordance with embodiments of the present disclosure.8A illustrates cross-sectional views representing various operations in a method for fabricating an integrated circuit structure including a fork chip transistor with conductive ridges, 
in accordance with an embodiment of the present disclosure.

FIG. 8B illustrates cross-sectional views representing various operations in another method of fabricating an integrated circuit structure including a fork-sheet transistor with conductive ridges, in accordance with an embodiment of the present disclosure.

FIG. 9 illustrates a computing device, in accordance with one implementation of an embodiment of the present disclosure.

FIG. 10 is an interposer implementing one or more embodiments of the present disclosure.

Detailed Description

Described herein are fork-sheet transistors with dielectric or conductive ridges, and methods of fabricating fork-sheet transistors with dielectric or conductive ridges. In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the present disclosure may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to those skilled in the art that the present disclosure may be practiced without these specific details. In other instances, well-known features are omitted or simplified so as not to obscure the illustrative implementations.

The following detailed description is merely illustrative in nature and is not intended to limit embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word "exemplary" means "serving as an example, instance, or illustration." Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations.
Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.

This specification includes references to "one embodiment" or "an embodiment." The appearances of the phrases "in one embodiment" or "in an embodiment" are not necessarily referring to the same embodiment. Particular features, structures or characteristics may be combined in any suitable manner consistent with this disclosure.

Terminology. The following paragraphs provide definitions or context for terms found in this disclosure, including the appended claims:

"Comprising." This term is open-ended. As used in the appended claims, this term does not exclude additional structure or acts.

"Configured to." Various units or components may be described or claimed as "configured to" perform one or more tasks. In such contexts, "configured to" is used to connote structure by indicating that the unit or component includes structure that performs those one or more tasks during operation. As such, the unit or component can be said to be configured to perform the task even when the specified unit or component is not currently operational (eg, is not turned on or activated). Reciting that a unit or circuit or component is "configured to" perform one or more tasks is expressly intended not to invoke 35 U.S.C. §112, sixth paragraph, for that unit or component.

"First," "Second," etc. As used herein, these terms are used as labels for the nouns that they precede, and do not imply any type of ordering (eg, spatial, temporal, logical, etc.).

"Coupled." The following description refers to elements or nodes or features being "coupled" together.
As used herein, unless expressly stated otherwise, "coupled" means that one element or node or feature is directly or indirectly joined to (or is in direct or indirect communication with) another element or node or feature, and not necessarily mechanically.

In addition, certain terminology may also be used in the following description for the purpose of reference only, and thus is not intended to be limiting. For example, terms such as "upper," "lower," "above," and "below" refer to directions in the drawings to which reference is made. Terms such as "front," "rear," "back," "side," "outer," and "inner" describe the orientation or location, or both, of portions of a component within a consistent but arbitrary frame of reference, which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.

"Inhibit." As used herein, inhibit is used to describe reducing or minimizing an effect. When a component or feature is described as inhibiting an action, motion, or condition, it may completely prevent the result or outcome or future state completely. Additionally, "inhibit" can also refer to a reduction or lessening of the outcome, performance, or effect which might otherwise occur. Accordingly, when a component, element, or feature is referred to as inhibiting a result or state, it need not completely prevent or eliminate the result or state.

Embodiments described herein may be related to front-end-of-line (FEOL) semiconductor processing and structures. FEOL is the first portion of integrated circuit (IC) fabrication, where the individual devices (eg, transistors, capacitors, resistors, etc.) are patterned in a semiconductor substrate or layer. FEOL generally covers everything up to (but not including) the deposition of metal interconnect layers.
Following the final FEOL operation, the result is typically a wafer with isolated transistors (eg, without any wires).

Embodiments described herein may be related to back-end-of-line (BEOL) semiconductor processing and structures. BEOL is the second portion of IC fabrication, where the individual devices (eg, transistors, capacitors, resistors, etc.) become interconnected with wiring on the wafer (eg, one or more metallization layers). BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connections. In the BEOL part of the fabrication stage, contacts (pads), interconnect wires, vias, and dielectric structures are formed. For modern IC processes, more than 10 metal layers may be added in the BEOL.

Embodiments described below may be applicable to FEOL processing and structures, BEOL processing and structures, or both FEOL and BEOL processing and structures. In particular, although an exemplary processing scheme may be illustrated using a FEOL processing scenario, such approaches may also be applicable to BEOL processing. Likewise, although an exemplary processing scheme may be illustrated using a BEOL processing scenario, such approaches may also be applicable to FEOL processing.

Various operations may be described as multiple discrete operations in turn, in a manner that is most helpful in understanding the present disclosure. However, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.

One or more embodiments described herein are directed to self-aligned nanoribbon structures with dielectric ridges, which can be viewed as versions of fork-sheet (or nanocomb) transistors.
One or more embodiments described herein are directed to fork-sheet (or nanocomb) transistors with Faraday shields, which may include conductive ridges.

To provide context, for continued cell size scaling, nanowires/nanoribbons, self-aligned dielectric walls (or self-aligned gate endcaps, SAGE), and stacked transistors are three viable boosters. Unlike FinFETs, nanowire or nanoribbon structures allow a higher drive current per footprint due to their stackability. Self-aligned gate endcap (SAGE) processing uses dielectric walls to separate NMOS and PMOS, thus reducing the spacing of gate extensions and N-P boundaries over active fins. A nanocomb transistor architecture combines nanoribbon channels with self-aligned dielectric walls to dramatically scale cell heights in 2D CMOS.

To provide further context, a fork-sheet transistor architecture has been proposed to address spacing requirements between features. In a fork-sheet architecture, a dielectric backbone or ridge is disposed between a first transistor and a second transistor. The semiconductor channels (eg, sheets, wires, etc.) of the first and second transistors contact opposite sidewalls of the dielectric ridge. As such, the spacing between the first transistor and the second transistor is reduced to the width of the dielectric ridge. Such an architecture does not allow gate-all-around (GAA) control of the semiconductor channel, since one surface of the semiconductor channel contacts the dielectric ridge. Additionally, a compact interconnect architecture between the first transistor and the second transistor has not been proposed.

As noted above, fork-sheet transistors allow for increased density of non-planar transistor devices. An example of a semiconductor device 100 with fork-sheet transistors 120A and 120B is shown in FIG. 1A.
The fork-sheet transistors include a dielectric ridge 110 extending up from a substrate 101, with a transistor 120 adjacent to each sidewall of the dielectric ridge 110. As such, the spacing between transistors 120A and 120B is equal to the width of the dielectric ridge 110. Accordingly, the density of such fork-sheet transistors 120 can be increased, compared to other non-planar transistor architectures (eg, FinFETs, nanowire transistors, etc.).

Sheets 105 of semiconductor material extend (laterally) away from the dielectric ridge 110. In the illustration of FIG. 1A, sheets 105A and 105B are shown on either side of the dielectric ridge 110. The sheets 105A are for a first transistor 120A, and the sheets 105B are for a second transistor 120B. The sheets 105A and 105B pass through a gate structure 112. Portions of the sheets 105A and 105B within the gate structure 112 are considered channels, and portions of the sheets 105A and 105B on opposite sides of the gate structure 112 are considered source/drain regions. In some implementations, the source/drain regions comprise epitaxially grown semiconductor bodies, and the sheets 105 may be present only within the gate structures 112. That is, outside the gate structures 112, the stacked sheets 105A and 105B are replaced by blocks of semiconductor material.

Referring now to FIG. 1B, a cross-sectional illustration of the semiconductor device 100 through the gate structure 112 is shown. As shown, vertical stacks of semiconductor channels 106A and 106B are provided through the gate structure 112. The semiconductor channels 106A and 106B are connected to source/drain regions out of the plane of FIG. 1B. The semiconductor channels 106A and 106B are surrounded on three sides by a gate dielectric 108. Surfaces 107 of the semiconductor channels 106A and 106B are in direct contact with the dielectric ridge 110. A work function metal 109 may surround the gate dielectric 108, and gate fill metals 113A and 113B may surround the work function metal 109.
In this illustration, the semiconductor channels 106A and 106B are shown with different shadings. However, in some implementations, the semiconductor channels 106A and 106B may be the same material. An isolation layer 103 may be disposed over the gate fill metals 113A and 113B.

While such fork-sheet transistors 120A and 120B provide many benefits, there are still many areas for improvement in order to provide higher densities, improved interconnect architectures, and improved performance. For example, embodiments disclosed herein provide further density improvements by stacking multiple transistor layers over each other. That is, while the semiconductor device 100 in FIGS. 1A and 1B illustrates a single layer (ie, a pair of adjacent fork-sheet transistors 120A and 120B), embodiments disclosed herein may include a first layer and a second layer within the same footprint as that illustrated in FIGS. 1A and 1B (eg, to provide four fork-sheet transistors). Additionally, embodiments disclosed herein provide interconnect architectures that allow electrical coupling between the first layer and the second layer in order to efficiently utilize the multiple layers. Additionally, embodiments disclosed herein include interconnect architectures that allow for backside connections to buried layers.

In an embodiment, the material of the dielectric ridge may be composed of a material suitable to ultimately electrically isolate, or contribute to the isolation of, active regions of adjacent transistor devices. For example, in one embodiment, the dielectric ridge is composed of a dielectric material such as, but not limited to, silicon dioxide, silicon oxynitride, silicon nitride, or carbon-doped silicon nitride. In an embodiment, the dielectric ridge is composed of or includes a dielectric such as an oxide of silicon (eg, silicon dioxide (SiO2)), a doped oxide of silicon, a fluorinated oxide of silicon, a carbon-doped oxide of silicon, a low-k dielectric material known in the art, or a combination thereof.
The dielectric ridge material may be formed by a technique such as, for example, chemical vapor deposition (CVD), physical vapor deposition (PVD), or by other deposition methods. In an embodiment, the dielectric ridge is composed of a high-k layer together with a low-k layer, eg, where a high-k portion is included for rigidity.

In a first aspect, in accordance with one or more embodiments of the present disclosure, self-aligned nanoribbon structures with dielectric ridges are described.

The nanocomb architecture described below can be implemented to enable self-aligned gate edges and SRAM scaling. Furthermore, this architecture alleviates the challenges of metal gate formation and wrap-around contacts in stacked nanowire or nanoribbon architectures. As an example, FIG. 2 illustrates a cross-sectional view of an integrated circuit structure including a fork-sheet or nanocomb transistor with a dielectric ridge, in accordance with an embodiment of the present disclosure.

Referring to FIG. 2, an integrated circuit structure 200 includes a PMOS region 202 and an NMOS region 204 above a substrate 201, such as a silicon substrate. The PMOS region 202 includes a stack of nanowires or nanoribbons 206 and a P-type metal gate electrode 210. The NMOS region 204 includes a stack of nanowires or nanoribbons 208 and an N-type metal gate electrode 212. A dielectric ridge 214 is between the PMOS region 202 and the NMOS region 204. In one embodiment, the dielectric ridge 214 is in contact with the nanowires or nanoribbons 206 of the PMOS region 202 and is in contact with the nanowires or nanoribbons 208 of the NMOS region 204, as is depicted.

It is to be appreciated that the architecture of FIG. 2 can be associated with a loss of short channel effect (SCE) control, which is otherwise a key advantage of gate-all-around (GAA) architectures, and with electrical coupling between the nMOS and the pMOS.
For architectures such as those described in association with FIG. 2, attempts to improve the SCE have included attempts to improve interface quality through engineering measures such as nitridation to compensate for the SCE loss. Also, recessed ridges have been implemented to improve the SCE. However, improvements can still be made, particularly since such approaches do not provide a path to block or inhibit the electrical coupling issue. By contrast, in accordance with embodiments of the present disclosure, a self-aligned space is used to accommodate a high-k/metal gate between one side edge of a nanoribbon and a dielectric ridge in order to improve the SCE and to shield the electrical coupling between NMOS and PMOS. As an example, FIG. 3 illustrates a cross-sectional view of another integrated circuit structure including a fork-sheet transistor with a dielectric ridge, or a self-aligned nanoribbon transistor, in accordance with another embodiment of the present disclosure.

Referring to FIG. 3, an integrated circuit structure 300 includes a PMOS region 302 and an NMOS region 304 above a substrate 301, such as a silicon substrate. The PMOS region 302 includes a stack of nanowires or nanoribbons 306 and a P-type metal gate electrode 310. The NMOS region 304 includes a stack of nanowires or nanoribbons 308 and an N-type metal gate electrode 312. A dielectric ridge 314 is between the PMOS region 302 and the NMOS region 304. In one embodiment, the dielectric ridge 314 is spaced apart from the nanowires or nanoribbons 306 of the PMOS region 302 and is spaced apart from the nanowires or nanoribbons 308 of the NMOS region 304, as is depicted. In one such embodiment, the P-type metal gate electrode 310 is laterally between the dielectric ridge 314 and the nanowires or nanoribbons 306 of the PMOS region 302. The N-type metal gate electrode 312 is laterally between the dielectric ridge 314 and the nanowires or nanoribbons 308 of the NMOS region 304.

Referring again to FIG. 3, the integrated circuit structure includes the dielectric ridge 314.
A first transistor device includes a first vertical stack of semiconductor channels 306 spaced apart from a first edge of the dielectric ridge 314. A second transistor device includes a second vertical stack of semiconductor channels 308 spaced apart from a second edge of the dielectric ridge 314. A P-type gate structure 310 is on the first vertical stack of semiconductor channels 306, and a portion 316 of the P-type gate structure 310 is laterally between and in contact with the first edge of the dielectric ridge 314 and the first vertical stack of semiconductor channels 306. An N-type gate structure 312 is on the second vertical stack of semiconductor channels 308, and a portion 318 of the N-type gate structure 312 is laterally between and in contact with the second edge of the dielectric ridge 314 and the second vertical stack of semiconductor channels 308.

In an embodiment, the P-type gate structure 310 includes a first gate electrode and a first gate dielectric. In one such embodiment, the portion 316 of the P-type gate structure 310 laterally between and in contact with the first edge of the dielectric ridge 314 and the first vertical stack of semiconductor channels 306 includes a portion of the first gate electrode and a portion of the first gate dielectric. In an embodiment, the N-type gate structure 312 includes a second gate electrode and a second gate dielectric. In one such embodiment, the portion 318 of the N-type gate structure 312 laterally between and in contact with the second edge of the dielectric ridge 314 and the second vertical stack of semiconductor channels 308 includes a portion of the second gate electrode and a portion of the second gate dielectric.

In accordance with one or more embodiments of the present disclosure, a gate-all-around process flow is followed by a shallow trench isolation (STI) recess.
Self-aligned SiGe epi caps, which grow only on Si and SiGe surfaces, can be used. Next is ridge fill, with an isotropic recess etch, to fill the space between the NMOS and the PMOS. Conventional processing operations can then be implemented to complete the devices. It is to be appreciated that there may be approaches to implement a sacrificial cap without an epitaxial process, but a SiGe cap process can inherently add a self-aligned feature. Also, a SiGe cap process eliminates an extra dimple etch for making internal spacers. Otherwise, an additional dimple etch may be needed to incorporate a low-k spacer into a sacrificial cap.

The process can include self-aligned cap formation and ridge fill. As an example, FIG. 4 illustrates cross-sectional views representing various operations in a method of fabricating an integrated circuit structure including a fork-sheet transistor having a dielectric ridge, in accordance with an embodiment of the present disclosure.

Referring to FIG. 4, a starting structure 400 includes silicon subfins 402 in an isolation layer 404 above a substrate 401. Fin stacks 406 are over the silicon subfins 402. The fin stacks 406 each include silicon nanowires 408 with intervening sacrificial silicon germanium layers 410. A sacrificial cap 412, such as a silicon germanium cap, is formed over the starting structure 400. A dielectric material 414 is then formed. The dielectric material 414 is then patterned to form a stack 450 including a dielectric ridge 414A. Subsequent processing can then be performed, as described below.

FIG. 5 illustrates cross-sectional views representing various operations in a method of fabricating an integrated circuit structure including a fork-sheet transistor having a dielectric ridge, in accordance with an embodiment of the present disclosure.

Referring to FIG. 5, a starting structure 500 includes the stack 450 of FIG. 4 in a gate trench 504 in a dielectric layer 522, following a dummy gate removal process.
Then, the sacrificial cap 412 and the sacrificial silicon germanium layers 410 of the stack 450 are removed. The resulting structure leaves gaps 506 between the silicon nanowires 408 and the dielectric ridge 414A. Subsequent gate stack formation can provide gate material in the gaps 506 between the silicon nanowires 408 and the dielectric ridge 414A, eg, to provide a structure such as that described in association with FIG. 3.

In another aspect, in accordance with one or more embodiments of the present disclosure, nanocomb transistors with Faraday shields are described.

To provide context, the nanocomb transistor (or fork-sheet transistor) architecture is a viable option for cell height scaling. A nanocomb architecture can be used with nanoribbon transistors in combination with self-aligned dielectric walls to reduce the spacing between NMOS and PMOS boundaries. However, a thin dielectric wall may cause dynamic threshold voltage fluctuations due to a capacitive coupling effect from the thin dielectric wall adjacent to the gate. This can cause failures in CMOS logic gates if a transistor threshold voltage is disturbed by devices in its vicinity. Since the NMOS and PMOS are very close together, fork-sheet/nanocomb transistors can have significant crosstalk. As the dielectric wall becomes narrower, an adjacent gate can be regarded as a back gate, which is coupled through the dielectric wall to the Si channel on the other side. This can cause dynamic Vt fluctuations and cause failures in CMOS circuits.

To address such issues, in an embodiment, a Faraday shield is fabricated in a dielectric wall, where the wall is filled with a conductive material to provide a conductive ridge. In one embodiment, the conductive material is then grounded so that it can shield the electric field from adjacent devices, thereby preventing cross-talk between adjacent NMOS or PMOS devices. In an embodiment, the grounded wall can prevent cross-talk from adjacent devices.
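The capacitive coupling mechanism described above can be illustrated with a simple parallel-plate estimate. The following sketch is illustrative only; the dimensions and dielectric constant below are assumed values chosen for the example, not values taken from this disclosure:

```python
# Illustrative parallel-plate estimate of gate-to-channel coupling through a
# thin dielectric wall. All dimensions and material values are assumed for
# illustration; they are not taken from this disclosure.

EPS0 = 8.854e-12  # vacuum permittivity, F/m


def wall_coupling_capacitance(k_wall, wall_thickness_nm, ribbon_width_nm, gate_length_nm):
    """Approximate coupling capacitance (in attofarads) between an adjacent
    gate and a channel across a dielectric wall, modeled as a parallel plate:
    C = eps0 * k * A / d."""
    area_m2 = (ribbon_width_nm * 1e-9) * (gate_length_nm * 1e-9)
    d_m = wall_thickness_nm * 1e-9
    return EPS0 * k_wall * area_m2 / d_m * 1e18  # F -> aF


# Narrowing the wall (eg, 10 nm -> 5 nm) roughly doubles the coupling, which
# is why thinner walls worsen dynamic Vt fluctuation. A grounded conductive
# ridge instead terminates the field lines, suppressing coupling between the
# two sides.
c_10nm = wall_coupling_capacitance(k_wall=3.9, wall_thickness_nm=10, ribbon_width_nm=30, gate_length_nm=20)
c_5nm = wall_coupling_capacitance(k_wall=3.9, wall_thickness_nm=5, ribbon_width_nm=30, gate_length_nm=20)
print(f"10 nm wall: {c_10nm:.2f} aF, 5 nm wall: {c_5nm:.2f} aF")
```

The inverse dependence on wall thickness is the whole story of the trade-off: the density benefit of a narrower wall directly raises the back-gate coupling, which the grounded Faraday shield removes.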
Thus, embodiments can be implemented to address cross-talk issues in nanocomb transistor architectures.

FIG. 6 illustrates cross-sectional views representing various operations in a method of fabricating an integrated circuit structure including a fork-sheet transistor having conductive ridges, in accordance with an embodiment of the present disclosure.

Referring to FIG. 6, a starting structure 600 includes a region above a substrate 602 with nanowires or nanoribbons 604 and an NMOS gate stack 608, and a region above the substrate 602 with nanowires or nanoribbons 606 and a PMOS gate stack 610. Dielectric structures 612 are on either side of and between the two regions. Each dielectric structure 612 includes a dielectric cap 616 on a dielectric wall 614, with adjacent dielectric spacers 618.

Referring again to FIG. 6, in a first option, a structure 620 includes a conductive ridge 622 that replaces the dielectric cap 616 and the dielectric wall 614 of the dielectric structure 612 between the two regions. A conductive contact 626 in a dielectric layer 624 is formed in electrical contact with the conductive ridge 622. In a second option, a structure 630 includes a conductive ridge 632 that replaces the dielectric wall 614 of the dielectric structure 612 between the two regions. A conductive contact 634 in the substrate 602, or in a dielectric layer, is formed in electrical contact with the conductive ridge 632.

Referring again to FIG. 6, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a conductive ridge 622 or 632. First and second dielectric spacers 618 are along first and second edges of the conductive ridge 622 or 632, respectively. A first transistor device includes a first vertical stack of semiconductor channels 604 adjacent to the first dielectric spacer 618 along the first edge of the conductive ridge 622 or 632.
A second transistor device includes a second vertical stack of semiconductor channels 606 adjacent to the second dielectric spacer 618 along the second edge of the conductive ridge 622 or 632.

FIG. 7 illustrates plan views representing various architectures of integrated circuit structures including fork-sheet transistors with conductive ridges, in accordance with embodiments of the present disclosure.

Referring to FIG. 7, an integrated circuit structure 700 includes an NMOS region 702 and a PMOS region 704. The NMOS region includes an NMOS gate electrode 706 and intervening contacts 708. The PMOS region includes a PMOS gate electrode 710 and intervening contacts 712. Conductive ridges are in the form of inner conductive plugs 714A and outer conductive plugs 714B.

Referring again to FIG. 7, an integrated circuit structure 750 includes an NMOS region 752 and a PMOS region 754. The NMOS region includes an NMOS gate electrode 756 and intervening contacts 758. The PMOS region includes a PMOS gate electrode 760 and intervening contacts 762. Conductive ridges are in the form of an inner conductive plane 764A and outer conductive planes 764B.

In an embodiment, the conductive material is a metal, graphene, or a doped semiconductor such as polysilicon or amorphous silicon. Also, depending on the circuit layout, the conductive material can be filled in as plugs or as planes. In some cases, plugs may be preferred in order to avoid parasitic capacitance between the walls and the source/drain material. In other cases, however, planes may be preferred, since the conductive walls provide another routing path from the front side of a wafer to the back side of the wafer. Also, in an embodiment, Faraday planes are parallel to the fins, while Faraday plugs are only made at specific locations of a poly gate.

A Faraday shield plug process can be accomplished by etching a poly cut, removing the dielectric walls, and filling the walls with a conductive material.
The conductive walls can be grounded from the frontside interconnects or the backside interconnects. FIG. 8A illustrates cross-sectional views representing various operations in a method of fabricating an integrated circuit structure including a fork-sheet transistor with conductive ridges, in accordance with an embodiment of the present disclosure.

Referring to part (i) of FIG. 8A, a starting structure 800 includes stacks of nanowires or nanoribbons 802 and intervening sacrificial layers 804 above an oxide layer 806 on a substrate 801. Dielectric structures 808 are outside of and between the stacks of nanowires or nanoribbons 802. A sacrificial gate material 810, such as a polysilicon material, is over the stacks of nanowires or nanoribbons 802 and intervening sacrificial layers 804 and the dielectric structures 808. Referring to part (ii) of FIG. 8A, an opening 812 is formed in the sacrificial gate material 810. Referring to part (iii) of FIG. 8A, a dielectric wall is removed from a dielectric structure 808 through the opening 812. Referring to part (iv) of FIG. 8A, a conductive ridge 816 is formed in place of the removed dielectric wall.

Faraday shield planes can be fabricated after fin patterning. In an example, FIG. 8B illustrates cross-sectional views representing various operations in another method of fabricating an integrated circuit structure including a fork-sheet transistor with conductive ridges, in accordance with an embodiment of the present disclosure.

Referring to part (i) of FIG. 8B, a starting structure includes stacks of nanowires or nanoribbons 852 and intervening sacrificial layers 854 above a substrate 851. A fin hardmask 856 is over the stacks of nanowires or nanoribbons 852 and intervening sacrificial layers 854. A central fin portion has spacers 858 thereon. Referring to part (ii) of FIG.
8B, the stacks of nanowires or nanoribbons 852 and intervening sacrificial layers 854 are etched to form openings that are eventually filled with a sacrificial hardmask 860 (such as a carbon hardmask). Referring to part (iii) of FIG. 8B, the structure of part (ii) is subjected to a wall etch to form outer openings 862 and an inner opening 864. Referring to part (iv) of FIG. 8B, an inner dielectric wall 868 is formed in the inner opening 864. Outer dielectric walls 866 are formed in the outer openings 862. Then, the sacrificial hardmask 860 is removed. Referring to part (v) of FIG. 8B, the inner dielectric wall 868 is replaced with a conductive ridge 870.

In an embodiment, an underlying semiconductor substrate as described herein represents a general workpiece object used to manufacture integrated circuits. The semiconductor substrate often includes a wafer or other piece of silicon or another semiconductor material. Suitable semiconductor substrates include, but are not limited to, single crystal silicon, polycrystalline silicon and silicon on insulator (SOI), as well as similar substrates formed of other semiconductor materials, such as substrates including germanium, carbon, or group III-V materials.

It is to be appreciated that, in certain embodiments, the channel layers (or corresponding release layers) of a plurality of nanowires (or nanoribbons) may be composed of silicon. As used throughout, a silicon layer may be used to describe a silicon material composed of a very substantial amount of, if not all, silicon. However, it is to be appreciated that, practically, 100% pure silicon may be difficult to form and, hence, could include a tiny percentage of carbon, germanium or tin. Such impurities may be included as an unavoidable impurity or component during deposition of Si, or may "contaminate" the Si upon diffusion during post deposition processing.
As such, embodiments described herein for silicon layers may include silicon layers that include relatively small amounts (eg, "impurity" levels) of non-Si atoms or species, such as Ge, C, or Sn. It is to be appreciated that the silicon layers described herein may be undoped, or may be doped with dopant atoms, such as boron, phosphorous or arsenic.It is to be appreciated that in certain embodiments, the channel layer (or corresponding release layer) of the plurality of nanowires (or nanoribbons) may be composed of silicon germanium. As used throughout, a silicon germanium layer may be used to describe a silicon germanium material that consists of a substantial portion of both silicon and germanium, such as at least 5% of both. In some embodiments, the (atomic) amount of germanium is the same or substantially the same as the amount of silicon (eg, Si50Ge50). In some embodiments, the amount of germanium is greater than the amount of silicon. In certain embodiments, the silicon germanium layer comprises approximately 60% germanium and approximately 40% silicon (Si40Ge60). In other embodiments, the amount of silicon is greater than the amount of germanium. In a particular embodiment, the silicon germanium layer comprises approximately 30% germanium and approximately 70% silicon (Si70Ge30). It is to be appreciated that, in practice, 100% pure silicon germanium (commonly referred to as SiGe) may be difficult to form and therefore may contain very small percentages of carbon or tin. Such impurities may be included as unavoidable impurities or components during deposition of SiGe, or may "contaminate" SiGe upon diffusion during post-deposition processing. As such, embodiments described herein for silicon germanium layers may include silicon germanium layers containing relatively small amounts (eg, "impurity" levels) of non-germanium and non-silicon atoms or species, such as carbon or tin. 
It is to be appreciated that the silicon germanium layers described herein may be undoped, or may be doped with dopant atoms, such as boron, phosphorous, or arsenic.
It is to be appreciated that in certain embodiments, the channel layer (or corresponding release layer) of the plurality of nanowires (or nanoribbons) may be composed of germanium. As used throughout this document, a germanium layer may be used to describe a germanium material that consists of a very large amount, if not all, of germanium. However, it is to be appreciated that, in practice, 100% pure Ge may be difficult to form, and thus may include very small percentages of carbon, silicon or tin. Such impurities may be included as unavoidable impurities or components during deposition of Ge, or may "contaminate" Ge upon diffusion during post-deposition processing. As such, embodiments described herein for germanium layers may include germanium layers containing relatively small amounts (e.g., "impurity" levels) of non-germanium atoms or species, such as Si, C, or Sn. It is to be appreciated that the germanium layers described herein may be undoped or may be doped with dopant atoms, such as boron, phosphorous or arsenic.
It is to be appreciated that while some embodiments describe the use of Si or SiGe (wires or ribbons) and complementary Si or SiGe (sacrificial) layers, other pairs of semiconductor materials that can be alloyed and epitaxially grown may be implemented (e.g., InAs and InGaAs) to implement the various embodiments herein.
In an embodiment, the source or drain structure is fabricated from a silicon alloy formed using a selective epitaxial deposition process. In some implementations, the silicon alloy can be in situ doped silicon germanium, in situ doped silicon carbide, or in situ doped silicon. In alternative implementations, other silicon alloys may be used.
For example, alternative silicon alloy materials that may be used include, but are not limited to, nickel silicide, titanium silicide, and cobalt silicide, possibly doped with one or more of boron and/or aluminum.
In embodiments, a dielectric spacer may separate the gate electrode from the source or drain structures. The nanowire channel can pass through the spacer to connect to source or drain structures on either side of the nanowire channel. In an embodiment, the gate dielectric surrounds the perimeter of the exposed portion of the nanowire or nanoribbon channel. For example, the gate dielectric may be any suitable oxide, such as silicon dioxide or a high-k gate dielectric material. Examples of high-k gate dielectric materials include, for example: hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, when using high-k materials, an annealing process may be performed on the gate dielectric layer to improve its quality.
In an embodiment, the gate electrode surrounds the gate dielectric layer. It is to be appreciated that the gate electrode may include a work function metal over the gate dielectric layer, and a gate fill metal. When the work function metal is to be used as the N-type work function metal, the work function metal of the gate electrode preferably has a work function between about 3.9 eV and about 4.2 eV. N-type materials that can be used to form the metal of the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, and metal carbides including these elements, i.e., titanium carbide, zirconium carbide, tantalum carbide, hafnium carbide and aluminum carbide.
When the work function metal is to be used as the P-type work function metal, the work function metal of the gate electrode preferably has a work function between about 4.9 eV and about 5.2 eV. P-type materials that can be used to form the metal of the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides such as ruthenium oxide.
In the illustrated embodiment, each distinct transistor is shown as having three or four nanowire or nanoribbon channels. However, it will be appreciated that, according to various embodiments, each transistor may include any number of nanowire or nanoribbon channels.
In another aspect, the integrated circuit structures described herein can be fabricated using a back-side reveal of front-side structures fabrication approach. In some exemplary embodiments, exposure of the backside of a transistor or other device structure requires wafer-level backside processing. In contrast to conventional through-silicon via (TSV) type techniques, exposure of the backside of a transistor as described herein can be performed at the density of device cells, and even within sub-regions of a device. Additionally, such exposure of the backside of the transistor can be carried out to remove substantially all of the donor substrate on which the device layer was disposed during frontside device processing. As such, micrometer-deep TSVs become unnecessary where the thickness of the semiconductor in the device cell may be only tens or hundreds of nanometers after exposure of the backside of the transistor.
The exposure techniques described herein may enable a paradigm shift from "bottom-up" device fabrication to "center-out" fabrication, where the "center" is any layer that is employed in front-side fabrication, revealed from the backside, and employed again in backside fabrication.
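As a rough illustration of the preferred work-function windows quoted above (about 3.9-4.2 eV for an N-type work function metal and about 4.9-5.2 eV for a P-type one), a hypothetical helper could classify a candidate metal as follows. The function name and the inclusive boundary handling are assumptions, not part of the disclosure.

```python
def work_function_type(wf_ev: float) -> str:
    # Preferred windows per the text: ~3.9-4.2 eV (N-type work function
    # metal) and ~4.9-5.2 eV (P-type work function metal).
    if 3.9 <= wf_ev <= 4.2:
        return "N-type"
    if 4.9 <= wf_ev <= 5.2:
        return "P-type"
    return "outside preferred windows"
```

For instance, a metal with a work function of about 4.0 eV falls in the N-type window, while one at about 4.5 eV falls in neither preferred range.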
Processing both the front side and the exposed back side of the device structure can address many of the challenges associated with fabricating 3D ICs when primarily reliant on front side processing.
The backside exposure of the transistor approach can be employed, for example, to remove at least a portion of the carrier layer and the intermediate layer of the donor-host substrate assembly. The process flow begins with the input of the donor-host substrate assembly. The thickness of the carrier layer in the donor-host substrate assembly is polished (e.g., CMP) and/or etched using a wet or dry (e.g., plasma) etching process. Any grinding, polishing and/or wet/dry etching process known to be suitable for the composition of the carrier layer may be employed. For example, where the carrier layer is a Group IV semiconductor (e.g., silicon), CMP slurries known to be suitable for semiconductor thinning may be employed. Likewise, any wet etchant or plasma etching process known to be suitable for thinning Group IV semiconductors may be employed.
In some embodiments, prior to the foregoing, the carrier layer is cleaved along a fracture plane substantially parallel to the intermediate layer. This cleavage or fracture process can be used to remove a significant portion of the carrier layer as a bulk mass, thereby reducing the polishing or etching time required to remove the carrier layer. For example, with a carrier layer thickness of 400-900 microns, cleavage of 100-700 microns can be achieved by practicing any blanket implant known to promote wafer-level fracture. In some exemplary embodiments, light elements (e.g., H, He, or Li) are implanted to a uniform target depth within the carrier layer where fracture planes are desired. Following such a cleaving process, the thickness of the carrier layer remaining in the donor-host substrate assembly can then be polished or etched to complete the removal.
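The arithmetic behind the cleave-then-polish sequence above is simple; the sketch below (illustrative only; the removal rate is an assumed figure, not from the source) shows how pre-cleaving reduces the residual thickness that polishing or etching must clear.

```python
def remaining_after_cleave(carrier_um: float, cleaved_um: float) -> float:
    # Residual carrier thickness left after the fracture step.
    if cleaved_um > carrier_um:
        raise ValueError("cannot cleave more than the carrier thickness")
    return carrier_um - cleaved_um


def polish_time_min(remaining_um: float, rate_um_per_min: float) -> float:
    # Time to clear the residue at an assumed constant removal rate.
    return remaining_um / rate_um_per_min


# e.g., a 700 um carrier cleaved at 600 um leaves 100 um to polish;
# at an assumed 2 um/min that is 50 minutes instead of 350.
```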
Alternatively, grinding, polishing and/or etching operations may be employed to remove greater thicknesses of the carrier layer without breaking the carrier layer.
Next, exposure of the intermediate layer is detected. Detection is used to identify the point in time when the backside surface of the donor substrate has almost advanced to the device layer. Any endpoint detection technique known to be suitable for detecting transitions between the materials employed for the carrier layer and the intermediate layer may be practiced. In some embodiments, the one or more endpoint criteria are based on detecting changes in optical absorption or emission of the backside surface of the donor substrate during the polishing or etching performed. In some other embodiments, the endpoint criteria are associated with changes in optical absorption or emission of by-products during polishing or etching of the backside surface of the donor substrate. For example, the absorption or emission wavelengths associated with carrier layer etch by-products may vary as a function of the different constitutions of the carrier layer and the intermediate layer. In other embodiments, the endpoint criteria are associated with changes in the mass of species in by-products of polishing or etching the backside surface of the donor substrate. For example, by-products of processing can be sampled by a quadrupole mass analyzer, and changes in species mass can be related to different compositions of the carrier layer and the intermediate layer.
In another exemplary embodiment, the endpoint criterion is associated with a change in friction between the backside surface of the donor substrate and the polishing surface in contact with the backside surface of the donor substrate.
In cases where the removal process is selective to the carrier layer relative to the interlayer, detection of the interlayer can be relaxed, because non-uniformity in the carrier removal process can be mitigated by the etch rate delta between the carrier layer and the interlayer. Detection may even be skipped if the grinding, polishing and/or etching operations remove the interlayer at a rate substantially lower than the rate at which the carrier layer is removed. If no endpoint criteria are employed, grinding, polishing and/or etching operations of predetermined fixed duration may be stopped on the interlayer material if the thickness of the interlayer is sufficient for the selectivity of the etching process. In some examples, the ratio of carrier etch rate to interlayer etch rate is 3:1 to 10:1 or more.
When the interlayer is exposed, at least a portion of the interlayer can be removed. For example, one or more component layers of the intermediate layer may be removed. For example, a thickness of the intermediate layer can be removed uniformly by polishing. Alternatively, a masked or blanket etch process can be used to remove a thickness of the interlayer. This process may employ the same polishing or etching process as is used to thin the carrier, or may be a different process with different process parameters. For example, where the interlayer provides an etch stop for the carrier removal process, the latter operation may employ a different polishing or etch process that favors removal of the interlayer over removal of the device layer.
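The role of the 3:1 to 10:1 etch-rate ratio above can be illustrated with a small sketch (hypothetical helpers; the numbers in the usage comment are examples, not from the source): for a given carrier over-etch, interlayer loss scales inversely with selectivity, which is why a modest interlayer thickness can serve as an etch stop.

```python
def interlayer_loss_um(carrier_overetch_um: float, selectivity: float) -> float:
    # Interlayer thickness consumed while clearing carrier non-uniformity,
    # given a carrier:interlayer etch-rate ratio (e.g., 3:1 to 10:1).
    return carrier_overetch_um / selectivity


def interlayer_survives(thickness_um: float,
                        carrier_overetch_um: float,
                        selectivity: float) -> bool:
    # True if the interlayer is thick enough to absorb the over-etch.
    return interlayer_loss_um(carrier_overetch_um, selectivity) < thickness_um


# e.g., 3 um of carrier over-etch at 10:1 selectivity consumes only 0.3 um
# of interlayer, so a 0.5 um interlayer survives as an etch stop.
```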
Where intermediate layer thicknesses of less than a few hundred nanometers are to be removed, the removal process may be relatively slow, optimized for uniformity across the wafer, and more precisely controlled than the process employed for carrier layer removal. The CMP process employed may, for example, employ a slurry that provides very high selectivity (e.g., 100:1 to 300:1 or more) between the semiconductor (e.g., silicon) and the dielectric material (e.g., SiO) surrounding the device layer and embedded within the interlayer, e.g., as electrical isolation between adjacent device regions.
For embodiments in which the device layer is exposed by completely removing the interlayer, backside processing may begin on the exposed backside of the device layer or specific device regions therein. In some embodiments, the backside device layer processing includes further polishing or wet/dry etching through a thickness of the device layer disposed between the intermediate layer and a device region previously fabricated in the device layer, such as a source or drain region.
In some embodiments in which the carrier layer, interlayer, or device layer backside is recessed by wet and/or plasma etching, such an etching process may be a patterned etch or a material-selective etch that imparts significant non-planarity or topography into the backside surface of the device layer. As described further below, the patterning may be within a device cell (i.e., "intra-cell" patterning), or may be across device cells (i.e., "inter-cell" patterning). In some patterned etch embodiments, at least a portion of the thickness of the interlayer is employed as a hardmask for patterning of the backside device layer.
Thus, a masked interlayer etch may precede the corresponding masked device layer etch.
The processing scheme described above can produce a donor-host substrate assembly including an IC device having a revealed backside of an interlayer, a revealed backside of a device layer, a revealed backside of one or more semiconductor regions within the device layer, and/or exposed front-side metallization. Any of these revealed regions can then be subjected to additional backside processing during downstream processing.
FIG. 9 illustrates a computing device 900 according to one implementation of an embodiment of the invention. Computing device 900 houses board 902. Board 902 may include a number of components including, but not limited to, processor 904 and at least one communication chip 906. Processor 904 is physically and electrically coupled to board 902. In some implementations, the at least one communication chip 906 is also physically and electrically coupled to the board 902. In other implementations, the communication chip 906 is part of the processor 904.
Depending on its application, computing device 900 may include other components that may or may not be physically and electrically coupled to board 902. These other components include, but are not limited to: volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, graphics processors, digital signal processors, cryptographic processors, chipsets, antennas, monitors, touch screen displays, touch screen controllers, batteries, audio codecs, video codecs, power amplifiers, global positioning system (GPS) devices, compasses, accelerometers, gyroscopes, speakers, cameras, and mass storage devices (such as hard disk drives, compact discs (CDs), digital versatile discs (DVDs), and so forth).
Communication chip 906 enables wireless communication for the transfer of data to and from computing device 900.
The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that can communicate data via non-solid-state media using modulated electromagnetic radiation. The term does not imply that the associated devices do not contain any wires, although in some embodiments they may not contain any wires. The communication chip 906 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, and any other wireless protocol designated as 3G, 4G, 5G and beyond. Computing device 900 may include a plurality of communication chips 906. For example, a first communication chip 906 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 906 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.
The processor 904 of the computing device 900 includes an integrated circuit die packaged within the processor 904. In one embodiment, the integrated circuit die of processor 904 may include fork-chip transistors with dielectric or conductive ridges, such as those described herein. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in the registers and/or memory.
The communication chip 906 also includes an integrated circuit die packaged within the communication chip 906.
In an embodiment, the integrated circuit die of the communication chip 906 may include fork-chip transistors with dielectric or conductive ridges, such as those described herein.
In further implementations, another component housed within computing device 900 may include fork-chip transistors with dielectric or conductive ridges, such as those described herein.
In various implementations, computing device 900 may be a laptop computer, netbook, notebook, ultrabook, smart phone, tablet computer, personal digital assistant (PDA), ultra mobile PC, mobile phone, desktop computer, server, printer, scanner, monitor, set-top box, entertainment control unit, digital camera, portable music player, or digital video recorder. In further implementations, computing device 900 may be any other electronic device that processes data.
FIG. 10 illustrates an interposer 1000 including one or more embodiments of the present disclosure. The interposer 1000 is an intermediate substrate used to bridge a first substrate 1002 to a second substrate 1004. The first substrate 1002 may be, for example, an integrated circuit die. The second substrate 1004 may be, for example, a memory module, a computer motherboard, or another integrated circuit die. According to embodiments described herein, one or both of the first substrate 1002 and the second substrate 1004 may include fork-chip transistors having dielectric or conductive ridges. Typically, the purpose of the interposer 1000 is to spread connections to a wider pitch or to reroute connections to different connections. For example, the interposer 1000 can couple the integrated circuit die to a ball grid array (BGA) 1006, which can then be coupled to the second substrate 1004. In some embodiments, the first and second substrates 1002/1004 are attached to opposite sides of the interposer 1000.
In other embodiments, the first and second substrates 1002/1004 are attached to the same side of the interposer 1000. And in further embodiments, three or more substrates are interconnected by means of the interposer 1000.
The interposer 1000 may be formed of epoxy, glass fiber reinforced epoxy, ceramic materials, or polymeric materials such as polyimide. In further implementations, the interposer 1000 may be formed from alternative rigid or flexible materials, which may include the same materials described above for use in semiconductor substrates, such as silicon, germanium, and other Group III-V or Group IV materials.
Interposer 1000 may include metal interconnects 1008 and vias 1010, including but not limited to through-silicon vias (TSVs) 1012. The interposer 1000 may further include embedded devices 1014 that include both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices can also be formed on the interposer 1000. The devices or processes disclosed herein may be used in the manufacture of the interposer 1000 according to embodiments of the present disclosure.
Accordingly, embodiments of the present disclosure may include fork-chip transistors with dielectric or conductive ridges, and methods for fabricating fork-chip transistors with dielectric or conductive ridges.
The above description of illustrative implementations of the present disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. While specific implementations of, and examples for, the disclosure are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will appreciate.
These modifications can be made to the present disclosure in light of the above detailed description. The terms used in the appended claims should not be construed to limit the disclosure to the specific implementations disclosed in the specification and claims. Rather, the scope of the present disclosure is to be determined solely by the appended claims, which are to be construed in accordance with the established rules of claim interpretation.
Example Embodiment 1: An integrated circuit structure includes a dielectric ridge. The first transistor device includes a first vertical stack of semiconductor channels spaced apart from the first edge of the dielectric ridge. The second transistor device includes a second vertical stack of semiconductor channels spaced apart from the second edge of the dielectric ridge. The N-type gate structure is on the first vertical stack of semiconductor channels, a portion of the N-type gate structure laterally between the first edge of the dielectric ridge and the first vertical stack of semiconductor channels and in contact with the semiconductor channels of the first vertical stack.
The P-type gate structure is on the second vertical stack of semiconductor channels, a portion of the P-type gate structure laterally between the second edge of the dielectric ridge and the second vertical stack of semiconductor channels and in contact with the semiconductor channels of the second vertical stack.
Example Embodiment 2: The integrated circuit structure of Example Embodiment 1, wherein the first and second vertical stacks of semiconductor channels are first and second stacks of nanoribbons or nanowires.
Example Embodiment 3: The integrated circuit structure of Example Embodiment 1 or 2, wherein the total number of semiconductor channels in the first vertical stack of semiconductor channels is the same as the total number of semiconductor channels in the second vertical stack of semiconductor channels.
Example Embodiment 4: The integrated circuit structure of Example Embodiment 1 or 2, wherein the total number of semiconductor channels in the first vertical stack of semiconductor channels is different from the total number of semiconductor channels in the second vertical stack of semiconductor channels.
Example Embodiment 5: An integrated circuit structure includes a conductive ridge. First and second dielectric spacers are along the first and second edges of the conductive ridge, respectively. The first transistor device includes a first vertical stack of semiconductor channels adjacent to the first dielectric spacer along the first edge of the conductive ridge.
The second transistor device includes a second vertical stack of semiconductor channels adjacent to the second dielectric spacer along the second edge of the conductive ridge.
Example Embodiment 6: The integrated circuit structure of Example Embodiment 5, wherein the first transistor device is a P-type device and the second transistor device is an N-type device.
Example Embodiment 7: The integrated circuit structure of Example Embodiment 5 or 6, wherein the first and second vertical stacks of semiconductor channels are first and second stacks of nanoribbons or nanowires.
Example Embodiment 8: The integrated circuit structure of Example Embodiment 5, 6, or 7, wherein the total number of semiconductor channels in the first vertical stack of semiconductor channels is the same as the total number of semiconductor channels in the second vertical stack of semiconductor channels.
Example Embodiment 9: The integrated circuit structure of Example Embodiment 5, 6, or 7, wherein the total number of semiconductor channels in the first vertical stack of semiconductor channels is different from the total number of semiconductor channels in the second vertical stack of semiconductor channels.
Example Embodiment 10: The integrated circuit structure of Example Embodiment 5, 6, 7, 8, or 9, further comprising: a first gate structure on the first vertical stack of semiconductor channels, the first gate structure comprising a first gate electrode and a first gate dielectric; and a second gate structure on the second vertical stack of semiconductor channels, the second gate structure comprising a second gate electrode and a second gate dielectric.
Example Embodiment 11: A computing device includes a board and a component coupled to the board. The component includes an integrated circuit structure including a dielectric ridge. The first transistor device includes a first vertical stack of semiconductor channels spaced apart from the first edge of the dielectric ridge.
The second transistor device includes a second vertical stack of semiconductor channels spaced apart from the second edge of the dielectric ridge. The N-type gate structure is on the first vertical stack of semiconductor channels, a portion of the N-type gate structure laterally between the first edge of the dielectric ridge and the first vertical stack of semiconductor channels and in contact with the semiconductor channels of the first vertical stack. The P-type gate structure is on the second vertical stack of semiconductor channels, a portion of the P-type gate structure laterally between the second edge of the dielectric ridge and the second vertical stack of semiconductor channels and in contact with the semiconductor channels of the second vertical stack.
Example Embodiment 12: The computing device of Example Embodiment 11, further comprising: a memory coupled to the board.
Example Embodiment 13: The computing device of Example Embodiment 11 or 12, further comprising: a communication chip coupled to the board.
Example Embodiment 14: The computing device of Example Embodiment 11, 12, or 13, further comprising: a camera coupled to the board.
Example Embodiment 15: The computing device of Example Embodiment 11, 12, 13, or 14, wherein the component is a packaged integrated circuit die.
Example Embodiment 16: A computing device includes a board and a component coupled to the board. The component includes an integrated circuit structure having a conductive ridge. First and second dielectric spacers are along the first and second edges of the conductive ridge, respectively. The first transistor device includes a first vertical stack of semiconductor channels adjacent to the first dielectric spacer along the first edge of the conductive ridge.
The second transistor device includes a second vertical stack of semiconductor channels adjacent to the second dielectric spacer along the second edge of the conductive ridge.
Example Embodiment 17: The computing device of Example Embodiment 16, further comprising: a memory coupled to the board.
Example Embodiment 18: The computing device of Example Embodiment 16 or 17, further comprising: a communication chip coupled to the board.
Example Embodiment 19: The computing device of Example Embodiment 16, 17, or 18, further comprising: a camera coupled to the board.
Example Embodiment 20: The computing device of Example Embodiment 16, 17, 18, or 19, wherein the component is a packaged integrated circuit die.
An inductor with multiple loops, and semiconductor devices with such an inductor integrated thereon, are proposed. In one aspect, the semiconductor device may include a die on a substrate and an inductor on the die, in which the inductor comprises a wire with multiple non-planar loops above the die. In another aspect, the semiconductor device may include a plurality of posts on a die on a substrate, and an inductor on the die. The inductor may include a wire looped around the plurality of posts such that the inductor includes multiple non-planar loops.
CLAIMS
WHAT IS CLAIMED IS:
1. A semiconductor device, comprising:
a substrate;
a die on the substrate; and
an inductor on the die,
wherein the inductor comprises a wire with multiple non-planar loops above the die.
2. The semiconductor device of claim 1, wherein the inductor has an air core.
3. The semiconductor device of claim 2, further comprising a cap on the die and surrounding the inductor.
4. The semiconductor device of claim 3, wherein an inside of the cap is unfilled other than the inductor.
5. The semiconductor device of claim 1, further comprising a mold on the die, the mold encapsulating the inductor.
6. The semiconductor device of claim 1, further comprising a post on the die, wherein the inductor is looped around the post.
7. The semiconductor device of claim 1, further comprising a plurality of posts on the die, wherein the inductor is looped around the plurality of posts.
8. The semiconductor device of claim 7, wherein the inductor does not completely wrap around any individual post of the plurality of posts.
9. The semiconductor device of claim 7, wherein at least one loop of the inductor does not completely wrap around any individual post of the plurality of posts.
10. The semiconductor device of claim 7,
wherein the plurality of posts comprise first and second posts, and
wherein the wire is looped around the first and second posts in a figure 8 formation.
11. The semiconductor device of claim 7,
wherein the plurality of posts comprise a first plurality of posts, the wire is a first wire, and the inductor is a first inductor,
wherein the plurality of posts also comprise a second plurality of posts, and
wherein the semiconductor device further comprises a second inductor on the die, the second inductor comprising a second wire looped around the second plurality of posts, and the second inductor having multiple non-planar loops above the die.
12. The semiconductor device of claim 11, wherein the first inductor vertically intersects with the second inductor.
13. The semiconductor device of claim 11, wherein the first inductor is inside the second inductor.
14. The semiconductor device of claim 7, further comprising a contact on the die,
wherein the contact is configured to be electrically coupled to one of an input pin, an output pin, a power pin, and a ground pin of the die, and
wherein the inductor surrounds the contact.
15. The semiconductor device of claim 7, further comprising first and second bond pads on the die,
wherein first and second ends of the wire terminate at the first and second bond pads, respectively.
16. The semiconductor device of claim 7,
wherein the plurality of posts are conductive posts and the wire is an insulated wire, or
wherein the plurality of posts are non-conductive posts and the wire is a non-insulated wire.
17. A method of fabricating a semiconductor device, the method comprising:
providing a die on a substrate; and
forming an inductor on the die,
wherein forming the inductor comprises looping a wire such that the inductor includes multiple non-planar loops above the die.
18. The method of claim 17, wherein forming the inductor comprises looping the wire such that the inductor has an air core.
19. The method of claim 18, further comprising surrounding the inductor with a cap on the die.
20. The method of claim 18, further comprising encapsulating the inductor with a mold on the die.
21. The method of claim 17, further comprising forming a post on the die, wherein forming the inductor comprises looping the wire around the post.
22. The method of claim 17, further comprising forming a plurality of posts on the die,
wherein forming the inductor comprises looping the wire around the plurality of posts.
23. The method of claim 22, wherein forming the inductor comprises looping the wire around the plurality of posts such that at least one loop of the inductor does not completely wrap around any individual post of the plurality of posts.
24. The method of claim 22,
wherein forming the plurality of posts comprises forming first and second posts, and
wherein forming the inductor comprises looping the wire around the first and second posts in a figure 8 formation.
25. The method of claim 22,
wherein the plurality of posts comprise a first plurality of posts, the wire is a first wire, and the inductor is a first inductor,
wherein the plurality of posts also comprise a second plurality of posts, and
wherein the method further comprises forming a second inductor on the die by looping a second wire around the second plurality of posts such that the second inductor includes multiple non-planar loops above the die.
26. The method of claim 25, wherein forming the second inductor comprises forming the second inductor so as to vertically intersect with the first inductor.
27. The method of claim 22, further comprising forming a contact on the die and electrically coupled to one of an input pin, an output pin, a power pin, and a ground pin of the die,
wherein forming the inductor comprises forming the inductor so as to surround the contact.
28. A semiconductor device, comprising:
a substrate;
a die on the substrate;
an inductor on the die; and
means for terminating the inductor on the die,
wherein the inductor comprises a wire with multiple non-planar loops above the die.
29. The semiconductor device of claim 28, further comprising a plurality of posts on the die,
wherein the inductor is looped around the plurality of posts, and
wherein at least one loop of the inductor does not completely wrap around any individual post of the plurality of posts.
30. The semiconductor device of claim 29,
wherein the plurality of posts comprise a first plurality of posts, the wire is a first wire, and the inductor is a first inductor,
wherein the plurality of posts also comprise a second plurality of posts, and
wherein the semiconductor device further comprises a second inductor on the die, the second inductor comprising a second wire looped around the second plurality of posts, and the second inductor having multiple non-planar loops above the die.
SOLENOID INDUCTOR

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The present Application for Patent claims the benefit of Provisional Patent Application No. 62/252,567 entitled "SOLENOID INDUCTOR WITH AIR CORE" filed November 8, 2015, pending, and assigned to the assignee hereof and hereby expressly incorporated herein by reference in its entirety.

Field of Disclosure

[0002] One or more aspects of the present disclosure relate generally to an inductor, and particularly to a solenoid inductor on a die.

Background

[0003] Existing thin film processes are insufficient for generating 3D inductors for high performance. For example, the size of a conventional Near Field Communication (NFC) antenna 100 illustrated in FIG. 1, which is essentially an inductor, is 50 mm x 85 mm (4,250 mm²). For applications such as smart phones and other mobile devices, this represents a significant amount of surface area.

SUMMARY

[0004] This summary identifies features of some example aspects, and is not an exclusive or exhaustive description of the disclosed subject matter. Whether features or aspects are included in, or omitted from, this Summary is not intended as indicative of the relative importance of such features. Additional features and aspects are described, and will become apparent to persons skilled in the art upon reading the following detailed description and viewing the drawings that form a part thereof.

[0005] A first aspect may be directed to a semiconductor device. The semiconductor device may comprise a substrate, a die on the substrate, and an inductor on the die. The inductor may comprise a wire with multiple non-planar loops above the die.

[0006] A second aspect may be directed toward a method of forming a semiconductor device. The method may comprise providing a substrate, providing a die on the substrate, and forming an inductor on the die.
Forming the inductor may comprise looping a wire such that the inductor includes multiple non-planar loops above the die.

[0007] A third aspect may be directed toward a semiconductor device. The semiconductor device may comprise a substrate, a die on the substrate, an inductor on the die, and means for terminating the inductor also on the die. The inductor may comprise a wire with multiple non-planar loops above the die.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] The accompanying drawings are presented to aid in the description of embodiments disclosed and are provided to show illustrations of the embodiments and not limitation thereof.

[0009] FIG. 1 illustrates a conventional Near Field Communication antenna;
[0010] FIG. 2 illustrates example embodiments of inductors;
[0011] FIGs. 3A - 3F illustrate stages of an example method of fabricating a device with one or more inductors on chip;
[0012] FIGs. 4A - 4B illustrate example embodiments of inductors formed with a plurality of posts;
[0013] FIGs. 5A - 5E illustrate stages of an example method of fabricating a device with inductors formed on a die using a plurality of posts;
[0014] FIGs. 6A - 6D illustrate more example embodiments of inductors formed with a plurality of posts;
[0015] FIGs. 7A - 7F illustrate stages of an example process to fabricate the semiconductor device with intersecting inductors;
[0016] FIG. 8 illustrates a flow chart of an example method of fabricating a device; and
[0017] FIG. 9 illustrates examples of electronic devices with a device having inductor(s) integrated therein.

DETAILED DESCRIPTION

[0018] Aspects are disclosed in the following description and related drawings directed to specific embodiments of one or more aspects of the present disclosure. Alternate embodiments may be devised without departing from the scope of the discussion. Additionally, well-known elements will not be described in detail or will be omitted so as not to obscure the relevant details.
[0019] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments" does not require that all embodiments of the disclosed subject matter include the discussed feature, advantage or mode of operation.

[0020] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0021] Further, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, the sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter.
In addition, for each of the embodiments described herein, the corresponding form of any such embodiments may be described herein as, for example, "logic configured to" perform the described action.

[0022] As indicated above, there are limitations with the current state of inductor-on-chip technology. Conventional thin film processes cannot generate 3D inductors for high performance. However, in one or more aspects, solenoid inductors on chips that alleviate some or all limitations of the conventional inductors on chip are proposed.

[0023] FIG. 2 illustrates non-limiting example embodiments of 3D inductors such as solenoid inductors. In this figure, a device 200 (e.g., semiconductor device) with two inductors on a chip or die 210 on a substrate 205 (e.g., a PCB) is shown. The inductor 250 on the left may comprise a wire 240 wound or looped around a post 220. The ends of the wire 240 terminate on bond pads 230. The bond pads 230 may be examples of means for terminating the inductors. The inductor 250 on the right may comprise a looped wire 240 that does not surround any post 220, i.e., the right inductor 250 may have an air core.

[0024] For each inductor 250, it is preferred that the wire 240 be looped vertically, i.e., the inductor 250 may have multiple non-planar loops. This is unlike the surface-mounted loops of conventional inductors such as the NFC antenna illustrated in FIG. 1. Note that the loops of the conventional antenna 100 are all on a single plane. However, each inductor 250 of FIG. 2 may comprise multiple non-planar loops including a first loop and a second loop in which the first loop is not on a same plane as the second loop. Also, the first and second loops may vertically overlap with each other at least partially. The ends of the wire 240 may terminate on bond pads 230.
A non-exhaustive list of advantages includes:
• Reduction of losses in the magnetic field;
• Very high inductor performance;
• Magnetic field in the vertical direction limits coupling with other inductors on the die or on the substrate; and
• Inductance can be tuned on the die.

[0025] FIGs. 3A - 3F illustrate side views of stages of a non-limiting example method to fabricate a semiconductor device with one or more inductors on a chip or die. Where possible, the element numberings of FIG. 2 will be carried over such that the correlation between FIG. 2 and FIGs. 3A - 3F is made clearer. As seen in FIG. 3A, one or more bond pads 230 may be formed on the die 210. The bond pads 230 may be assumed to be conductive and serve as terminating points of the inductors 250 (see FIG. 3C). Also, the bond pads 230 may be electrically coupled to the circuitry of the die 210 (not shown).

[0026] As seen in FIG. 3B, one or more posts 220 may be formed on the die 210. The posts 220 may be conductive or non-conductive. The posts 220 may also be formed from permeable materials. Then as seen in FIG. 3C, one or more inductors 250 may be formed on the die 210 by looping the wires 240 around the posts 220. Each of the inductors 250 may comprise multiple non-planar loops. The posts 220 may serve as guides around which the wires 240 may be looped or wound. It should be noted that for each inductor 250, the two ends of the corresponding wire 240 terminate at different bond pads 230. The wires 240 may be insulated or non-insulated. If the posts 220 are conductive, then insulated wires 240 are preferred. If the posts 220 are non-conductive, then non-insulated wires 240 may be used. Of course, it is also possible to wind insulated wires 240 around the non-conductive posts 220.

[0027] The inductors 250 illustrated in FIG. 3C may be satisfactory for some applications. In other words, the fabrication of the semiconductor device 200 may stop at this stage (compare with the left inductor 250 in FIG. 2).
However, the fabrication may proceed to a stage illustrated in FIG. 3D. In this stage, the posts 220 may be removed so that the inductors 250 have air cores. The inductors 250 with air cores of FIG. 3D may offer improved performance over the inductors of FIG. 3C with the posts 220. Note that due to the loops, the magnetic field will be vertical (as illustrated by an arrow within the far right inductor). The vertically oriented magnetic field is also true for FIG. 3C. This will help to limit coupling among the inductors 250 on the die 210. While FIG. 3D illustrates an example in which all posts 220 are removed, this is not a requirement. That is, one or more posts 220 may remain.

[0028] The fabrication may also stop at the stage illustrated in FIG. 3D. But as seen in FIG. 3E, the fabrication may proceed to a stage in which the inductors 250 are capped with caps 370 for additional protection. In one aspect, the cap 370 may simply surround the inductor 250 such that the inside of the cap 370 is unfilled other than with the inductor 250. In another aspect, instead of surrounding the inductors 250 with the caps 370, the fabrication may proceed to a stage in which the inductors 250 are protected by being encapsulated with a mold 360 as seen in FIG. 3F.

[0029] While not shown, a variety of inductor combinations are possible. For example, when there are multiple inductors 250, there can be a combination of inductors 250 with and without the posts 220. As another example, some inductors 250 may be capped with the caps 370, some may be encapsulated with the molds 360, while yet others may have neither. Also, it is emphasized that the inductors 250 are unlike the conventional inductors with surface-mounted planar loops. For example, the loops of the inductors 250 may be on different planes. Also, the loops may at least partially overlap vertically. That is, one loop of the inductor 250 need not be entirely inside of another loop of the same inductor 250.

[0030] In FIGs.
2 and 3A - 3F, each inductor 250 is shown as being formed by looping a wire 240 around a single post 220 multiple times. However, other inductors may be formed by looping a wire around multiple (two or more) posts. FIGs. 4A - 4B illustrate non-limiting example embodiments of 3D inductors where an inductor may be formed using multiple posts. In FIG. 4A, the semiconductor device 400 may comprise a die 410 on a substrate (substrate not shown), a plurality of posts 420 on the die 410, and one or more inductors 450 formed on the die 410. At least one inductor 450 may comprise a wire 440 looped around the plurality of posts 420.[0031] In this particular instance, the inductor 450 on the left will be described. As seen, the inductor 450 may comprise the wire 440 looped around two posts 420. As seen, the wire 440 may be looped multiple times around the posts 420. Also, the multiple loops of the inductor 450 may be non-planar. The two ends of the inductor 450, i.e., the two ends of the corresponding wire 440, may terminate at two bond pads 430 - first and second bond pads 430-1, 430-2. The inductor 450 may be encapsulated with a mold 460.[0032] FIG. 4B illustrates another embodiment of a device 400 with inductors 450 formed using a plurality of posts 420. The device of FIG. 4B is similar to the device of FIG. 4A. But instead of the mold 460, the inductors 450 of the device 400 may be capped with caps 470. In an aspect, other than the inductor 450, the insides of the caps 470 may be unfilled.[0033] While not shown, it is also contemplated that in some embodiments, the inductors 450 need not be provided with either the cap 470 or the mold 460. Also for FIGs. 4A and/or 4B, the posts 420 may be removed in some embodiments such that the core of the inductor 450 is air.[0034] FIGs. 5A - 5E illustrate stages of a non-limiting example method of fabricating a device with inductors formed on a die using multiple posts. Where possible, the element numberings of FIGs. 
4A and 4B will be carried over. As seen in FIG. 5A, a plurality of bond pads 430 may be formed on a die 410. The bond pads 430 may be assumed to be conductive and serve as terminating points of inductors 450. Also, the bond pads 430 may be electrically coupled to the circuitry of the die 410 (not shown).

[0035] As seen in FIG. 5B, a plurality of posts 420 may be formed on the die 410. Then as seen in FIG. 5C, an inductor 450 may be formed on the die 410 by looping a wire 440 around the plurality of posts 420. Again, the inductor 450 may comprise multiple loops. Also preferably, the loops may be vertically oriented or non-planar. That is, at least first and second loops of the inductor 450 may be on different planes. The first and second loops may also intersect vertically at least partially. The plurality of posts 420 may be conductive or non-conductive. The two ends of the wire 440 corresponding to the inductor 450 may terminate at the first and second bond pads 430-1, 430-2. The wire 440 may be insulated or non-insulated. If the posts 420 are conductive, the wire 440 may be insulated. If the posts 420 are non-conductive, the wire 440 can be insulated or non-insulated.

[0036] For some applications, the inductors 450 illustrated in FIG. 5C may be satisfactory, and thus, the fabrication of the semiconductor device 400 may stop at this stage. However, for other applications, the fabrication may proceed to the stage illustrated in FIG. 5D in which the inductor 450 is encapsulated with a mold 460 (with or without the posts 420). See also FIG. 4A. Alternatively, the fabrication may proceed to the stage illustrated in FIG. 5E in which the inductor 450 is capped with a cap 470 instead of being encapsulated. See also FIG. 4B. Again, the inside of the cap 470 may be unfilled except for the inductor 450 (with or without the posts 420).

[0037] A variety of inductor combinations are possible.
For example, in one aspect as mentioned above, the process may stop after the stage illustrated in FIG. 5C. In another aspect, the process may proceed to removing the posts 420 (not shown) after the stage illustrated in FIG. 5C, and the fabrication process may then stop. Alternatively, regardless of whether the posts 420 are removed or not, the fabrication process may then proceed to providing the cap 470 or the mold 460 (not shown).

[0038] FIGs. 4A - 4B and 5A - 5E illustrate side views of devices 400 with implementations of inductors 450 formed using multiple posts 420. FIGs. 6A - 6D illustrate top views of some specific implementations of different types of inductors that may be formed utilizing multiple posts 420. Where possible, the element numberings of FIGs. 4A, 4B and 5A - 5D will be carried over. Also, in FIGs. 6A - 6D, the mold 460 and the cap 470 will not be shown so as to minimize clutter. But it should be realized that the packaged devices of some embodiments may include the mold 460 and/or the cap 470. FIGs. 6A - 6D can be viewed as illustrating top views of some particular implementations of the semiconductor devices 400 corresponding to the side view of FIG. 5C in which an inductor 450 may be formed by looping a wire 440 around a plurality of posts 420.

[0039] FIG. 6A illustrates a semiconductor device 400 with an inductor 450 that may be used as a Near Field Communication (NFC) antenna and/or used in applications such as wireless charging. In this figure, four posts 420 are shown and the wire 440 may be looped multiple times around the four posts 420. Note that the wire 440 may be non-planarly looped around any number of posts 420 (e.g., three or more) for such applications. The first and second ends of the wire 440 may terminate at the first and second bond pads 430-1, 430-2.

[0040] In this particular example, none of the loops of the inductor 450 completely wraps around any individual post 420. However, this is not a requirement.
In an aspect, the inductor 450 may include at least one loop that does not completely wrap around any of the individual posts 420 of the plurality of posts 420 (not shown).

[0041] For NFC applications (e.g., operations at 13.56 MHz), the configuration of FIG. 6A can provide the necessary inductance L (e.g., between 1 μH and 3.6 μH) while requiring a smaller area than conventional NFC antennas. For example, the inductance of a rectangular loop Lrect may be approximated by equation (1) below. Then by providing an inductor 450 with the following characteristics (loops = 6.5, area = 11 mm x 11 mm, wire = 10 μm Cu), an inductance L ~ 2 μH can be achieved. In other words, sufficient inductance can be achieved while occupying a significantly smaller area (11 mm x 11 mm) than the conventional NFC antenna (50 mm x 85 mm, see FIG. 1).

[0042] FIG. 6B illustrates a semiconductor device 400 with an inductor 450 formed by looping a wire 440 in a figure 8 formation. As seen, two posts 420 are shown, which may also be referred to as the first and second posts 420-1, 420-2. The first and second ends of the wire 440 may terminate at the first and second bond pads 430-1, 430-2. The wire 440 may be looped multiple times around the first and second posts 420-1, 420-2 such that the loops are non-planar. With such a configuration, an upward oriented magnetic field and a downward oriented magnetic field may be generated. For example, a magnetic field loop may be realized.

[0043] While not shown, more than two posts 420 may be utilized. For example, one or more posts 420 may be provided in addition to the first and second posts 420-1, 420-2 such that multiple figure 8 formations can be formed using the single wire 440. Again, the inductor 450 may include at least one loop that does not completely wrap around any individual post 420.

[0044] FIG. 6C illustrates a semiconductor device 400 with an inductor 450 that can be used to detect power. The inductor 450 of FIG.
6C may be looped around an input/output connection 650, an example of which is a contact. For example, the contact 650 may be configured to electrically couple to any one of an input pin, an output pin, a power pin, or a ground pin of the die 410. The contact 650 may be formed on a surface of the die 410. The contact 650 may be a solder ball in one or more embodiments.

[0045] With the inductor 450 of FIG. 6C, it is possible to detect the electrical switching that takes place at the contact 650 (e.g., when power is turned on/off, when logic switches from low to high and vice versa). The inductor 450 may be formed by looping the wire 440 multiple times around the plurality of posts 420 so as to surround the contact 650. The ends of the wire 440 may terminate at bond pads 430-1, 430-2. While only three posts 420 are shown, the number of posts 420 can be greater. Note that the shape of the inductor loop can better conform to the shape of the contact 650 as the number of posts 420 grows. At least one loop may be such that it does not completely wrap around any individual post 420.

[0046] FIG. 6D illustrates a semiconductor device with two inductors 450-1, 450-2 that are nearby each other. In this figure, the plurality of posts 420 may be viewed as comprising a first plurality of posts 420-1, 420-2 and a second plurality of posts 420-3, 420-4. The first inductor 450-1 may be formed by looping a first wire 440-1 multiple times around the first plurality of posts 420-1, 420-2, e.g., so as to form non-planar loops. The ends of the first wire 440-1 may terminate at bond pads 430-1, 430-2. Also, the second inductor 450-2 may be formed by looping a second wire 440-2 multiple times around the second plurality of posts 420-3, 420-4, e.g., so as to form non-planar loops. The ends of the second wire 440-2 may terminate at bond pads 430-3, 430-4.

[0047] In FIG. 6D, the two nearby inductors 450-1, 450-2 are shown as vertically intersecting.
With this configuration, the magnetic fields can be isolated or canceled as desired. However, it is not a requirement that the inductors 450-1, 450-2 vertically intersect. For example, while not shown, one inductor (e.g., first inductor 450-1) may be placed inside another inductor (e.g., second inductor 450-2). The two inductors 450-1, 450-2 can be placed sufficiently near each other so that some coupling can take place (e.g., for magnetic field isolation and/or cancellation). Note that the amount of coupling can be controlled. Also, one or both of the first and/or second plurality of posts 420 can comprise more than two posts 420 (not shown). In addition, there can be more than two inductors 450 placed near one another (not shown).

[0048] FIGs. 7A - 7F illustrate some stages of a non-limiting example process to fabricate the semiconductor device 400 illustrated in FIG. 6D. FIG. 7A illustrates the first plurality of posts 420-1, 420-2, the second plurality of posts 420-3, 420-4, and the bond pads 430-1, 430-2, 430-3, 430-4 formed on the die 410. In FIG. 7B, the first wire 440-1 is illustrated as being looped around the first plurality of posts 420-1, 420-2 to form the first inductor 450-1. The first wire 440-1 may be bonded to the bond pad 430-1 near the post 420-1 and looped outside of the post 420-2. As seen in FIG. 7C, the first wire 440-1 may continue around the post 420-1 and above the portion of the first wire 440-1 previously shown in FIG. 7B in a figure 8 formation. The first wire 440-1 may be looped multiple times in this figure 8 formation (see also inductor 450 of FIG. 6B) to where it is bonded to the bond pad 430-2 to complete the first inductor 450-1.

[0049] In a similar way, the second inductor 450-2 may be formed. As seen in FIG. 7D, the second wire 440-2 may be bonded to the bond pad 430-3 near the post 420-3 and looped around outside of the post 420-4. As seen in FIG.
7E, the second wire 440-2 may continue around the post 420-3 and above the portion of the second wire 440-2 previously shown in FIG. 7D, again in a figure 8 formation. The second wire 440-2 may be looped multiple times in this figure 8 formation to where it is bonded to the bond pad 430-4 to complete the second inductor 450-2. The second wire 440-2 may be above the first wire 440-1.

[0050] FIG. 7F illustrates a side view of a cross section of the semiconductor device along the line A-A of FIG. 7E. Note that the loops of the second wire 440-2 (illustrated as dots) are above the loops of the first wire 440-1. In this side view, the first wire 440-1 (corresponding to the first inductor 450-1) is shown as having multiple non-planar loops. Similarly, the second wire 440-2 (corresponding to the second inductor 450-2) is shown as having multiple non-planar loops.

[0051] Regarding the inductors 450 formed by utilizing multiple posts 420, the wire 440 need not completely wrap around any individual post 420. Also, the loops may be consistent. That is, the loops of the inductor 450 may vertically overlap with each other. In this way, the magnetic field can be made more uniform within the core of the inductor 450. In one or more aspects, when an inductor 450 is formed using a plurality of posts 420, it can be said that for at least one loop of the inductor 450, the wire 440 corresponding to the inductor 450 need not completely wrap around any individual post 420. It can also be said that at least one loop of the inductor 450 may vertically overlap with at least one other loop of the inductor 450.

[0052] FIG. 8 illustrates a flow chart of a non-limiting example method of fabricating a device such as the devices 200, 400. It should be noted that not all illustrated blocks of FIG. 8 need to be performed, i.e., some blocks may be optional. Also, the numerical references to the blocks of FIG.
8 should not be taken as requiring that the blocks be performed in a certain order.

[0053] In block 810, a die 210, 410 may be provided on a substrate 205 such as a PCB. In block 820, one or more bond pads 230, 430 may be formed on the die 210, 410. FIGs. 3A and 5A may correspond to the block 820. In block 830, one or more posts 220, 420 may be formed on the die 210, 410. FIGs. 3B and 5B may correspond to the block 830.

[0054] In block 840, one or more inductors 250, 450 may be formed. FIGs. 3C and 5C may correspond to the block 840. An inductor 250, 450 may be formed by looping a wire 240, 440 such that the inductor 250, 450 includes multiple non-planar loops above the die 210, 410. The inductor 250 may be formed by looping the wire 240 around a single post 220 as seen in FIG. 3C.

[0055] The inductor 450 may be formed by looping the wire 440 around multiple posts 420 as seen in FIG. 5C. Specific example implementations are illustrated in FIGs. 6A (e.g., NFC antenna), 6B (e.g., figure 8 loops) and 6C (e.g., power detection). In an aspect, at least one loop of the inductor 450 need not completely wrap around any individual post 420. The fabrication method 800 may stop after the block 840.

[0056] The method 800 may also continue in block 860 in which the posts 220, 420 may be removed. FIG. 3D may correspond to block 860. This block is optional in that the posts 220, 420 need not be removed. If the posts 220, 420 are removed, then the inductor 250, 450 may have an air core. The fabrication method 800 may stop after the block 860.

[0057] In block 870, the inductor 250, 450 may be surrounded with a cap 370, 470. FIGs. 3E and 5E may correspond to block 870. Alternatively, in block 880, the inductor 250, 450 may be encapsulated with a mold 360, 460. FIGs. 3F and 5D may correspond to block 880.

[0058] If a power detection inductor is desired (see FIG.
6C), the method 800 in block 835 may form a contact 650 on the die 410, and the inductor 450 may be formed in block 840 to surround the contact 650. The contact 650 may be coupled to any one of an input pin, an output pin, a power pin, and a ground pin of the die 410.

[0059] If multiple inductors are desired (see FIGs. 6D, 7A - 7F), then in addition to forming the first inductor 450-1 in block 840, the method 800 may form the second inductor 450-2 in block 845. For example, the second inductor 450-2 may be formed by looping a second wire 440 around the second plurality of posts 420-3, 420-4. The second inductor 450-2 may include multiple non-planar loops above the die 410. The second inductor 450-2 may also vertically intersect with the first inductor 450-1.

[0060] FIG. 9 illustrates various electronic devices that may be integrated with any of the aforementioned devices 200, 400 that include inductors 250, 450. For example, a mobile phone device 902, a laptop computer device 904, and a fixed location terminal device 906 may include a device package 900 as described herein. The device package 900 may be, for example, any of the integrated circuits, dies, integrated devices, integrated circuit devices, device packages, semiconductor devices, package-on-package devices, and so on. The devices 902, 904, 906 illustrated in FIG. 9 are merely exemplary.
Other electronic devices may also feature the device 200, 400 including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, global positioning system (GPS) enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers, computers, wearable devices, servers, routers, electronic devices implemented in automotive vehicles (e.g., autonomous vehicles), or any other device that stores or retrieves data or computer instructions, or any combination thereof.[0061] Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0062] Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and processes have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. 
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present technology described herein.

[0063] The methods, sequences, and/or algorithms described in connection with the implementations disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

[0064] Accordingly, an implementation of the technology described herein can include a computer-readable medium embodying a method of manufacturing a semiconductor device. Accordingly, the technology described herein is not limited to the illustrated examples, and any means for performing the functionality described herein are included in implementations of the technology described herein.

[0065] While the foregoing disclosure shows illustrative implementations of the technology described herein, it should be noted that various changes and modifications could be made herein without departing from the scope of the technology described herein as defined by the appended claims. The functions and/or actions of the method claims in accordance with the implementations of the technology described herein need not be performed in any particular order. Furthermore, although elements of the technology described herein may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
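As a numerical cross-check on the NFC inductor sizing quoted in paragraph [0041] above (6.5 loops over an 11 mm x 11 mm area of 10 μm Cu wire, yielding roughly 2 μH), the sketch below estimates the inductance of a square multi-turn coil. This is not the patent's equation (1), which is not reproduced in this text; it assumes instead the well-known modified-Wheeler approximation for a square coil, with an assumed inner dimension close to the outer one (the thin 10 μm wire keeps the winding narrow). It lands on the same order of magnitude as the value quoted above.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m


def square_loop_inductance(n_turns, d_out_m, d_in_m):
    """Modified-Wheeler approximation for a square multi-turn coil:
    L = K1 * mu0 * n^2 * d_avg / (1 + K2 * rho), with K1 = 2.34 and
    K2 = 2.75 for a square layout."""
    k1, k2 = 2.34, 2.75
    d_avg = (d_out_m + d_in_m) / 2
    rho = (d_out_m - d_in_m) / (d_out_m + d_in_m)  # fill ratio
    return k1 * MU0 * n_turns**2 * d_avg / (1 + k2 * rho)


# Dimensions from the text: 6.5 loops, 11 mm x 11 mm area.
# d_in is an assumption (thin wire -> inner dimension near the outer one).
L = square_loop_inductance(n_turns=6.5, d_out_m=11e-3, d_in_m=10.8e-3)
print(f"L ~ {L * 1e6:.2f} uH")  # on the order of 1 uH
```

With these assumed dimensions the estimate is on the order of 1 μH, consistent with the 1 μH to 3.6 μH range stated for NFC operation, and far less area than the 50 mm x 85 mm conventional antenna of FIG. 1.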
Systems and methods are disclosed for coordinating resource usage between applications in a tightly sandboxed environment. A scheduling indicator can be left in a system file that multiple applications can use to align their requests for a system resource. Alternatively, IP loopback can be used to pass a scheduling indicator between applications that are otherwise sandboxed. If neither of these approaches is possible, then applications can schedule system resource requests using a common algorithm that selects a start time and optionally a period of subsequent system resource requests based on a common piece of information such as a system clock signal or IP address. In these ways the total amount of time during which the system resource is being utilized by various applications can be reduced, thus reducing power consumption and network activity.
WHAT IS CLAIMED IS: 1. A method comprising: reading, by a second application, a scheduling indicator written to a publicly-accessible file by a first application, wherein the scheduling indicator indicates a schedule of one or more requests for a system resource by the first application; scheduling a request for the system resource by the second application, the request for the system resource by the second application being time-aligned with at least one of the one or more requests for the system resource by the first application; and executing the request for the system resource by the second application. 2. The method of Claim 1, wherein the request for the system resource by the second application is time-aligned with a first of the one or more requests for the system resource by the first application. 3. The method of Claim 1, wherein the request for the system resource by the second application is time-aligned with a second of the one or more requests for the system resource by the first application. 4. The method of Claim 3, wherein a second request for the system resource by the second application is time-aligned with a third of the one or more requests for the system resource by the first application. 5. The method of Claim 1, wherein the request for the system resource by the second application is time-aligned with a third of the one or more requests for the system resource by the first application. 6. The method of Claim 1, wherein time-aligned includes occurring concurrently. 7. The method of Claim 1, wherein time-aligned includes occurring substantially at the same time. 8. The method of Claim 1, wherein time-aligned includes occurring at different times. 9. The method of Claim 8, wherein time-aligned includes occurring with a common periodicity. 10. The method of Claim 9, wherein the common periodicity is a least common multiple of a periodicity of each of two or more requests for the system resource. 11. 
The method of Claim 8, wherein time-aligned includes occurring sequentially. 12. The method of Claim 1, wherein time-aligned includes occurring with a common periodicity. 13. The method of Claim 1, wherein the scheduling indicator is a modification of an existing scheduling indicator. 14. The method of Claim 1, wherein the scheduling indicator is new information written to the publicly-accessible file. 15. The method of Claim 1, further comprising writing another scheduling indicator to the publicly-accessible file, wherein the another scheduling indicator indicates a schedule of the request for the system resource by the second application. 16. The method of Claim 1, wherein the first and second applications are sandboxed to prevent direct communication between the first and second application. 17. The method of Claim 1, wherein the scheduling is further performed by an operating system. 18. A system comprising: a first application scheduled to make one or more requests for a system resource; a memory that stores a first schedule of the one or more requests for the system resource by the first application; a second application; and a scheduling module of the second application, that: reads the first schedule from the memory; schedules a request for the system resource by the second application according to a second schedule based on the first schedule such that the request for the system resource by the second application is time-aligned with at least one of the one or more requests for the system resource by the first application; and records the second schedule to the memory. 19. 
The system of Claim 18, further comprising a scheduling module of a third application, that: reads the first schedule from the memory; schedules a request of the system resource by the third application according to a third schedule based on at least the first schedule such that the request of the system resource by the third application is time-aligned with at least one of the one or more requests for the system resource by the first application; and records the third schedule to the memory. 20. The system of Claim 19, wherein the scheduling module of the third application further: reads the first and second schedule from the memory; schedules a request of the system resource by the third application according to a third schedule based on the first and second schedules such that the request of the system resource by the third application is time-aligned with at least one of the one or more requests for the system resource by the first application and the request for the system resource by the second application; and records the third schedule to the memory. 21. The system of Claim 18, wherein the request for the system resource by the second application is scheduled in an unused timeslot of the first schedule. 22. The system of Claim 18, wherein the system resource is selected from the group consisting of: a processing resource, a memory resource, and a network resource. 23. The system of Claim 18, wherein the first schedule is publicly accessible. 24. The system of Claim 23, wherein the first schedule is a system file. 25. 
An apparatus comprising: a publicly-accessible file; a first application making one or more requests for a system resource, determining if the publicly-accessible file exists, if not then creating the publicly-accessible file, modifying the publicly-accessible file to include data usable by one or more other applications to coordinate timing of their requests for the system resource with at least one of the one or more requests for the system resource made by the first application; and a second application reading the publicly-accessible file and coordinating timing of a request for the system resource made by the second application with the timing of at least one of the one or more requests for the system resource made by the first application, the apparatus precluding direct communication between the first and second applications. 26. The apparatus of Claim 25, wherein the coordinating considers a duration of the one or more requests for the system resource made by the first application and the request for the system resource made by the second application. 27. The apparatus of Claim 25, wherein the coordinating considers an expected duration of use of the system resource associated with the request for the system resource made by the second application. 28. 
A non-transitory, tangible computer readable storage medium, encoded with processor readable instructions to perform a method for coordinating requests for system resources of a plurality of applications running in an operating system that sandboxes applications from communicating with each other, the method comprising: accessing a common piece of information; scheduling requests for the system resource by first and second applications, where at least one request of the system resource by the first application is time-aligned with at least one request for the system resource by the second application, the scheduling being derived via algorithm from the common piece of information; and executing the requests of the system resource by the first and second applications. 29. The non-transitory, tangible computer readable storage medium of Claim 28, wherein accessing includes receiving a system broadcast. 30. The non-transitory, tangible computer readable storage medium of Claim 29, wherein the common piece of information is a system clock signal. 31. The non-transitory, tangible computer readable storage medium of Claim 28, wherein the common piece of information is an IP address of a user device on which the non-transitory, tangible computer readable storage medium resides and operates. 32. A system comprising: a means for accessing a common piece of information; a means for deriving a schedule of application requests for a system resource, the requests being made by two or more applications sandboxed from each other on the system, the schedule derived from the common piece of information; and a means for performing the requests for the system resource according to the schedule. 33. The system of Claim 32, wherein the common piece of information is accessible to a plurality of applications sandboxed from each other on the system. 34. 
A method for reducing computing resource consumption in a user device that precludes direct communication between applications running on the user device, the method comprising: scheduling one or more requests for a system resource by a first application; passing a scheduling indicator from the first application to a second application via a loopback interface of the user device; determining a schedule of the one or more requests for the system resource by the first application from the scheduling indicator; scheduling a request for the system resource by the second application, the request for the system resource by the second application time-aligned with at least one of the one or more requests for the system resource by the first application; and executing the request for the system resource by the second application.
SYSTEMS AND METHODS TO COORDINATE RESOURCE USAGE IN TIGHTLY SANDBOXED ENVIRONMENTS BACKGROUND Field [0001] The presently disclosed embodiments relate generally to power conservation in a computing device, and more specifically to coordination of processes within a computing device. Background [0002] Some operating systems isolate (or sandbox) applications from each other to improve security and system stability. Sandboxing precludes applications from directly communicating with each other or being coordinated via a background service such as a daemon. Sandboxing can also preclude IP loopback communications, wherein two consenting applications use a loopback interface to communicate. Other forms of sandboxing can further preclude access to common system files. MICROSOFT'S WINDOWS 8 operating system, operating in METRO mode, is one example of an operating system that carries out some of the above forms of sandboxing. [0003] Yet, coordination between applications can be a key to certain power saving methods. For instance, when various applications use a modem at different times, the modem may remain active for long periods despite only being in use for short bursts of time. Coordination between the applications can activate the modem for shorter periods of time, and less often, thus conserving device power. For example, if several applications use the radio to advertise their presence (e.g., peer-to-peer applications looking for peers on other devices), it could be beneficial for these applications to send discovery messages at substantially the same time. In this fashion, the radio need only be activated when the messages are jointly sent, and can remain off until the discovery messages need to be sent again. Without such coordination, each application would utilize the radio to transmit according to its own schedule resulting in frequent activations of the radio. In a worst case scenario the radio would not have a chance to power down or idle. 
Other instances of application coordination can also reduce power or achieve other functionality. Coordination can make logic processes simpler to carry out. In an example, multiple applications may attempt to control an LED of a mobile device. Determining which application gets priority in this control is simplified if the requests for the LED arrive at substantially the same time. Thus, coordination of the requests can simplify the logic decision. This is just one of many instances where coordination of processes is beneficial for logic decisions. [0004] There is therefore a need in the art for coordination of application processes on systems where sandboxing is implemented. SUMMARY [0005] Embodiments disclosed herein address the above stated needs by enabling coordination of system resource requests from multiple applications despite various forms of sandboxing. Systems and methods are disclosed for coordinating resource usage between applications in a tightly sandboxed environment. A scheduling indicator can be left in a system file that multiple applications can use to align their requests for a system resource. Alternatively, IP loopback can be used to pass a scheduling indicator between applications that are otherwise sandboxed. If neither of these approaches is possible, then applications can schedule system resource requests using a common algorithm that selects a start time and optionally a period of subsequent system resource requests based on a common piece of information such as a system clock signal or IP address. In these ways the total amount of time during which the system resource is being utilized by various applications can be reduced, thus reducing power consumption and network activity. 
[0006] One aspect of the disclosure can be characterized as a method comprising, reading, by a second application, a scheduling indicator written to a publicly-accessible file by a first application, wherein the scheduling indicator indicates a schedule of one or more requests for a system resource by the first application. The method can further include scheduling a request for the system resource by the second application, the request for the system resource by the second application being time-aligned with at least one of the one or more requests for the system resource by the first application. The method can also include executing the request for the system resource by the second application. [0007] Another aspect of the disclosure can be characterized as a system having a first application, a memory, a second application, and a scheduling module. The memory can store a first schedule of one or more requests for a system resource by the first application. The scheduling module of the second application can read the first schedule from the memory and schedule a request for the system resource by the second application. The request for the system resource by the second application can be scheduled according to a second schedule based on the first schedule such that the request for the system resource by the second application is substantially time-aligned with at least one of the one or more requests for the system resource by the first application. Finally, the scheduling module of the second application can record the second schedule to the memory. [0008] Yet another aspect of the disclosure is an apparatus having a publicly-accessible file, a first application, and a second application. The first application can make one or more requests for a system resource, determine if the publicly-accessible file exists, and if not, then create the publicly-accessible file. 
The first application can further modify the publicly-accessible file to include data usable by one or more other applications to coordinate timing of their requests for the system resource with at least one of the one or more requests for the system resource made by the first application. The second application can read the publicly-accessible file and coordinate timing of a request for the system resource made by the second application with the timing of at least one of the one or more requests for the system resource made by the first application. The apparatus can also preclude direct communication between the first and second applications. [0009] Yet a further aspect of the disclosure can be characterized as a non-transitory, tangible computer readable storage medium, encoded with processor readable instructions to perform a method for coordinating requests for system resources of a plurality of applications running in an operating system that sandboxes applications from communicating with each other. The method can include accessing a common piece of information. The method can further include scheduling requests for the system resource by the first and second applications, where at least one request of the system resource by the first application is substantially time-aligned with at least one request for the system resource by the second application, the scheduling being derived via algorithm from the common piece of information. The method can also include executing the requests of the system resource by the first and second applications. [0010] Another aspect of the disclosure can be characterized as a system comprising various means. The system can include a means for accessing a common piece of information. The system can also include a means for deriving a schedule of application requests for a system resource, the requests being made by two or more applications sandboxed from each other on the system. 
The schedule can be derived from the common piece of information. Also, the system can include a means for performing the requests for the system resource according to the schedule. [0011] Yet another aspect of the disclosure can be characterized as a method for reducing computing resource consumption in a user device that precludes direct communication between applications running on the user device. The method can include scheduling one or more requests for the system resource by the first application. The method can further include passing a scheduling indicator from a first application to a second application via a loopback interface of the user device. The method can also include determining a schedule of the one or more requests for the system resource by the first application from the scheduling indicator. The method can yet further include scheduling a request for the system resource by the second application. The request for the system resource by the second application can be substantially time-aligned with at least one of the one or more requests for the system resource by the first application. Finally, the method can include executing the request for the system resource by the second application. BRIEF DESCRIPTION OF THE DRAWINGS [0012] FIG. 1 illustrates a user device 100 having a system resource 102 and applications 104 and 106 both making requests for the system resource 102; [0013] FIG. 2 illustrates a method 200 for coordinating usage of system resources by two or more applications sandboxed so as to be unable to communicate with each other or to receive communications from other applications such as a coordinating daemon; [0014] FIG. 3 illustrates a user device for scheduling aligned application requests for a system resource; [0015] FIG. 4 illustrates a method for scheduling aligned application requests for a system resource; and [0016] FIG. 
5 shows a diagrammatic representation of one embodiment of a machine in the exemplary form of a computer system. DETAILED DESCRIPTION [0017] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. [0018] As noted above, when multiple applications vie for the same system resource (CPU, GPU, memory, network interface, browser engine, to name a few) inefficient utilization of that system resource may result. Therefore, there is a need for coordination between applications. At the same time there may be certain applications whose inherent functionality depends upon coordination between applications. Yet, many operating systems prevent direct communication between applications and thus coordination is not possible via existing methods. This disclosure describes systems and methods for enabling coordination between applications in a tightly sandboxed environment, where applications are prevented from directly communicating with each other. [0019] In one embodiment, applications can schedule system resource requests based on a scheduling indicator stored in a system file accessible to all or most applications on the device (see FIGs. 1 and 2). This embodiment is advantageous since the scheduling indicator can include complex and sophisticated information to aid in scheduling. However, many operating systems have begun to preclude public access to system files, and thus this method is not always usable. Another embodiment, which enables coordination when there is not public access to system files, is to distribute a scheduling indicator to applications via the IP loopback protocol and a loopback interface. 
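The loopback embodiment can be sketched as follows. In this illustrative Python sketch, the first application publishes its scheduling indicator on the loopback interface and the second application reads it; the port number, JSON encoding, and indicator fields (a start time and a period) are assumptions made for illustration, not details taken from the disclosure.

```python
import json
import socket
import threading

LOOPBACK_ADDR = ("127.0.0.1", 50007)  # illustrative port, not from the disclosure

def serve_indicator(indicator: dict) -> threading.Event:
    """First application: publish a scheduling indicator over the loopback
    interface. Returns an event that is set once the socket is listening."""
    ready = threading.Event()

    def run():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(LOOPBACK_ADDR)
            srv.listen(1)
            ready.set()
            conn, _ = srv.accept()
            with conn:
                conn.sendall(json.dumps(indicator).encode())

    threading.Thread(target=run, daemon=True).start()
    return ready

def read_indicator() -> dict:
    """Second application: read the first application's scheduling indicator."""
    with socket.create_connection(LOOPBACK_ADDR) as cli:
        chunks = []
        while data := cli.recv(4096):
            chunks.append(data)
    return json.loads(b"".join(chunks))
```

The second application would then use the received start time and period to choose request slots that coincide with those of the first application.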
This approach also allows complex and sophisticated scheduling information to be circulated, but again suffers from the fact that some operating systems are precluding IP loopback as part of their sandboxing protocols. If neither of the first two approaches can be used, then applications can schedule system resource requests based on a common piece of information such as an IP address of the device or a system clock signal (see FIGs. 3 and 4). [0020] FIG. 1 illustrates a user device 100 having a system resource 102 and applications 104 and 106 both making requests for the system resource 102. If not aligned, these requests could conflict or at least cause inefficient utilization of the system resource 102. The system resource 102 can include any one or more of the application processor 120, the memory 112, or the network interface 142. The first application 104 can include a scheduling module 108 configured to leave a scheduling indicator in a public file 132 that a scheduling module 110 of the second application 106 can read to determine a schedule that the first application 104 intends to follow in making requests for the system resource 102. The scheduling module 110 of the second application 106 can determine a schedule for the second application 106 to make requests for the system resource 102 so as to time-align with the requests of the first application 104. In this way the scheduling modules 108, 110 help to coordinate system resource 102 requests by the two applications 104, 106, which otherwise cannot coordinate with each other via direct communication or via a daemon or other background service providing coordination instructions. Time-alignment of the requests enables the system resource 102 to decrease an amount of time that it remains in an active state, thus reducing power consumption and improving the user experience since the system resource 102 has more available time to attend to other requests and application needs. 
[0021] In some embodiments, an operating system (OS) (not illustrated) can approve of or implement the schedule of the second application 106. The OS may have certain guidelines that require modification to the schedule. For instance, the OS may not allow requests for system resources 102 to be made at exactly the same moment. The OS may therefore take the schedule and shift the times in which the second application 106 makes requests for the system resource 102 back by an amount of time, such that requests of the first and second applications 104, 106 occur sequentially. [0022] The user device 100 may include an application processor 120 on which the first and second applications 104, 106 run. The application processor 120 can include a cache 120, which can be a single memory component or a distributed system of memory components. For instance, each core of a multi-core application processor 120 can include its own cache component. [0023] The user device 100 may include a memory 112 that can include random access memory (RAM) 114 and optionally the cache 120 and optionally a portion of a hard drive (HDD) 118 that is being used as virtual memory 119. The HDD 118 can be a part of a storage 116. The memory 112 can store private files 134 and public files 132, where private files are only available to the system or a limited group of applications. The public file 132 is accessible by most if not all applications including the first and second applications 104, 106. This means the first and second applications 104, 106 can read the public file and in some embodiments they can also write to it. [0024] The user device 100 can further include a network interface 142 that enables communication between the user device 100 and a network 130 such as the Internet. The system resource 102, the network interface 142, the application processor 120, the memory 112, and the storage 116 are all interconnected and in communication with each other via a bus 140. 
[0025] The system resource 102 can include any resource that has the potential to power down or enter some sort of dormant state. In particular, the system resource 102 can include a processing resource, a network resource, or a memory resource. For instance, a processing resource can be a CPU, application processor 120, baseband processor, or GPU to name a few. The network resource can be the network interface 142 or a network connection, to name two non-limiting examples. The memory resource can be the memory 112 including any one or more of the cache 120, the RAM 114, and/or the virtual memory 119. Although the application processor 120, the memory 112, and the network interface 142 are illustrated as being separate from the system resource 102, it should be understood that this is merely to simplify the visualization of the user device 100, and that in practice the system resource 102 can include any one or more of these system resources. [0026] The scheduling module 110 of the second application 106 can search for the public file 132, and if it is not found, then it can create the public file 132. If the public file 132 does exist, then the scheduling module 110 can read the scheduling indicator in the public file 132. The scheduling indicator can include a schedule of requests for the system resource 102 made by or expected to be made by the first application 104. The scheduling indicator may also include designations of time slots that the first application 104 has reserved for making requests for the system resource 102. The scheduling indicator may alternatively be a time at which the first application 104 made a request for the system resource 102 along with an expected period between subsequent requests. The time can be periodically updated to simplify the calculation that the scheduling module 110 of the second application 106 performs in order to determine how to time-align system resource 102 requests with those made by the first application 104. 
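The public-file mechanism of [0026] can be sketched in Python. The sketch assumes a JSON file keyed by application name, with a start time and period recorded per application; the file layout and field names are illustrative assumptions, not part of the disclosure.

```python
import json
import os

def leave_indicator(path: str, app: str, start: float, period: float) -> None:
    """Create the public file if it does not exist, then record this
    application's request schedule (a start time plus a period)."""
    indicators = {}
    if os.path.exists(path):
        with open(path) as f:
            indicators = json.load(f)
    indicators[app] = {"start": start, "period": period}
    with open(path, "w") as f:
        json.dump(indicators, f)

def read_indicator(path: str, app: str):
    """Return another application's published schedule, or None if the
    public file does not exist or holds no indicator for that application."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f).get(app)
```

A second application would call `read_indicator` first; if it returns `None`, the application creates the file with its own schedule, otherwise it aligns to the schedule it finds.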
[0027] Upon reading the scheduling indicator, the scheduling module 110 can calculate or determine a schedule for the second application 106 to make requests for the system resource 102 so as to time-align with the requests that will be made by the first application 104. Time-alignment can include two or more requests occurring concurrently, substantially at the same time, or at different times. If occurring at different times, the two or more requests can occur sequentially, with as little lag between requests as possible. In some cases, requests can even overlap although not begin at the same instant. Whatever time the requests occur at, they may also share a common period. In some cases this includes each request having the same period, while in others two or more requests may occur with different periods, so time-aligned means that they are aligned according to a least common multiple of the various periods. In other embodiments, the scheduling module 110 may select non-concurrent and non-consecutive time slots in the schedule, where there is a specific reason for selecting such time slots. [0028] Where the scheduling module 110 selects time slots concurrent with those to be used by the first application 104, the second application 106 can make requests according to its schedule without doing anything further. However, when consecutive time slots are selected, the scheduling module 110 may leave a scheduling indicator in the public file 132 letting other applications know which time slots have been (or will be) used by the second application 106. This may involve modifying the existing scheduling indicator or adding a new scheduling indicator to the public file 132. [0029] The scheduling module 110 selects a schedule for the second application 106 wherein at least some requests are time-aligned with some requests of the first application 104. 
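The least-common-multiple form of time-alignment described in [0027] can be illustrated with a short sketch; integer periods and a shared zero start are simplifying assumptions.

```python
from math import lcm

def common_period(periods):
    """Two or more periodic request streams re-align every lcm of their
    periods, which is the common periodicity described in the claims."""
    return lcm(*periods)

def joint_request_times(periods, horizon):
    """Offsets from a shared start at which every stream fires together,
    up to (but not including) the given horizon."""
    step = common_period(periods)
    return list(range(0, horizon, step))
```

For example, applications with 6-second and 10-second periods jointly activate the resource every 30 seconds, so the resource can stay dormant between joint activations.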
In one instance, the second application 106 may immediately make a system resource 102 request regardless of alignment and then time-align the second system resource 102 request. In another instance, the second application 106 may delay its first system resource 102 request in order to time-align with the requests from the first application 104. Which option is used depends on the requirements of the second application 106. For instance, where an immediate delay in requesting the system resource 102 is not tolerable, the request may be immediately made with delay occurring on the second cycle so that all further requests are time-aligned with those of the first application 104. In one embodiment, a logical decision may be made to determine whether to delay the first system resource 102 request from the second application 106 or whether to make the request immediately and delay the second request. This determination may be made based on a threshold tolerance for the delay: if the required delay exceeds the threshold, then the second application 106 may make its first request without delay and without alignment, while delaying the second request in order to achieve alignment; if the required delay is below the threshold, then the second application 106 can delay its first system resource 102 request so that time-alignment is achieved with the first cycle. [0030] The embodiments of the user device 100 have so far been described in terms of a first and second application 104, 106. However, one of skill in the art will recognize that the operations of the second application 106 can also apply to a plurality of other applications such that a plurality of applications make concurrent or consecutive system resource 102 requests (time-aligned requests). 
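The threshold decision of [0029] reduces to a single comparison, sketched below; the function name and the use of seconds are illustrative assumptions.

```python
def schedule_first_request(now: float, next_aligned: float,
                           max_delay: float) -> float:
    """Decide when the second application fires its first request: wait for
    the aligned slot only if the required delay is within the tolerance
    threshold; otherwise fire immediately and align from the next cycle."""
    required_delay = next_aligned - now
    if required_delay <= max_delay:
        return next_aligned   # delay the first request; aligned from cycle one
    return now                # fire now; the second request will be aligned
```

In the first branch every request including the first is time-aligned; in the second branch only the initial request is unaligned.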
These embodiments also apply where new applications come online and time-align with a plurality of other applications that are already coordinated, and also apply where one or more coordinated applications pull out of the coordination and either run in a non-coordinated fashion or go offline. [0031] One of skill in the art will further understand that while only a single private file 134 is illustrated, in practice a plurality of private files 134 are likely to be encountered. Additionally, while components are illustrated as having direct communication with the bus 140, in some embodiments there may be interfacing components between a given component and the bus 140. For instance, a memory controller could act as an interface between the memory 112 and the bus 140. [0032] The RAM 114 can represent one or more hardware components. The HDD 118 can take a variety of forms such as a disc drive with magnetically stored bits or as a flash-based drive having bits stored via switches such as the charge on a transistor, to name just two examples. [0033] FIG. 2 illustrates a method 200 for coordinating usage of system resources by two or more applications sandboxed so as to be unable to communicate with each other or to receive communications from other applications such as a coordinating daemon. The method 200 begins with a first application (e.g., first application 104) searching for a public file 202 (e.g., public file 132) in a memory (e.g., memory 112). Where there is no public file 204, the public file can be created 206. The first application can execute a request for a system resource 214 (e.g., system resource 102), and then optionally leave a scheduling indicator in the public file 216. Alternatively, the first application can first leave a scheduling indicator in the public file 210 and then execute the request 214. 
The scheduling indicator can be used by other applications to coordinate their requests for the system resource with requests made by the first application. [0034] The method 200 can then repeat for a second application (e.g., second application 106). The second application may initially search for a public file 202, and upon finding the public file 204, the second application can read the scheduling indicator 208 stored by the first application. Based on the scheduling indicator the second application can select one or more times or timeslots for making a request for the system resource. The one or more times can be selected to time-align with the times when the first application will be making the same request. Alignment of the requests means that the system resource is more efficiently used than it would be with the first and second applications making non-aligned requests for the system resource. [0035] After selecting one or more times, the second application can execute the one or more requests at the selected one or more times 214. Optionally, the second application can leave a scheduling indicator in the public file before 212 or after 216 the execution of the request 214, where the scheduling indicator enables further applications to coordinate their requests for the system resource with the requests of the first and second applications. For the second application, the optional leave a scheduling indicator operations 212, 216 may be most useful where the first and second applications are time-aligned in a sequential rather than concurrent fashion. This is because, if the first and second applications make requests at the same times, then the scheduling indicator written to memory by the first application should be sufficient to enable further applications to also execute requests concurrently with the first and second applications. 
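The public-file protocol of method 200 can be sketched as below. The JSON file format, the file name, and the use of Unix timestamps as the scheduling indicator are all illustrative assumptions; the disclosure does not mandate any particular encoding:

```python
import json
import os
import time

PUBLIC_FILE = "public_schedule.json"  # illustrative path for the public file


def read_or_create_schedule(period):
    """Search for the public file (operations 202/204); create it if
    absent (206); otherwise read the scheduling indicator left by an
    earlier application (208) and align with its schedule.

    Returns (next_request_time, period).
    """
    if os.path.exists(PUBLIC_FILE):
        with open(PUBLIC_FILE) as f:
            indicator = json.load(f)
        # The advertised schedule overrides this application's own
        # preferred period, so that requests are time-aligned.
        start, period = indicator["start"], indicator["period"]
        elapsed = max(0.0, time.time() - start)
        cycles = int(elapsed // period) + 1
        return start + cycles * period, period
    # No public file: this application defines the schedule and
    # leaves a scheduling indicator for later applications (206/210).
    start = time.time()
    with open(PUBLIC_FILE, "w") as f:
        json.dump({"start": start, "period": period}, f)
    return start, period
```

A second application calling this function finds the file, ignores its own preferred period, and receives the next slot in the first application's schedule.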
However, where the first and second applications execute requests consecutively, then both the first and second applications should leave a scheduling indicator, or the second application should update the scheduling indicator, so that further applications can schedule their requests in time slots other than those already selected for the first and second applications. [0036] Although the create public file operation 206 is illustrated as taking place before the execute request for hardware resource operation 214, in some embodiments the create public file operation 206 can occur after the execute operation 214. [0037] In an alternative, applications can unwittingly coordinate by accessing a common piece of information (Block 402 in FIG. 4) and using it to determine the starting time for requests, with each application having code that requests the system resource with the same periodicity. A common piece of information could include an IP address, for instance. Each application can include an algorithm that determines a start time, and optionally a period, based on the common piece of information (Blocks 404 and 406). For instance, where the common piece of information is an IP address, the algorithm can derive a start time and optionally a period from the IP address. In some embodiments, all applications from a given library can be programmed to have a certain start time and period as derived from the common piece of information. Here, applications using different libraries may generate non-aligned system requests, but at least all those applications using a given library can be coordinated. Once the schedule of requests has been derived, the requests for the system resource can be executed per the schedule (Block 408). [0038] In another alternative embodiment, applications can coordinate using a system broadcast that the user device 100 would typically produce regardless of the herein disclosed systems and methods. 
For instance, a system clock signal is one example of a system broadcast. In such embodiments, the scheduling modules 108, 110 can be configured to monitor for system broadcasts and use the broadcast time as a reference point to set a schedule of requests for the system resource 102. Each scheduling module 108, 110 is configured to use the same period, but to use system broadcasts as the starting time or at least a reference for a start time. For instance, an offset from the system broadcast can be included such that the first and second applications 104, 106 execute the requests at the same start time delayed from the system broadcast. In some embodiments, the system broadcast can be periodic. [0039] Although sandboxed applications cannot directly coordinate with each other, applications may all have access to a common piece of information, such as information provided in a system broadcast. For instance, all applications may have access to a clock signal broadcast throughout the system, or to an IP address of the device. [0040] Alternatively, a daemon or coordinating application can pass coordinating information, such as a schedule or start time and period, to other applications via the IP loopback feature of most devices. In this embodiment, a first application transmits a scheduling indicator to a willing recipient application through the network interface of the user device. For instance, data packets can be routed to an IP address that is internal to the device, such that data packets pass to the network interface and are then routed back into the device to the recipient application. [0041] It should be understood that the order of method steps in FIG. 2 is exemplary only, and in alternative embodiments the method steps can be interchanged without departing from the scope of the invention. [0042] FIG. 3 illustrates a user device for scheduling aligned application requests for a system resource. 
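The IP-loopback coordination of paragraph [0040] can be sketched with ordinary UDP sockets bound to the loopback address. The port number and the JSON message format are illustrative assumptions, not part of the disclosure:

```python
import json
import socket

LOOPBACK = "127.0.0.1"
PORT = 50007  # illustrative coordination port


def send_schedule(start, period):
    """Coordinating application: route the scheduling indicator to an
    address internal to the device, so the packet passes to the
    network interface and is routed back to the recipient."""
    msg = json.dumps({"start": start, "period": period}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, (LOOPBACK, PORT))


def receive_schedule(timeout=1.0):
    """Willing recipient application: listen on the loopback port for
    a scheduling indicator from the coordinating application."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind((LOOPBACK, PORT))
        s.settimeout(timeout)
        data, _ = s.recvfrom(1024)
    return json.loads(data)
```

Because the packets never leave the device, this channel works even when the applications themselves are sandboxed from one another, provided both are permitted network access.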
The following description will include parenthetical references to the method illustrated in FIG. 4. The user device 300 can include a first application 304 and a second application 306 both attempting to access the same system resource 302. In this embodiment, both applications 304, 306 may have access to a common piece of information 332, such as an IP address of the user device 300 or a clock signal, to name two non-limiting examples. Upon accessing the common piece of information 332 (Block 402), scheduling modules 308, 310 in each of the first and second applications 304, 306 can run a scheduling algorithm 334 (Block 404) to select times (or alternatively to derive a schedule) for the requests of the system resource 302 (Block 406), where the algorithm derives the times based on the common piece of information. For instance, the scheduling algorithm 334 may parse an IP address of the user device 300 and convert the parsed IP address into a time and period, where the time is a first time that each of the first and second applications 304, 306 are to access the system resource 302, and the period is a period with which the first and second applications 304, 306 are to access the system resource 302 thereafter. Once the schedule is determined, the application processor 320 may execute the time-aligned requests for the system resource 302 (Block 408) made by the first and second applications 304, 306 according to the schedule. [0043] The common piece of information 332 and the scheduling algorithm 334 can be stored in a memory 312, which can reside in at least a part of one or more of the following: a cache 320, RAM 314, or virtual memory 319. The virtual memory 319 can be a segment of a hard drive (HDD) 318 that is set aside for memory 312 usage when more preferred memory (e.g., RAM 314) is filled. The HDD 318 can be a part of a storage 316. 
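One way the scheduling algorithm 334 could map a common piece of information, such as the device IP address, to a start time and period is a deterministic hash. The hashing scheme below is an illustrative assumption; any deterministic mapping shared by all applications in a given library would serve:

```python
import hashlib


def derive_schedule(ip_address: str, base_period: float = 60.0):
    """Deterministically derive (start_offset, period) from a common
    piece of information such as the device IP address.

    Every application running the same algorithm on the same device
    derives the same schedule, achieving time-alignment without any
    inter-application communication.
    """
    digest = hashlib.sha256(ip_address.encode()).digest()
    # Start offset within the period, taken from the first two digest
    # bytes so that devices spread their request times apart.
    start_offset = ((digest[0] << 8) | digest[1]) % int(base_period)
    return float(start_offset), base_period
```

Two sandboxed applications calling `derive_schedule` with the same IP address obtain identical offsets and periods, so their requests land in the same slots.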
[0044] In some instances, the common piece of information 332 can be stored as part of the BIOS, and can therefore be stored in a type of storage that is not illustrated. For instance, the BIOS is sometimes stored on a memory 312 that is separate from the cache 320, RAM 314, or virtual memory 319. In some cases, the common piece of information 332 can be stored in a ROM residing on the application processor 320 or accessible by the application processor 320. In other instances, the common piece of information 332 does not reside in memory but is a signal propagating through the user device 300, such as a system clock signal. [0045] In some instances, the scheduling algorithm 334 can be a part of the scheduling modules 308, 310. In other instances, the scheduling algorithm 334 can be part of an API used to develop the applications 304, 306. In one embodiment, access to the common piece of information 332 means that the applications 304, 306 can read the common piece of information 332. [0046] FIG. 4 illustrates a method for scheduling aligned application requests for a system resource. The method 400 can include accessing a common piece of information in accessing operation 402. Two or more applications can then run a scheduling algorithm in run scheduling algorithm operation 404. The algorithm can select times and optionally a period (or alternatively derive a schedule) for the requests of the system resource in selection operation 406. Once the times and optionally a period are selected, or the schedule is determined, the time-aligned requests can be executed in execute operation 408. [0047] The systems and methods described herein can be implemented in a machine such as a computer system in addition to the specific physical devices described herein. FIG. 
5 shows a diagrammatic representation of one embodiment of a machine in the exemplary form of a computer system 500 within which a set of instructions can execute for causing a device to perform or execute any one or more of the aspects and/or methodologies of the present disclosure. The components in FIG. 5 are examples only and do not limit the scope of use or functionality of any hardware, software, embedded logic component, or a combination of two or more such components implementing particular embodiments. [0048] Computer system 500 may include a processor 501, a memory 503, and a storage 508 that communicate with each other, and with other components, via a bus 540. The bus 540 may also link a display 532, one or more input devices 533 (which may, for example, include a keypad, a keyboard, a mouse, a stylus, etc.), one or more output devices 534, one or more storage devices 535, and various tangible storage media 536. All of these elements may interface directly or via one or more interfaces or adaptors to the bus 540. For instance, the various tangible storage media 536 can interface with the bus 540 via storage medium interface 526. Computer system 500 may have any suitable physical form, including but not limited to one or more integrated circuits (ICs), printed circuit boards (PCBs), mobile handheld devices (such as mobile telephones or PDAs), laptop or notebook computers, distributed computer systems, computing grids, or servers. [0049] Processor(s) 501 (or central processing unit(s) (CPU(s))) optionally contains a cache memory unit 502 for temporary local storage of instructions, data, or computer addresses. Processor(s) 501 are configured to assist in execution of computer readable instructions. Computer system 500 may provide functionality as a result of the processor(s) 501 executing software embodied in one or more tangible computer- readable storage media, such as memory 503, storage 508, storage devices 535, and/or storage medium 536. 
The computer-readable media may store software that implements particular embodiments, and processor(s) 501 may execute the software. Memory 503 may read the software from one or more other computer-readable media (such as mass storage device(s) 535, 536) or from one or more other sources through a suitable interface, such as network interface 520. The software may cause processor(s) 501 to carry out one or more processes or one or more steps of one or more processes described or illustrated herein. Carrying out such processes or steps may include defining data structures stored in memory 503 and modifying the data structures as directed by the software. [0050] The memory 503 may include various components (e.g., machine readable media) including, but not limited to, a random access memory component (e.g., RAM 504) (e.g., a static RAM "SRAM", a dynamic RAM "DRAM", etc.), a read-only component (e.g., ROM 505), and any combinations thereof. ROM 505 may act to communicate data and instructions unidirectionally to processor(s) 501, and RAM 504 may act to communicate data and instructions bidirectionally with processor(s) 501. ROM 505 and RAM 504 may include any suitable tangible computer-readable media described below. In one example, a basic input/output system 506 (BIOS), including basic routines that help to transfer information between elements within computer system 500, such as during start-up, may be stored in the memory 503. [0051] Fixed storage 508 is connected bidirectionally to processor(s) 501, optionally through storage control unit 507. Fixed storage 508 provides additional data storage capacity and may also include any suitable tangible computer-readable media described herein. Storage 508 may be used to store operating system 509, EXECs 510 (executables), data 511, API applications 512 (application programs), and the like. 
Often, although not always, storage 508 is a secondary storage medium (such as a hard disk) that is slower than primary storage (e.g., memory 503). Storage 508 can also include an optical disk drive, a solid-state memory device (e.g., flash-based systems), or a combination of any of the above. Information in storage 508 may, in appropriate cases, be incorporated as virtual memory in memory 503. [0052] In one example, storage device(s) 535 may be removably interfaced with computer system 500 (e.g., via an external port connector (not shown)) via a storage device interface 525. Particularly, storage device(s) 535 and an associated machine-readable medium may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for the computer system 500. In one example, software may reside, completely or partially, within a machine-readable medium on storage device(s) 535. In another example, software may reside, completely or partially, within processor(s) 501. [0053] Bus 540 connects a wide variety of subsystems. Herein, reference to a bus may encompass one or more digital signal lines serving a common function, where appropriate. Bus 540 may be any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures. As an example and not by way of limitation, such architectures include an Industry Standard Architecture (ISA) bus, an Enhanced ISA (EISA) bus, a Micro Channel Architecture (MCA) bus, a Video Electronics Standards Association local bus (VLB), a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, an Accelerated Graphics Port (AGP) bus, a HyperTransport (HTX) bus, a serial advanced technology attachment (SATA) bus, and any combinations thereof. [0054] Computer system 500 may also include an input device 533. 
In one example, a user of computer system 500 may enter commands and/or other information into computer system 500 via input device(s) 533. Examples of an input device(s) 533 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device (e.g., a mouse or touchpad), a touchpad, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), an optical scanner, a video or still image capture device (e.g., a camera), and any combinations thereof. Input device(s) 533 may be interfaced to bus 540 via any of a variety of input interfaces 523 (e.g., input interface 523) including, but not limited to, serial, parallel, game port, USB, FIREWIRE, THUNDERBOLT, or any combination of the above. [0055] In particular embodiments, when computer system 500 is connected to network 530, computer system 500 may communicate with other devices, specifically mobile devices and enterprise systems, connected to network 530. Communications to and from computer system 500 may be sent through network interface 520. For example, network interface 520 may receive incoming communications (such as requests or responses from other devices) in the form of one or more packets (such as Internet Protocol (IP) packets) from network 530, and computer system 500 may store the incoming communications in memory 503 for processing. Computer system 500 may similarly store outgoing communications (such as requests or responses to other devices) in the form of one or more packets in memory 503, which are then communicated to network 530 through network interface 520. Processor(s) 501 may access these communication packets stored in memory 503 for processing. [0056] Examples of the network interface 520 include, but are not limited to, a network interface card, a modem, and any combination thereof. 
Examples of a network 530 or network segment 530 include, but are not limited to, a wide area network (WAN) (e.g., the Internet, an enterprise network), a local area network (LAN) (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a direct connection between two computing devices, and any combinations thereof. A network, such as network 530, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. [0057] Information and data can be displayed through a display 532. Examples of a display 532 include, but are not limited to, a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a cathode ray tube (CRT), a plasma display, and any combinations thereof. The display 532 can interface to the processor(s) 501, memory 503, and fixed storage 508, as well as other devices, such as input device(s) 533, via the bus 540. The display 532 is linked to the bus 540 via a video interface 522, and transport of data between the display 532 and the bus 540 can be controlled via the graphics control 521. [0058] In addition to a display 532, computer system 500 may include one or more other peripheral output devices 534 including, but not limited to, an audio speaker, a printer, and any combinations thereof. Such peripheral output devices may be connected to the bus 540 via an output interface 524. Examples of an output interface 524 include, but are not limited to, a serial port, a parallel connection, a USB port, a FIREWIRE port, a THUNDERBOLT port, and any combinations thereof. [0059] In addition or as an alternative, computer system 500 may provide functionality as a result of logic hardwired or otherwise embodied in a circuit, which may operate in place of or together with software to execute one or more processes or one or more steps of one or more processes described or illustrated herein. 
Reference to software in this disclosure may encompass logic, and reference to logic may encompass software. Moreover, reference to a computer-readable medium may encompass a circuit (such as an IC) storing software for execution, a circuit embodying logic for execution, or both, where appropriate. The present disclosure encompasses any suitable combination of hardware, software, or both. [0060] Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. [0061] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. 
[0062] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. [0063] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal. [0064] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. 
Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Techniques and mechanisms to manage power states for a system-on-chip (SOC). Multiple modules of the SOC include a first module to perform a task including one or more accesses to a memory. In an embodiment, the SOC is transitioned to one of a path-to-memory-available (PMA) power state and a path-to-memory-not-available (PMNA) power state, where the transition is in response to an indication that, of the multiple modules, only the first module is to access the memory during the task. The PMA power state enables data communication between the memory and the first module and prevents data communication between the memory and any other module of the multiple modules. In another embodiment, the PMNA power state prevents data communication between the memory and any of the multiple modules, but allows a low latency transition from the PMNA power state to the PMA power state.
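The power-state logic summarized above can be illustrated with a toy model. The state names follow the abstract's PMA/PMNA terminology, but the class, method names, and transition rules shown are an illustrative reading of the abstract, not the claimed hardware:

```python
from enum import Enum


class PowerState(Enum):
    ACTIVE = "active"                        # all modules may access memory
    PMA = "path_to_memory_available"         # only the sole accessor may
    PMNA = "path_to_memory_not_available"    # no module may


class PowerManagementUnit:
    """Toy model of the PMU deciding between PMA and PMNA when only one
    module will access memory during its task."""

    def __init__(self, modules):
        self.modules = set(modules)
        self.state = PowerState.ACTIVE
        self.sole_accessor = None

    def signal_sole_access(self, module, memory_needed_now):
        """Signal that only `module` will access memory during its task.
        Transition to PMA if the path to memory is needed immediately,
        otherwise to PMNA (from which a low-latency transition back to
        PMA is assumed to be available)."""
        assert module in self.modules
        self.sole_accessor = module
        self.state = PowerState.PMA if memory_needed_now else PowerState.PMNA

    def may_access_memory(self, module):
        if self.state is PowerState.ACTIVE:
            return True
        if self.state is PowerState.PMA:
            return module == self.sole_accessor
        return False  # PMNA: data communication with memory is prevented
```

In PMA only the sole accessor's path to memory stays up; in PMNA even that path is taken down, trading access for power until the next PMA transition.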
1. A system-on-chip (SOC) circuit for providing memory access, the SOC circuit comprising:
a plurality of modules, including a first module, each of the plurality of modules including a corresponding circuit configured to request access to a memory;
a memory controller coupled to each of the plurality of modules; and
a power management unit including a circuit configured to receive one or more signals indicating that any access to the memory by the plurality of modules during a task of the first module is to be an access by the first module, wherein, in response to the one or more signals, the power management unit causes the SOC circuit to transition to one of a first power state and a second power state, wherein the first power state enables data communication between the memory and the first module and prevents data communication between the memory and any module of the plurality of modules other than the first module, and wherein the second power state prevents data communication between the memory and any module;
wherein the first module exchanges data to perform an operation of the task, including the first module exchanging data with the memory via the memory controller, and wherein the power management unit further performs a transition between the first power state and the second power state;
characterized in that:
a clock signal is provided to the first module during the first power state and during the second power state; and
of the plurality of modules, only the first module includes a circuit configured to request one of the first power state and the second power state, wherein, in response to a transition from the first power state to the second power state, the first module is configured to transition from enabling data exchange with the memory to preventing data exchange with the memory.
2. The SOC circuit of claim 1, wherein the SOC includes the memory.
3. The SOC circuit of any one of claims 1 and 2, wherein a memory clock signal is provided to the memory during the first power state, and wherein the memory clock signal is prevented from being provided to the memory during the second power state.
4. The SOC circuit according to any one of claims 1 to 2, wherein a module of the plurality of modules other than the first module is coupled to a power rail during a power state of the SOC other than the first power state and the second power state, and wherein the one module of the plurality of modules is decoupled from the power rail during one of the first power state and the second power state.
5. The SOC circuit according to any one of claims 1 to 2, wherein each of the plurality of modules is coupled to receive power via a corresponding power rail during active power states other than the first power state and the second power state, and wherein only the first module of the plurality of modules is coupled to receive power via a corresponding power rail during the first power state.
6. The SOC circuit of claim 5, wherein only the first module of the plurality of modules is coupled to receive power via a corresponding power rail during the second power state.
7. The SOC circuit of claim 5, wherein the memory controller is coupled to receive power during the first power state.
8. The SOC circuit of claim 7, wherein the memory controller is coupled to receive power during the second power state.
9. The SOC circuit according to any one of claims 1 to 2, wherein only the first module of the plurality of modules includes a circuit coupled to request one of the first power state and the second power state.
10. The SOC circuit of any one of claims 1 to 2, wherein, during the first power state, the memory is configured to receive a memory refresh signal from the memory controller.
11. The SOC circuit according to any one of claims 1 to 2, wherein performing the transition between the first power state and the second power state includes changing a power gating of the first module, the memory controller, or the memory.
12. The SOC circuit according to any one of claims 1 to 2, wherein performing the transition between the first power state and the second power state includes changing a clock gating of the first module, the memory controller, or the memory.
13. A computer-readable storage medium having instructions stored thereon which, when executed by one or more processing units, cause the one or more processing units to perform a method, the method comprising:
receiving one or more signals indicating that any access to a memory by a plurality of modules of a system-on-chip (SOC) during a task of a first module of the plurality of modules is to be an access by the first module;
in response to the one or more signals, transitioning to one of a first power state of the SOC and a second power state of the SOC, wherein the first power state enables data communication between the memory and the first module and prevents data communication between the memory and any module of the plurality of modules other than the first module, and wherein the second power state prevents data communication between the memory and any module;
exchanging data during the first power state to perform an operation of the task, including exchanging data between the first module and the memory via a memory controller of the SOC; and
performing a transition between the first power state and the second power state;
characterized in that:
a clock signal is provided to the first module during the first power state and during the second power state; and
of the plurality of modules, only the first module includes a circuit configured to request one of the first power state and the second power state, wherein, in response to a transition from the first power state to the second power state, the first module is configured to transition from enabling data exchange with the memory to preventing data exchange with the memory.
14. The computer-readable storage medium of claim 13, wherein the SOC includes the memory.
15. The computer-readable storage medium of any one of claims 13 and 14, wherein a memory clock signal is provided to the memory during the first power state, and wherein the memory clock signal is prevented from being provided to the memory during the second power state.
16. A method for providing memory access, the method comprising:
receiving one or more signals indicating that any access to a memory by a plurality of modules of a system-on-chip (SOC) during a task of a first module of the plurality of modules is to be an access by the first module;
in response to the one or more signals, transitioning to one of a first power state of the SOC and a second power state of the SOC, wherein the first power state enables data communication between the memory and the first module and prevents data communication between the memory and any module of the plurality of modules other than the first module, and wherein the second power state prevents data communication between the memory and any module;
exchanging data during the first power state to perform an operation of the task, including exchanging data between the first module and the memory via a memory controller of the SOC; and
performing a transition between the first power state and the second power state;
characterized in that:
a clock signal is provided to the first module during the first power state and during the second power state; and
of the plurality of modules, only the first module includes a circuit configured to request one of the first power state and the second power state, wherein, in response to a transition from the first power state to the second power state, the first module is configured to transition from enabling data exchange with the memory to preventing data exchange with the memory.
17. The method of claim 16, wherein a memory clock signal is provided to the memory during the first power state, and wherein the memory clock signal is prevented from being provided to the memory during the second power state.
18. The method according to any one of claims 16 to 17, wherein a module of the plurality of modules other than the first module is coupled to a power rail during a power state of the SOC other than the first power state and the second power state, and wherein the one module of the plurality of modules is decoupled from the power rail during one of the first power state and the second power state.
19. The method according to any one of claims 16 to 17, wherein each of the plurality of modules is coupled to receive power via a corresponding power rail during active power states other than the first power state and the second power state, and wherein only the first module of the plurality of modules is coupled to receive power via a corresponding power rail during the first power state.
20. A system for providing memory access, the system comprising:
a system-on-chip (SOC) circuit including:
a plurality of modules, the plurality of modules including a first module, each of the plurality of modules including a corresponding circuit configured to request access to a memory;
a memory controller coupled to each of the plurality of modules; and
a power management unit including a circuit configured to receive one or more signals indicating that any access to the memory by the plurality of modules during a task of the first module is to be an access by the first module, wherein, in response to the one or more signals, the power management unit causes the SOC circuit to transition to one of a first power state and a second power state, wherein the first power state enables data communication between the memory and the first module and prevents data communication between the memory and any module of the plurality of modules other than the first module, and wherein the second power state prevents data communication between the memory and any module;
wherein the first module exchanges data to perform an operation of the task, including the first module exchanging data with the memory via the memory controller, and wherein the power management unit further performs a transition between the first power state and the second power state; and
a dipole antenna for exchanging wireless communications based on operation of the SOC circuit;
characterized in that:
a clock signal is provided to the first module during the first power state and during the second power state; and
of the plurality of modules, only the first module includes a circuit configured to request one of the first power state and the second power state, wherein, in response to a transition from the first power state to the second power state, the first module is configured to transition from enabling data exchange with the memory to preventing data exchange with the memory.
21. The system of claim 20, wherein the SOC includes the memory.
22. The system according to any one of claims 20 and 21, wherein only the first module of the plurality of modules includes a circuit coupled to request one of the first power state and the second power state.
23. An apparatus for providing memory access, the apparatus comprising:
means for receiving one or more signals indicating that any access to a memory by a plurality of modules of a system-on-chip (SOC) during a task of a first module of the plurality of modules is to be an access by the first module;
means for transitioning, in response to the one or more signals, to one of a first power state of the SOC and a second power state of the SOC, wherein the first power state enables data communication between the memory and the first module and prevents data communication between the memory and any module of the plurality of modules other than the first module, and wherein the second power state prevents data communication between the memory and any module;
means for exchanging data during the first power state to perform an operation of the task, including exchanging data between the first module and the memory via a memory controller of the SOC; and
means for performing a transition between the first power state and the second power state;
characterized in that:
a clock signal is provided to the first module during the first power state and during the second power state; and
of the plurality of modules, only the first module includes a circuit configured to request one of the first power state and the second power state, wherein, in response to a transition from the first power state to the second power state, the first module is configured to transition from enabling data exchange with the memory to preventing data exchange with the memory.
24. The apparatus of claim 23, wherein a memory clock signal is provided to the memory during the first power state, and wherein the memory clock signal is prevented from being provided to the memory during the second power state.
25. The apparatus according to any one of claims 23 to 24, wherein a module of the plurality of modules other than the first module is coupled to a power rail during a power state of the SOC other than the first power state and the second power state, and wherein the one module of the plurality of modules is decoupled from the power rail during one of the first power state and the second power state.
26. The apparatus according to any one of claims 23 to 24, wherein each of the plurality of modules is coupled to receive power via a corresponding power rail during an active power state
other than the first power state and the second power state A rail to receive power, and wherein only the first module of the plurality of modules is coupled to receive power via a corresponding power rail during the first power state.
Power management of memory access in a system-on-chip

Technical field
The embodiments discussed herein generally relate to power management of integrated circuits. More specifically, certain embodiments relate, but are not limited, to power states that facilitate power-efficient access to the memory of a system-on-chip.

Background
In a system on chip (SOC), the circuit components of the SOC are integrated on a single chip. SOC integrated circuits are becoming increasingly common in a variety of applications including, for example, embedded applications such as set-top boxes, mobile phones, laptops, media devices, and so on. Although the high integration of components in an SOC provides advantages such as chip area savings and better signal quality, power consumption and performance latency are increasingly important constraints for devices that include such SOCs. Especially in mobile SOC applications, efficient power management functionality is a valuable aspect of many SOC implementations.

Memory access has a significant impact on SOC efficiency and performance. Typically, different components of an SOC variously access the same memory resources. Existing SOC memory access techniques variously involve powering up the entire SOC, or powering up the main voltage supply of the SOC, whenever the memory of the SOC needs to be accessed. However, such approaches carry significant costs, at least in terms of latency and transition energy. In addition, there are challenges associated with memory sharing between components of an SOC, such as latency requirements for component operation, power efficiency in accessing memory, and the like.

Brief description of the drawings
Various embodiments of the present invention are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which:
FIG. 1 is a high-level functional block diagram illustrating elements of a system on a chip that provides memory access according to an embodiment.
FIG. 2 illustrates a flowchart of elements of a method for operating a system on a chip according to an embodiment.
FIG. 3 is a state diagram illustrating power state transitions of a system on a chip according to an embodiment.
FIG. 4 is a timing diagram illustrating elements of a handshake for operating a system on a chip according to an embodiment.
FIG. 5 is a timing diagram illustrating elements of a task performed by a system on a chip according to an embodiment.
FIG. 6 is a high-level functional block diagram illustrating elements of a computer platform that provides access to memory resources according to an embodiment.
FIG. 7 is a high-level functional block diagram illustrating elements of a mobile device that provides access to memory resources according to an embodiment.

Detailed description
As the degree of integration in SOCs increases, the number and types of SOC components that use memory resources also increase. Therefore, the need to provide power-efficient memory access to SOC components is growing. The techniques and mechanisms discussed herein variously provide power states that facilitate efficient access to a memory by a particular module of multiple modules residing in an SOC. Such techniques and/or mechanisms can provide a first SOC power state in which access to the memory is provided to a first SOC module but not to one or more other SOC modules, which may instead have access to the memory in a different power state of the SOC. The power states may further include a second power state, which prevents both the first module and the other modules from accessing the memory.
However, the second power state can act as a standby power state that facilitates a low-latency transition to the first power state.

Figure 1 illustrates elements of a system-on-chip (SOC) 100 that provides power management for memory access according to certain embodiments. The SOC 100 is just one example of an integrated circuit (IC) that includes multiple components (referred to herein as "modules"), each of which variously accesses the same memory resources included in or coupled to the IC. Such an IC may provide one or more SOC power states that, with regard to memory availability, support memory access by only some (e.g., only one) of the multiple modules.

Certain embodiments are discussed herein with regard to power states that facilitate memory access by the module 130 of the SOC 100, where such power states prevent memory access by one or more other modules 110 of the SOC 100. However, such discussion can be extended to apply additionally or alternatively to memory access by any of various other modules of an SOC. The particular number and type of the one or more other modules 110 are merely illustrative, and are not limiting on certain embodiments.

The SOC 100 may include circuits that operate as a component of a desktop computer, laptop computer, handheld device (e.g., smartphone, palmtop device, tablet, etc.), game console, wireless communication device, or other such computing-capable device. To facilitate such operation, the SOC 100 may include a plurality of modules—for example, including a module 130 and one or more modules 110—and a memory controller 140 coupled to the plurality of modules to provide access to a memory included in or coupled to the SOC 100. By way of illustration and not limitation, the memory controller 140 may provide access to the memory 145 (for example, a dynamic random access memory (DRAM) module) included in the SOC 100.
In another embodiment, the memory 145 is part of another IC chip (not shown) that may be stacked with the SOC 100 in an IC die stack of a packaged device. The operation of the memory 145 and/or the memory controller 140 may conform to some or all of the requirements of, for example, a double data rate (DDR) specification (e.g., the DDR4 SDRAM JEDEC standard JESD79-4 of September 2012), a high bandwidth memory (HBM) specification (e.g., the HBM DRAM standard JESD235 of October 2013), or other such specifications.

Interconnect circuitry 120 may couple the various modules of the SOC 100 to the memory controller 140—and, in some embodiments, to each other—for various exchanges of data and/or control messages. The interconnect circuitry 120 may include any of a variety of combinations of one or more buses, crossbars, fabrics, and/or other connection mechanisms for variously coupling the modules 110, 130 to the memory controller 140. The interconnect circuitry 120 may include, for example, one or more address and/or data buses. It should be understood that some or all of the modules 110, 130 may each be coupled to the memory controller 140 via distinct communication paths. For example, according to some embodiments, one or more dedicated data and/or control lines, or the like, may be used to couple only a particular one of the modules 110, 130 to the memory 145. Communication between the modules 110, 130 and the memory controller 140 may be adapted from conventional communication techniques, which are not detailed herein and are not limiting on certain embodiments.

The modules 110, 130 may variously send requests to the memory controller 140 to access the memory 145—for example, where the modules 110, 130 request such access independently of one another. Although certain embodiments are not limited in this regard, one or more of the modules 110 may include a processor unit 111 coupled to the memory controller 140.
The processor unit 111 may include one or more cores 112 for executing an operating system (OS) (not shown). In addition, the processor unit 111 may include a cache memory (not shown), such as static random access memory (SRAM) or the like, or any of various types of internal integrated memory. In an example, the memory 145 may store a software program that may be executed by the processor unit 111. In some embodiments, the processor unit 111 may access basic input/output system (BIOS) instructions—for example, stored in the memory 145 or in a separate storage device.

The one or more modules 110 may include additional or alternative modules, as represented by the illustrative display module 114 for performing image data processing and the hub module 116 for acting as a hub for one or more other components (not shown) of the SOC 100. The hub module 116 may include, for example, a platform hub, an input/output (I/O) hub, or other such hub circuitry. Similar to the processor unit 111, the display module 114 and the hub module 116 may each variously access the memory 145 via the memory controller 140 at different times—for example, depending on the currently configured power state of the SOC 100.

The SOC 100 can operate in any of two or more power states at different times, and can provide logic—for example, including hardware, firmware, and/or executing software—to variously support, initiate, or implement transitions between such power states. According to an exemplary embodiment, the power management unit 105 of the SOC 100 may include state logic 162, which includes hardware and/or executing software for identifying a power state to be configured for the SOC 100—such identifying based, for example, in part on current and/or expected future operation of the modules 110, 130. In addition, the power management unit 105 may include, or be coupled to, circuitry with which the state logic 162 variously configures different power states at different times.
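The selection performed by state logic such as 162 can be sketched in Python as a minimal illustration. The function name, the enum, and the two-input decision rule below are hypothetical simplifications invented for this sketch, not a description of the claimed circuitry:

```python
from enum import Enum

class PowerState(Enum):
    ACTIVE = "Active"  # all modules may access the memory
    PMA = "PMA"        # path to memory available to the first module only
    PMNA = "PMNA"      # path to memory not available to any module

def select_power_state(first_module_needs_memory: bool,
                       other_modules_active: bool) -> PowerState:
    """Pick a target power state from current/expected module activity."""
    if other_modules_active:
        return PowerState.ACTIVE
    if first_module_needs_memory:
        return PowerState.PMA
    # PMNA acts as a standby state from which PMA can be resumed quickly;
    # a real implementation could instead pick a deeper low power state.
    return PowerState.PMNA
```

In practice the inputs would be the one or more activity-indicating signals received by the power management unit, rather than two booleans.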
By way of illustration and not limitation, the power management unit 105 may include clock gating logic 160, which includes circuitry for performing clock gating of one or more components of the SOC 100 to variously configure power states of the SOC 100. Alternatively or in addition, the power management unit 105 may include power gating logic 164 for performing power gating to configure such power states. In some embodiments, voltage supply logic 166 may selectively enable or disable one or more supply voltages to achieve a specified power state. The particular mechanisms by which such clock gating, power gating, and/or voltage regulation are implemented may be adapted from conventional power control mechanisms, which are not detailed herein to avoid obscuring features of certain embodiments.

In one embodiment, one or more power states configured with the power management unit 105 may selectively enable communication with the memory 145 for only a subset of the modules 110, 130. A first power state may enable data communication between the module 130 and the memory 145 via the memory controller 140, where the first power state also prevents some or all of the one or more modules 110 from participating in data exchanges with the memory 145. In some embodiments, a second power state acts as a standby mode that allows for a rapid transition to the first power state in order for the module 130 to access the memory 145. Such power states can provide improved power efficiency in servicing tasks of the module 130 that are considered critical to the operation of the SOC 100, or that are otherwise expected to execute at least during periods of memory access inactivity by one or more of the modules 110.

For example, the module 130 may provide functionality for I/O communication between the SOC 100 and an agent (not shown) coupled therewith.
Such an agent may reside on a platform that includes the SOC 100, or may alternatively communicate with such a platform via any of a variety of combinations of one or more wired and/or wireless networks. In an embodiment, the module 130 includes a communication processor, a modem, a WiFi network module, a Bluetooth network module, a cellular telephony module, or other such communication I/O interface hardware. In some embodiments, the module 130 includes a global positioning system (GPS) module, a global navigation satellite system (GNSS) module, or other receiver and/or transmitter circuitry for exchanging geolocation information. In still other embodiments, the module 130 includes streaming circuitry of the SOC 100 for outputting or receiving audio data streams. These are just some examples of the functionality provided by the module 130 to perform tasks that include memory access—for example, while one or more other modules 110 are in a relatively deep low power mode.

To efficiently support operation of the module 130 while one or more modules 110 are inactive (at least with regard to accessing the memory 145), the power management unit 105 may implement a power state that selectively disables data communication between the memory 145 and the one or more modules 110. In addition, when the module 130 does not access the memory 145 but can be expected to soon access the memory 145 during such inactivity of the one or more modules 110, the power management unit 105 may selectively implement another power state for additional power efficiency. Such power states may be variously implemented in response to signaling 150 exchanged between the module 130 and the power management unit 105. In some embodiments, the module 130 is the only one of the modules 110, 130 that can signal—via the signaling 150 or otherwise—the power management unit 105 to implement such a power state.
The signaling 150 can provide for fast operation of control circuitry that implements power state transitions independently of executing firmware (or other such code).

FIG. 2 illustrates elements of a method 200 for operating an SOC according to an embodiment. The method 200 may be performed, for example, to variously configure power states of the SOC 100. In an embodiment, the method 200 is performed using circuitry having some or all of the features of the power management unit 105.

The method 200 may include detecting, at 210, that during a task of a first module of a plurality of modules of the SOC, any access to a memory by the plurality of modules is to be access by the first module. The first module may have some or all of the features of the module 130—for example, where the plurality of modules are coupled to the memory 145 via the memory controller 140. The detection at 210 may be based on one or more signals received by, for example, the power management unit 105, which indicate current activity and/or expected future activity of the plurality of modules. Such one or more signals may specify or otherwise indicate that only the first module among the plurality of modules is expected to require memory access for at least a period of time, which allows for deactivation of memory access by one or more other modules of the plurality of modules (with accompanying power savings). The particular number and/or type of such one or more signals (which may be received as a priori input) is not limiting on certain embodiments.
The particular mechanisms by which such one or more signals are generated, transmitted, and/or evaluated may be adapted from conventional platform performance evaluation techniques, and are not detailed herein.

In response to the detection at 210, the method 200 may, at 220, transition the SOC to one of a first power state and a second power state, where the first power state enables data communication between the memory and the first module and prevents data communication between the memory and any one of the plurality of modules other than the first module. For brevity, such a first power state is referred to herein as a path-to-memory-available (PMA) power state. By contrast, the second power state can prevent data communication between the memory and any of the plurality of modules. However, the second power state may allow for a rapid transition to the first power state—for example, as compared to any corresponding transition that may be provided by another power state of the SOC. The second power state can therefore facilitate quick resumption, in the first power state, of memory access by the first module. For brevity, such a second power state is referred to herein as a path-to-memory-not-available (PMNA) power state.

During the first power state, the method 200 may, at 230, exchange data to perform an operation of the task of the first module. The exchanging at 230 may include exchanging data between the first module and the memory via a memory controller of the SOC. Before or after the data exchange at 230, the method 200 may, at 240, perform a transition of the SOC between the first power state and the second power state. Any change between enabling and preventing data communication between the memory and the plurality of modules due to the transition at 240 is a change in communication between the memory and the first module.
Accordingly, the first module may be the only one of the plurality of modules that, due to the transition performed at 240, transitions between being prevented from exchanging data with the memory and being allowed to exchange data with the memory. By contrast, each of the other modules can remain unable to communicate with the memory before, during, and after the transition at 240.

The transition at 220 may include transitioning the SOC from a power state other than either of the first power state and the second power state. For example, FIG. 3 shows a state diagram 300 that includes power states and power state transitions for an SOC (e.g., one operating according to the method 200). As illustrated in the state diagram 300, a state diagram 305—which includes the path-to-memory-available power state PMA 310 and the path-to-memory-not-available power state PMNA 320—may, according to one embodiment, be part of a larger state diagram that includes one or more other power states of the SOC. The state diagram 305 includes a transition 315 from PMA 310 to PMNA 320. Such a transition 315 may occur in response to the power management logic of the SOC detecting an opportunity to at least temporarily reduce power consumption (beyond the power savings already provided by PMA 310) prior to an expected upcoming memory access by the first module. The state diagram 305 further includes a transition 325 from PMNA 320 to PMA 310, which may occur, for example, in response to the first module indicating that such a next memory access is needed—e.g., while inactivity of the other modules is expected to continue.

The state diagram 300 and table 350 of FIG. 3 illustrate some differences between the PMA 310 and/or PMNA 320 power states and various conventional power states.
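The portion of the state diagram described so far can be rendered as a small Python model. The state names follow FIG. 3; the transition table below is a reading of the text (transition 315 PMA→PMNA, transition 325 PMNA→PMA, transition 335 between PMA and Active, transitions 345a/345b between low power states and PMA) and is an illustrative assumption, not the figure itself:

```python
# Allowed transitions, as read from the description of state diagram 300.
TRANSITIONS = {
    "Active": {"PMA"},                              # via transition 335
    "PMA":    {"PMNA", "Active", "LPS1", "LPS2"},   # 315, 335, 345a, 345b
    "PMNA":   {"PMA"},                              # via transition 325
    "LPS1":   {"PMA"},                              # via transition 345a
    "LPS2":   {"PMA"},                              # via transition 345b
}

# Which modules may access the memory in each state.
MEMORY_ACCESS = {
    "Active": "all modules",
    "PMA":    "first module only",
    "PMNA":   "no module",
    "LPS1":   "no module",
    "LPS2":   "no module",
}

class SocPowerModel:
    """Toy model enforcing the transitions of state diagram 300."""
    def __init__(self, state="Active"):
        self.state = state

    def transition(self, target):
        if target not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target
        return MEMORY_ACCESS[target]
```

Note that in this model, as in the text, PMNA can only be left via transition 325 back to PMA, which is what lets it serve as a low-latency standby for the first module's next memory access.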
However, those of ordinary skill in the art will recognize that the states and state transitions of the state diagram 300 outside the state diagram 305 are merely illustrative, and are not limiting on certain embodiments. In an embodiment, the state diagram 300 further includes a transition 335 from PMA 310 to a fully operational power state Active 330 outside the state diagram 305. While in Active 330, the SOC can support memory access by any and each of the multiple modules of the SOC. The state diagram 300 further shows various low power states LPS1 340a, LPS2 340b, ..., LPSn 340n outside the state diagram 305, where such low power states can variously be transitioned to and/or from PMA 310 via corresponding transitions 345a, 345b, ..., 345n. Some or all of such low power states may treat the multiple modules equally, at least with regard to supporting access to the memory. Although certain embodiments are not limited in this regard, LPS1 340a, LPS2 340b, ..., LPSn 340n may include any of various conventional standby, sleep, hibernation, and/or other power states. Examples of such conventional power states include, for example, power states such as S0i1, S0i2, and so on, of SOCs manufactured by Intel Corporation of Santa Clara, California, USA.

As shown in table 350, the low power states LPS1 340a, LPS2 340b, ..., LPSn 340n can variously include disabling the memory itself to prevent any data exchange—for example, where the memory device is decoupled, powered down, clock gated, power gated, and/or the like. As shown in the illustrative table 350, such deactivation may include placing the memory in a self-refresh mode, which, for example, prevents data exchange between the memory and the memory controller.
In contrast, the memory is enabled during PMA 310 to facilitate data exchange with the first module, and (in some embodiments) may even be so enabled during PMNA 320—for example, where some other component of the SOC is instead configured during PMNA 320 to prevent such data exchange.

In an embodiment, the memory itself is partially disabled during PMNA 320—for example, by placing the memory in a self-refresh mode and/or by gating, preventing, or otherwise restricting communication of a memory clock signal to the memory. During the PMA state, the memory can instead be configured to receive an explicit memory refresh signal from the memory controller—for example, rather than operating in a self-refresh mode. For example, as shown in table 350, a memory clock signal may be provided to the memory during the PMA power state, where the memory clock signal is prevented from being provided to the memory during the PMNA power state.

Alternatively or in addition, a system clock signal may be passed to the first module (rather than to other modules of the SOC) during PMA 310—and in some embodiments during PMNA 320—but to neither the first module nor the other modules during one or more other low power states of the SOC. Accordingly, a transition between the PMA power state and the PMNA power state—for example, one of the transitions 315, 325—may include changing power gating and/or clock gating of one or more of the first module, the memory controller, or the memory. Where the memory, the memory controller, and/or the first module remain at least partially powered and/or clocked during PMNA 320, restoring clock signaling and/or the like to some or all of these components of the SOC can readily be used to realize an "instant on" implementation of the transition 325.

In some embodiments, a module of the SOC other than the first module may be coupled to a power rail during an operational power state other than the PMA power state, where that module is clock gated, power gated, and/or decoupled from the power rail during the PMA power state and/or the PMNA power state. For example, each of the plurality of modules may be coupled to receive power via a corresponding power rail during Active 330, whereas only the first module of the plurality of modules is coupled to receive sufficient power during PMA 310 to enable memory access. The first module may also be the only one of the plurality of modules that is so coupled to power during PMNA 320.

In some embodiments, the memory controller is coupled to receive power during the PMA power state, and in some embodiments may be coupled to receive at least some power during the PMNA power state. For example, the memory controller may be power gated and/or clock gated during PMNA 320. Alternatively or in addition, the PMA power state may include interconnect circuitry being decoupled and/or powered down to prevent data communication between the memory controller and one or more modules of the SOC other than the first module. In such an embodiment, the PMNA power state may include other interconnect circuitry being decoupled and/or powered down to further prevent data communication between the memory controller and the first module.

Referring now to FIG. 4, a timing diagram 400 is shown for signals exchanged between a module of an SOC and power management logic of the SOC, where the module can selectively be provided with access to a memory via a PMA power state of the SOC. The timing diagram 400 may represent an exchange—such as an exchange of the signaling 150, for example—for controlling each of one or more transitions to a PMA power state or a PMNA power state.
For example, such one or more power state transitions may include one or both of the transitions 315, 325. The particular timing of the signals shown in the timing diagram 400 is not limiting on certain embodiments.

As shown in the illustrative timing diagram 400, a signal PreWake 410 may be asserted by the module, where PreWake 410 provides the power management logic with advance indication that a request for the PMA power state is to be expected. In response to PreWake 410, one or more clock signal sources of the SOC may be activated—for example, for the SOC to transition from a low power state (e.g., one of LPS1 340a, LPS2 340b, ..., LPSn 340n).

At time t1, a signal PMA_REQ 420 may be asserted by the module to request that the power management logic configure the PMA power state. Subsequently, the power management logic may assert a signal PMA_ACK 430, acknowledging back to the module the request conveyed by PMA_REQ 420. The request signal PMA_REQ 420 can then be de-asserted—for example, after the rising edge of PMA_ACK 430 is received by the module.

In response to the PMA power state request, MEM_LINK_STATUS 470 can be asserted by the power management logic to signal to the module that a link is available for the module to exchange data with the memory. In response, the module may access the memory via the link—for example, during the illustrative period between time t5 and time t6. During this period, a signal PMNA_REQ 440 may be asserted one or more times by the module to variously request that the power management logic configure the PMNA power state. Such assertions of PMNA_REQ 440 can be made by the module in anticipation of an upcoming period of inactivity (at least with regard to memory access).
The SOC may transition between the PMA power state and the PMNA power state multiple times during streaming and/or other operations of a task that accesses the memory.

Upon completion of the task, the module may assert a signal PMA_RELEASE 450 to indicate to the power management unit that (at least temporarily) the module no longer needs the memory—and, in some cases, that latency due to a future link-establishment procedure is acceptable. The power management logic may then assert a signal PMA_RELEASE_ACK 460—for example, during the de-assertion of MEM_LINK_STATUS 470—to confirm back to the module the reception of PMA_RELEASE 450. After MEM_LINK_STATUS 470 indicates that the memory is released, PreWake 410 may be de-asserted to signal to the power management unit that the PMA power state will not be needed—whereupon, for example, the SOC may transition to a low power state.

Referring now to FIG. 5, timing diagrams 500, 510 are shown to illustrate operation of an SOC, where such operation includes various power state transitions according to an embodiment. The timing diagrams 500, 510 may represent operation of an SOC that includes, for example, some or all of the features of the SOC 100. In an embodiment, one or more of the power transitions shown in FIG. 5 are performed according to operations of the method 200.

The timing diagrams 500, 510 represent characteristics of memory paging operations. Such memory paging operations may be performed, for example, in support of third-generation (3G) communications—e.g., according to the International Mobile Telecommunications-2000 (IMT-2000) specification.
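The handshake of FIG. 4 can be summarized as an ordered trace of signal events. The signal names below follow the figure; the flat event ordering (and the subsequence check over it) is a simplification assumed for illustration:

```python
# Ordered handshake events abstracted from timing diagram 400.
# Each entry: (asserting side, signal name, asserted level).
HANDSHAKE_ORDER = [
    ("module", "PreWake", 1),          # advance notice: PMA will be requested
    ("module", "PMA_REQ", 1),          # t1: request the PMA power state
    ("pmu",    "PMA_ACK", 1),          # power management logic acknowledges
    ("module", "PMA_REQ", 0),          # de-assert after the ack
    ("pmu",    "MEM_LINK_STATUS", 1),  # link available: memory accessible
    # ... t5..t6: module exchanges data; PMNA_REQ may toggle PMA<->PMNA ...
    ("module", "PMA_RELEASE", 1),      # task done, memory no longer needed
    ("pmu",    "PMA_RELEASE_ACK", 1),  # release confirmed back to the module
    ("pmu",    "MEM_LINK_STATUS", 0),  # link torn down
    ("module", "PreWake", 0),          # PMA power state no longer anticipated
]

def check_order(events):
    """Return True if the nominal handshake events appear in order
    (as a subsequence) within the given trace."""
    it = iter(events)
    # `expected in it` advances the iterator until a match, so this
    # verifies ordering, not mere membership.
    return all(expected in it for expected in HANDSHAKE_ORDER)
```

A checker like this could be used, for instance, to validate simulation traces of the signaling 150 against the expected protocol ordering.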
However, according to different embodiments, the features of the timing diagrams 500, 510 may be similarly applied to any of a variety of one or more additional or alternative operations.

As shown in the timing diagram 500, a module of the SOC (in this example, a modem) wakes up periodically (for example, every 1280 milliseconds) to implement any necessary paging operations that require access to the main memory of the SOC. A typical paging cycle can last ~20 ms, although some embodiments are not limited in this respect. In an embodiment, the modem may include a communication processor, controller, state machine, or other circuit that is active for only certain portions of the illustrated 20 ms paging cycle. For example, the modem's processor may need to access the memory for only about 10% of the cycle. However, when it does need to access the memory, the processor may not tolerate high latency in transitioning to a power state that accommodates such access.

As shown in the timing diagram 510, when the modem's processor (or other circuit) is active, it can assert a PMA_req signal to configure the SOC in the PMA power state. During such a PMA power state, the modem's processor may be able to access the main memory with very low latency. When the modem's processor enters an idle state (at least with regard to memory access), the modem can assert a PMNA_req signal to make the SOC transition to the PMNA power state. Configuration of the PMNA power state can prevent the modem from accessing the main memory; however, the PMNA power state can also adopt power saving measures in addition to those of the PMA power state. By way of illustration and not limitation, configuration of the PMNA power state may include placing the memory in a self-refresh mode and/or disabling one or more phase-locked loops (PLLs) that otherwise facilitate clock signaling.
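The power benefit of holding PMNA for the idle ~90% of the paging cycle can be shown with back-of-envelope arithmetic. The 1280 ms period, ~20 ms cycle, and ~10% memory-active fraction come from the example above; the relative power figures are invented for illustration only:

```python
# Duty-cycle figures from the 3G paging example in the text.
PAGING_PERIOD_MS = 1280.0       # modem wakes every 1280 ms
PAGING_CYCLE_MS = 20.0          # each paging cycle lasts ~20 ms
MEMORY_ACTIVE_FRACTION = 0.10   # ~10% of the cycle needs memory (PMA)

# Relative power per state, in arbitrary units (assumed, not from the text).
P_PMA, P_PMNA, P_SLEEP = 100.0, 20.0, 1.0

def average_power():
    """Average power when the idle part of the cycle sits in PMNA."""
    pma_ms = PAGING_CYCLE_MS * MEMORY_ACTIVE_FRACTION   # 2 ms in PMA
    pmna_ms = PAGING_CYCLE_MS - pma_ms                  # 18 ms in PMNA
    sleep_ms = PAGING_PERIOD_MS - PAGING_CYCLE_MS       # 1260 ms asleep
    return (P_PMA * pma_ms + P_PMNA * pmna_ms
            + P_SLEEP * sleep_ms) / PAGING_PERIOD_MS

def average_power_without_pmna():
    """Average power if the whole 20 ms cycle had to stay in PMA."""
    sleep_ms = PAGING_PERIOD_MS - PAGING_CYCLE_MS
    return (P_PMA * PAGING_CYCLE_MS
            + P_SLEEP * sleep_ms) / PAGING_PERIOD_MS
```

Under these assumed figures, spending the idle 18 ms of each cycle in PMNA rather than PMA roughly halves the average power of the paging workload, which is the motivation for toggling between the two states within a single cycle.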
During a single 20 ms paging cycle, the SOC may transition between the PMA power state and the PMNA power state multiple times.

FIG. 6 is a block diagram of an embodiment of a computing system in which power management of an SOC may be implemented. System 600 represents a computing device according to any of the embodiments described herein, and may be a laptop computer, desktop computer, server, gaming or entertainment control system, scanner, copier, printer, or other electronic device. The system 600 may include a processor 620, which provides processing, operation management, and execution of instructions for the system 600. The processor 620 may include any type of microprocessor, central processing unit (CPU), processing core, or other processing hardware to provide processing for the system 600. The processor 620 controls the overall operation of the system 600, and may be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.

The memory subsystem 630 represents the main memory of the system 600 and provides temporary storage for code to be executed by the processor 620 or data values to be used in executing a routine. The memory subsystem 630 may include one or more memory devices, such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM), or other memory devices, or a combination of such devices. Among other things, the memory subsystem 630 stores and hosts an operating system (OS) 636 to provide a software platform for execution of instructions in the system 600. In addition, other instructions 638 are stored and executed from the memory subsystem 630 to provide the logic and processing of the system 600.
The OS 636 and instructions 638 are executed by the processor 620. The memory subsystem 630 may include a memory device 632, in which it stores data, instructions, programs, or other items. In one embodiment, the memory subsystem 630 resides on an SOC 690 of the system 600 and includes a memory controller 634 for providing modules that also reside on the SOC 690 with access to the memory 632. The SOC 690 may include some or all of the features of the SOC 100. Such modules of the SOC 690 may include, for example, the processor 620, the network interface 650, and/or any of a variety of other such components of the system 600. According to the techniques discussed herein, a power management unit PMU 695 of the SOC 690 may variously configure power states of the SOC.

The SOC 690 is coupled to a bus/bus system 610. The bus 610 is an abstraction that represents any one or more separate physical buses, communication lines/interfaces, and/or point-to-point connections, connected by appropriate bridges, adapters, and/or controllers. Therefore, the bus 610 may include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, an Industry Standard Architecture (ISA) bus, a Small Computer System Interface (SCSI) bus, a Universal Serial Bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (commonly referred to as "Firewire"). The buses of the bus 610 may also correspond to interfaces in the network interface 650.

The system 600 may also include one or more input/output (I/O) interfaces 640, one or more internal mass storage devices 660, and a peripheral interface 670 coupled to the bus 610. The I/O interface 640 may include one or more interface components through which a user interacts with the system 600 (for example, a video, audio, and/or alphanumeric interface).
The network interface 650 provides the system 600 with the ability to communicate with remote devices (e.g., servers, other computing devices) over one or more networks. The network interface 650 may include an Ethernet adapter, a wireless interconnection component, a USB (Universal Serial Bus), or other wired or wireless standards-based or proprietary interfaces.

The storage device 660 may be or include any conventional medium for storing large amounts of data in a non-volatile manner, such as one or more magnetic, solid-state, or optical-based disks, or a combination thereof. The storage device 660 holds code or instructions and data 662 in a persistent state (i.e., the values are retained despite interruption of power to the system 600). Although the memory 630 is the execution or operating memory that provides instructions to the processor 620, the storage device 660 may also generally be regarded as a "memory". Whereas the storage device 660 is non-volatile, the memory 630 may include volatile memory (i.e., the value or state of the data is indeterminate if power to the system 600 is interrupted).

The peripheral interface 670 may include any hardware interface not specifically mentioned above. A peripheral generally refers to a device that connects dependently to the system 600. A dependent connection is one in which the system 600 provides a software and/or hardware platform on which an operation executes and with which the user interacts.

FIG. 7 is a block diagram of an embodiment of a mobile device in which power management of an SOC may be implemented. The device 700 represents a mobile computing device, such as a computing tablet, a mobile phone or smartphone, a wireless-enabled e-reader, or another mobile device. It will be understood that certain of the components are shown generally, and not all components of such a device are shown in the device 700. The device 700 may include a processor 710, which performs the main processing operations of the device 700.
The processor 710 may include one or more physical devices, such as a microprocessor, an application processor, a microcontroller, a programmable logic device, or other processing components. The processing operations performed by the processor 710 include execution of an operating platform or operating system on which applications and/or device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, and/or operations related to connecting the device 700 to another device. The processing operations may also include operations related to audio I/O and/or display I/O.

In one embodiment, the device 700 includes an audio subsystem 720, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. Audio functions may include speaker and/or headphone output, as well as microphone input. Devices for such functions may be integrated into the device 700 or connected to the device 700. In one embodiment, a user interacts with the device 700 by providing audio commands that are received and processed by the processor 710.

The display subsystem 730 represents hardware (for example, display devices) and software (for example, drivers) components that provide a visual and/or tactile display for a user to interact with the computing device. The display subsystem 730 may include a display interface 732, which may include a particular screen or hardware device used to provide a display to a user. In one embodiment, the display interface 732 includes logic separate from the processor 710 to perform at least some processing related to the display.
In one embodiment, the display subsystem 730 includes a touch screen device that provides both output and input to a user.

The I/O controller 740 represents hardware devices and software components related to interaction with a user. The I/O controller 740 is operable to manage hardware that is part of the audio subsystem 720 and/or the display subsystem 730. In addition, the I/O controller 740 illustrates a connection point for additional devices that connect to the device 700, through which a user might interact with the system. For example, devices that can be attached to the device 700 may include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications, such as card readers or other devices.

As mentioned above, the I/O controller 740 may interact with the audio subsystem 720 and/or the display subsystem 730. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of the device 700. In addition, audio output can be provided instead of, or in addition to, display output. In another example, if the display subsystem includes a touch screen, the display device also acts as an input device, which may be managed at least in part by the I/O controller 740. There may also be additional buttons or switches on the device 700 to provide I/O functions managed by the I/O controller 740.

In one embodiment, the I/O controller 740 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, gyroscopes, a global positioning system (GPS), or other hardware that may be included in the device 700.
The input can be part of direct user interaction, as well as providing environmental input to the system to affect its operation (for example, filtering for noise, adjusting a display for brightness detection, applying a flash for a camera, or other features).

In one embodiment, the device 700 includes power management 750 that manages battery power usage, charging of the battery, and features related to power saving operation. The memory subsystem 760 may include memory devices 762 for storing information in the device 700. The memory subsystem 760 may include non-volatile (the state does not change if power to the memory device is interrupted) and/or volatile (the state is indeterminate if power to the memory device is interrupted) memory devices. The memory 760 may store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of the system 700.

In one embodiment, the memory subsystem 760 includes a memory controller 764 (which may also be considered part of the control of the system 700). The device 700 may include an SOC 705 that includes the memory controller 764 and one or more modules that variously access the memory 762 via the memory controller 764 (where such modules include, for example, the processor 710, a modem 778, and/or the like). The SOC 705 may include some or all of the features of the SOC 100. The power management 750 may variously configure different power states of the SOC 705 at different times, where such power states include the PMA power state and the PMNA power state discussed herein.

Connectivity 770 may include hardware devices (for example, wireless and/or wired connectors and communication hardware) and software components (for example, drivers, protocol stacks) for enabling the device 700 to communicate with external devices.
The external device may be a separate device such as another computing device, a wireless access point, or a base station, as well as a peripheral such as a headset, printer, or other device.

Connectivity 770 may include multiple different types of connectivity. By way of generalization, the device 700 is illustrated with cellular connectivity 772 and wireless connectivity 774, for example via an illustrative dipole antenna 776. Cellular connectivity 772 generally refers to cellular network connectivity provided by a wireless carrier, such as via GSM (Global System for Mobile Communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, LTE (Long Term Evolution, also referred to as "4G"), or other cellular service standards. The wireless connectivity 774 refers to wireless connectivity that is not cellular, and may include personal area networks (for example, Bluetooth), local area networks (for example, WiFi), and/or wide area networks (for example, WiMax), or other wireless communication. Wireless communication refers to the transfer of data through the use of modulated electromagnetic radiation through a non-solid medium. Wired communication occurs through a solid communication medium.

The peripheral connection 780 includes hardware interfaces and connectors, as well as software components (for example, drivers, protocol stacks), for making peripheral connections. It will be understood that the device 700 can both be a peripheral device to other computing devices ("to" 782) and have peripheral devices connected to it ("from" 784). The device 700 commonly has a "docking" connector for connecting to other computing devices for purposes such as managing (e.g., downloading and/or uploading, changing, synchronizing) the content on the device 700.
In addition, a docking connector may allow the device 700 to connect to certain peripherals that allow the device 700 to control, for example, output of content to an audiovisual or other system.

In addition to a proprietary docking connector or other proprietary connection hardware, the device 700 can make peripheral connections 780 via common or standards-based connectors. Common types may include a Universal Serial Bus (USB) connector (which may include any of a number of different hardware interfaces), DisplayPort (including MiniDisplayPort (MDP)), High Definition Multimedia Interface (HDMI), Firewire, or other types.

In one implementation, an SOC circuit includes a plurality of modules including a first module, each of the plurality of modules including corresponding circuitry configured to request access to a memory; a memory controller coupled to each of the plurality of modules; and a power management unit including circuitry configured to receive one or more signals indicating that any access to the memory by the plurality of modules during a task of the first module is to be an access by the first module. In response to the one or more signals, the power management unit causes the SOC circuit to transition to one of a first power state and a second power state, wherein the first power state enables data communication between the memory and the first module and prevents data communication between the memory and any module of the plurality of modules other than the first module.
The first module exchanges data to perform an operation of the task, including the first module exchanging data with the memory via the memory controller, and the power management unit further performs a transition between the first power state and the second power state, wherein any change due to the transition between enabling communication between the memory and the plurality of modules and preventing communication between the memory and the plurality of modules relates to a change in the communication between the memory and the first module.

In an embodiment, the SOC includes the memory. In another embodiment, a memory clock signal is provided to the memory during the first power state, and the memory clock signal is prevented from being provided to the memory during the second power state. In another embodiment, a clock signal is provided to the first module during the first power state and during the second power state. In another embodiment, one module of the plurality of modules other than the first module is coupled to a power rail during power states of the system on chip other than the first power state and the second power state, and the one module of the plurality of modules is decoupled from the power rail during one of the first power state and the second power state.

In another embodiment, each of the plurality of modules is coupled to receive power via a corresponding power rail during active power states other than the first power state and the second power state, and only the first module of the plurality of modules is coupled to receive power via the corresponding power rail during the first power state. In another embodiment, only the first module of the plurality of modules is coupled to receive power via the corresponding power rail during the second power state. In another embodiment, the memory controller is coupled to receive power during the first power state.
In another embodiment, the memory controller is coupled to receive power during the second power state. In another embodiment, only the first module of the plurality of modules includes circuitry coupled to request one of the first power state and the second power state. In another embodiment, during the first power state, the memory is configured to receive a memory refresh signal from the memory controller. In another embodiment, performing the transition between the first power state and the second power state includes changing a power gating of the first module, the memory controller, or the memory. In another embodiment, performing the transition between the first power state and the second power state includes changing a clock gating of the first module, the memory controller, or the memory.

In another implementation, a computer-readable storage medium has instructions stored thereon that, when executed by one or more processing units, cause the one or more processing units to perform a method that includes receiving one or more signals, the one or more signals indicating that any access to a memory by a plurality of modules of a system on chip (SOC) during a task of a first module of the plurality of modules is to be an access by the first module, and in response to the one or more signals, transitioning to one of a first power state of the SOC and a second power state of the SOC, wherein the first power state enables data communication between the memory and the first module and prevents data communication between the memory and any module of the plurality of modules other than the first module. The method further includes an operation of exchanging data to perform the task during the first power state, including exchanging data between the first module and the memory via a memory controller of the SOC.
The method further includes performing a transition between the first power state and the second power state, wherein any change due to the transition between enabling communication between the memory and the plurality of modules and preventing communication between the memory and the plurality of modules relates to a change in the communication between the memory and the first module.

In an embodiment, the SOC includes the memory. In another embodiment, a memory clock signal is provided to the memory during the first power state, and the memory clock signal is prevented from being provided to the memory during the second power state. In another embodiment, a clock signal is provided to the first module during the first power state and during the second power state.

In another implementation, a method includes receiving one or more signals indicating that any access to a memory by a plurality of modules of a system-on-chip (SOC) during a task of a first module of the plurality of modules is to be an access by the first module, and in response to the one or more signals, transitioning to one of a first power state of the SOC and a second power state of the SOC, wherein the first power state enables data communication between the memory and the first module and prevents data communication between the memory and any module of the plurality of modules other than the first module. The method further includes an operation of exchanging data to perform the task during the first power state, including exchanging data between the first module and the memory via a memory controller of the SOC.
The method further includes performing a transition between the first power state and the second power state, wherein any change due to the transition between enabling communication between the memory and the plurality of modules and preventing communication between the memory and the plurality of modules relates to a change in the communication between the memory and the first module.

In an embodiment, a memory clock signal is provided to the memory during the first power state, and the memory clock signal is prevented from being provided to the memory during the second power state. In another embodiment, a clock signal is provided to the first module during the first power state and during the second power state. In another embodiment, one module of the plurality of modules other than the first module is coupled to a power rail during power states of the SOC other than the first power state and the second power state, and the one module of the plurality of modules is decoupled from the power rail during one of the first power state and the second power state.
In another embodiment, each of the plurality of modules is coupled to receive power via a corresponding power rail during active power states other than the first power state and the second power state, and only the first module of the plurality of modules is coupled to receive power via the corresponding power rail during the first power state.

In another implementation, a system includes a system-on-chip (SOC) circuit that includes a plurality of modules including a first module, each of the plurality of modules including corresponding circuitry configured to request access to a memory; a memory controller coupled to each of the plurality of modules; and a power management unit including circuitry configured to receive one or more signals, the one or more signals indicating that any access to the memory by the plurality of modules during a task of the first module is to be an access by the first module. In response to the one or more signals, the power management unit causes the SOC circuit to transition to one of a first power state and a second power state, wherein the first power state enables data communication between the memory and the first module and prevents data communication between the memory and any module of the plurality of modules other than the first module. An operation of the first module exchanging data to perform the task includes the first module exchanging data with the memory via the memory controller. The power management unit further performs a transition between the first power state and the second power state, wherein any change due to the transition between enabling communication between the memory and the plurality of modules and preventing communication between the memory and the plurality of modules relates to a change in the communication between the memory and the first module.
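The access rule at the heart of these implementations, that during the first power state only the first module may exchange data with the memory, can be sketched as follows. This is a minimal illustrative model, not the disclosed circuitry; the module names and the string labels for the power states are assumptions.

```python
# Sketch of the memory-access gating rule described above: in the first
# power state only the first module may talk to the memory; in the
# second power state no module may; other (fully active) power states
# are unrestricted in this toy model.
def may_access_memory(module, first_module, power_state):
    """Return True if `module` may exchange data with the memory."""
    if power_state == "first":
        # First power state: memory path open only for the first module.
        return module == first_module
    if power_state == "second":
        # Second power state: memory path closed for every module.
        return False
    # Any other active power state: no restriction modelled here.
    return True

modules = ["modem", "cpu", "display"]
allowed = [m for m in modules if may_access_memory(m, "modem", "first")]
# In the first power state only the first module ("modem") is allowed.
```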
The system further includes a dipole antenna for exchanging wireless communications based on operation of the SOC circuit. In an embodiment, the SOC includes the memory. In another embodiment, only the first module of the plurality of modules includes circuitry coupled to request one of the first power state and the second power state.

Techniques and architectures for managing power of a system-on-chip are described herein. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of certain embodiments. It will be apparent, however, to one skilled in the art that the embodiments can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description.

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the computing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated.
It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as is apparent from the discussion herein, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates data represented as physical (electronic) quantities within the computer system's registers and memories and transforms it into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices.

Certain embodiments also relate to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs) such as dynamic RAM (DRAM), EPROMs, EEPROMs, magnetic or optical cards, or any type of medium suitable for storing electronic instructions and coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus.
Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description herein. In addition, certain embodiments are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of such embodiments as described herein.

Besides what is described herein, various modifications may be made to the disclosed embodiments and implementations thereof without departing from their scope. Therefore, the descriptions and examples herein should be interpreted in an illustrative, rather than a restrictive, sense. The scope of the invention should be measured solely by reference to the claims that follow.
A graphics memory includes a plurality of memory partitions. A memory controller organizes tile data into subpackets that are assigned to subpartitions to improve memory transfer efficiency. Subpackets of different tiles may be further assigned to subpartitions in an interleaved fashion to improve memory operations such as fast clear and compression.
What is claimed is:

1. A method of organizing tile data in a partitioned graphics memory having a plurality of partitions, comprising: organizing tile data as an array of subpackets of information, wherein each subpacket has a tile location and a data size corresponding to that of a memory transfer data size of subpartitions of said partitioned graphics memory; for a first tile associated with one particular partition having a first subpartition and a second subpartition, pairing a first set of subpackets having a first set of tile locations with said first subpartition and pairing a second set of subpackets having a second set of tile locations with said second subpartition, wherein tile data may be accessed with a memory transfer data size less than that associated with a partition; and for a second tile associated with said one particular partition, pairing a first set of subpackets having said second set of tile locations with said first subpartition and pairing a second set of subpackets having said first set of tile locations with said second subpartition; wherein corresponding tile locations in said first tile and said second tile are paired with different subpartitions.

2. The method of claim 1, further comprising: for a data transfer operation associated with said first tile, generating a first ordered list for transferring subpackets associated with said first subpartition and generating a second ordered list for transferring subpackets associated with said second subpartition; for each memory access to said one particular partition associated with said first tile, accessing said first subpartition and said second subpartition according to said first ordered list and said second ordered list.

3. The method of claim 1, further comprising: performing a memory transfer operation to said one particular partition to simultaneously access corresponding tile locations in said first tile and said second tile.

4.
The method of claim 3, further comprising: performing a fast clear operation on said first tile and said second tile.

5. The method of claim 3, further comprising: performing a compression operation on said first tile and said second tile.

6. The method of claim 5, further comprising: storing compressed tile data in one subpacket of each of said first tile and said second tile.

7. The method of claim 6, further comprising: storing compressed data in an odd number of subpackets of each of said first tile and said second tile.

8. The method of claim 3, wherein said first tile and said second tile correspond to nearby tiles.

9. The method of claim 3, wherein tiles stored in a partition are assigned as either odd tiles or even tiles, wherein said first tile is an even tile and said second tile is an odd tile.

10. The method of claim 1, wherein each subpacket corresponds to data for at least one pixel.

11. The method of claim 1, wherein each subpartition comprises a DRAM.

12. A tiled graphics memory, comprising: a plurality of memory partitions, each partition having at least two subpartitions for storing data, each partition having an associated first memory access size and each subpartition having an associated second memory access size; and a memory controller configured to organize tile data into subpackets of information having said second memory access size, said memory controller assigning a tile to one selected partition and pairing each subpacket of said tile with one of said at least two subpartitions.

13. The tiled graphics memory of claim 12, wherein said memory controller is configured to generate a mask list for each subpartition of said tile to determine an order with which subpackets of a said tile are transferred.

14. The tiled graphics memory of claim 13, wherein said memory controller is configured to interleave subpartition locations within said one selected partition for corresponding tile locations of a first set of tiles and a second set of tiles.

15.
The tiled graphics memory of claim 14, wherein said memory controller is adapted to perform a fast clear operation.16. The tiled graphics memory of claim 12, wherein each partition has a first subpartion and a second subpartition, said memory controller for said tile pairing a first set of subpackets having a first set of tile locations with said first subpartition of said one selected partition and pairing a second set of subpackets having a second set of tile locations with said second subpartition of said one selected partition.17. The tiled graphics memory of claim 16, wherein said memory controller generates a first ordered list for transferring subpackets associated with said first subpartition and a second ordered list for transferring subpackets associated with said second subpartition so that for each memory access to said one selected partition said memory controller accesses said first subpartition and said second subpartition according to said first ordered list and said second ordered list.18. The tiled graphics memory of claim 16, wherein for a second tile associated with said one selected partition said memory controller pairs a first set of subpackets having said second set of tile locations with said first subpartition and pairs a second set of subpackets having said first set of tile locations with said second subpartition so that corresponding tile locations in said first tile and said second tile are paired with different subpartitions.19. The tiled graphics memory of claim 18, further comprising: performing a memory transfer operation to said one selected partition to simultaneously access corresponding tile locations in said first tile and said second tile.20. The tiled graphics memory of claim 19, further comprising: performing a fast clear operation on said first tile and said second tile.21. The tiled graphics memory of claim 19, further comprising: performing a compression operation on said first tile and said second tile.22. 
The tiled graphics memory of claim 21, further comprising: storing compressed tile data in one subpacket of each of said first tile and said second tile.23. The tiled graphics memory of claim 21, further comprising: storing compressed data in an odd number of subpackets of each of said first tile and said second tile.24. The tiled graphics memory of claim 19, wherein said first tile and said second tile correspond to nearby tiles.25. The tiled graphics memory of claim 19, wherein tiles are assigned as either odd tiles or even tiles, wherein said first tile is an even tile and said second tile is an odd tile.26. The tiled graphics memory of claim 12, wherein each subpacket corresponds to data for at least one pixel.27. The tiled graphics memory of claim 12, wherein each subpartition comprises a DRAM.
FIELD OF THE INVENTION

The present invention relates generally to a memory system in which the memory appears as a unified memory, but is comprised of a plurality of partitions. More particularly, the present invention is directed to improving the efficiency of memory accesses in a partitioned graphics memory.

BACKGROUND

In current graphics subsystems, the speed and number of graphical processing elements has increased enough to make the graphics memory subsystem a barrier to achieving high performance. FIG. 1 illustrates a graphics memory controller 10 of the prior art. The memory controller 10 acts as a switch to determine which of several graphics processing clients 12, 14, 16, 18, 20 can access the memory storage array 22, which is organized as a single, i.e., monolithic, partition. Typical graphics processing elements that are the memory clients include the host processor, a texture engine, a z-buffer engine, a color engine, a 2D-graphics engine, a 3D-graphics engine and a display engine. Each client 12, 14, 16, 18, and 20 requests one or more cycles of the memory storage array 22, which transfers in each cycle a data quantity equal to the size of the data bus 15 of the array.

The size of the memory data bus 15 sets the size of the minimum access that may be made to the graphics memory subsystem. Monolithic memory subsystems for the various graphics clients have evolved to use a wider memory data bus for increased throughput. However, this leads to inefficient accesses for some of the graphics processing elements of the graphics subsystem that may not need to access data requiring the full size of data bus 15.

Memory buses in current architectures are now typically 128 bits physically, and 256 bits (32 bytes), effectively, when the minimum data transfer requires both phases of a single clock cycle. (Hereinafter, when referring to the size of the data bus, the effective size, rather than the physical size, is meant.)
For some devices that make use of the graphics memory, 32 bytes is an acceptable minimum. However, the conventional minimum size of 32 bytes is inefficient because there are memory accesses for some of the clients 12, 14, 16, 18, and 20 that do not require the full minimum memory access size. In particular, as geometry objects become smaller and finer, a minimum access of 32 bytes has more data transfer bandwidth than is needed by the various graphics engines used to process graphical objects. One measure of the inefficiency of an access is the ratio of used pixels to fetched pixels. As the size of the memory bus or the minimum access increases, this ratio becomes smaller. A small ratio implies a large amount of wasted memory throughput in the graphics memory subsystem. It is desirable to avoid this wasted throughput without altering the memory clients' view of memory as a single unit.

In one proposed solution to the problem of this wasted throughput, it has been suggested to provide a memory system that includes a plurality of memory partitions. Such a multiple partition memory system is described in detail in U.S. patent application Ser. No. 09/687,453, entitled "Controller For A Memory System Having Multiple Partitions," commonly assigned to the assignee of the present invention, the contents of which are hereby incorporated by reference.

FIG. 2 shows a high-level block diagram of the system 200 proposed in U.S. patent application Ser. No. 09/687,453. A memory array 24 has a number of independently operable partitions 26a, 26b, 26c, 26d, each with a respective bus 28a, 28b, 28c, and 28d and a bus 27a, 27b, 27c, and 27d having a width w that is preferably a smaller transfer size than the single prior art bus 15 in FIG. 1.
In one embodiment, there are four independent partitions P0, P1, P2, and P3 (elements 26a, 26b, 26c, 26d) each with a bus width one quarter the size of the non-partitioned bus, i.e., each with a 64 bit bus. Each of the memory system clients 12, 14, 16, 18, and 20 is connected to memory controller 30. Memory controller 30 includes a number of queues 32, 34, 36, 38, 40, 42, 44, 46 that connect to the partitions 26a, 26b, 26c, and 26d of memory array 24. Control logic (not shown in FIG. 2) determines the one or more partitions to which a request should be routed and the one or more partitions from which a response (read data) to a request should be obtained to maintain the appearance of a non-partitioned memory for the clients. Additionally, the control logic in the memory controller arbitrates among the various clients according to a priority assigned to each of the clients.A drawback of the partitioned memory system proposed in U.S. patent application Ser. No. 09/687,453 is that the hardware requirements are larger than desired. Monolithic memory subsystems for the various graphics clients are evolving to use an increasingly larger memory data bus for increased throughput. For example, there is a trend in the graphics industry to increase the burst transfer length (BL) (e.g., from BL 4 to BL 8), which has the effect of increasing the minimum access size. Moreover, as described in the embodiment in U.S. patent application Ser. No. 09/687,453, each memory partition has its own hardware for controlling read and write operations to that partition in a coordinated fashion with the read/write operations to the remaining partitions. Thus, for each new partition added in the system the hardware implementation requirements increase as well. 
It would therefore be beneficial to have a partitioned memory system that provides efficient memory access while not requiring a large increase in the hardware needed to implement a virtually unified memory architecture with high throughput.

SUMMARY OF THE INVENTION

In a partitioned graphics memory, each partition of memory can be constituted by subpartitions. A tile is organized as data sections ("subpackets"), each having a data size corresponding to the memory transfer data size of a subpartition.

In one embodiment, each subpacket of a tile is assigned to a designated subpartition. The assignment may be further selected to facilitate data transfer operations. In one embodiment, a mask list is generated to select subpackets from the subpartitions of a partition.

In one embodiment, subpacket designations are swapped between different tiles to facilitate memory transfer operations, such as a fast clear or compression operation. In one embodiment, corresponding subpacket locations of two tiles have interleaved subpartition memory locations to permit a single memory access to a partition to access corresponding subpackets of the two tiles.

In one embodiment, each of the multiple subpartitions for a given partition shares the same controller hardware, thereby expanding the bus width for a given partition without a corresponding expansion of controlling hardware.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a prior art memory system for a graphics system.

FIG. 2 is a block diagram of a prior art partitioned memory.

FIG. 3 is a block diagram of a partitioned memory system in accordance with one embodiment of the present invention.

FIG. 4 illustrates pairing of data subpackets for data transfer to a partition in accordance with one embodiment of the present invention.

FIG. 5 illustrates subpacket interleaving of adjacent tiles in accordance with one embodiment of the present invention.

FIG.
6 illustrates a circuit for transferring data into memory subpartitions in accordance with one embodiment of the present invention.

FIG. 7 is a table illustrating exemplary data signals at locations within the circuit of FIG. 6.

FIG. 8 is a block diagram of a portion of a memory controller in accordance with one embodiment of the present invention.

DETAILED DESCRIPTION

FIG. 3 is a block diagram of one embodiment of a partitioned memory system 300 of the present invention. A memory structure 325 has a plurality of memory partitions 320a, 320b, 320c, and 320d. Each memory partition is further divided into at least two subpartitions. In an exemplary embodiment, each partition, also referred to as P0, P1, P2, and P3, is comprised of two subpartitions. For example, partition P0 has subpartitions P00 and P01. Partition P1 has subpartitions P10 and P11. Partition P2 has subpartitions P20 and P21. Finally, partition P3 has subpartitions P30 and P31.

A memory controller 310 controls access to the respective subpartitions of each of the partitions 320a, 320b, 320c, and 320d that make up a virtually unified memory structure. The memory transfer access size of a full partition is a first size, "a packet size," and the memory transfer access size of each subpartition is a second, smaller size, "a subpacket size." For example, in one embodiment, a main bus 390 having a bus width (BW) further branches into partition buses 380, each having a bus width BW/4. In turn, each subpartition receives half of the partition bus bandwidth, or BW/8. In one embodiment of the present invention, the memory controller has a 256 bit memory arrangement in which each of eight subpartitions includes a 32 pin wide dynamic random access memory (DRAM).

It is desirable to have DRAM access footprints in each memory subpartition organized as rectangles (or preferably squares) of pixels or texels in X, Y (or u, v, p) with respect to a surface.
This corresponds to operating on tiles of information rather than lines of information. In a tiled graphics memory, data representing a 3D surface is organized in memory as an array of tiles, with each tile corresponding to a portion of a representation of a surface. As an example, each tile may correspond to an array of pixels, such as a group of eight pixels. In one embodiment, tile data for a particular tile is stored in one of the partitions. However, the tile data is further organized across the subpartitions of that partition for efficient memory access during read and write operations. As will be described below in more detail, in some embodiments at least some nearby tiles are also preferably stored in the same partition.

Memory controller 310 includes at least one processing operation for which it is subpartition aware. Referring to FIG. 4, each tile 400 is subdivided into data sections 405, with each data section corresponding, in some embodiments, to data associated with one pixel. Each data section 405 has a data size corresponding to a memory transfer size of a subpartition, i.e., a subpacket size. Consequently, throughout the following discussion, each data section of a tile 400 will be referred to as a subpacket 405. Organizing tile data into subpackets permits memory controller 310 to organize data read and write operations with the minimum memory footprint associated with the subpacket size of the subpartitions. Also, it will be understood that a memory access to a single partition has a corresponding data packet size. Additional background on a tiled memory having data sections arranged as subpackets is described in the patent application of James Van Dyke et al., entitled "System And Method For Packing Data In A Tiled Graphics Memory," filed on Dec. 17, 2003, U.S. Patent Application Ser. No.
10/740,229, which is commonly owned by the assignee of the present invention, the contents of which are hereby incorporated by reference.

In one memory transfer to a partition by memory controller 310, each subpartition of the partition may be accessed. Within one partition having two subpartitions, such as P0, data sections may be read or written from both subpartitions (e.g., subpartitions P00 and P01 of partition P0) in a single memory transfer to the partition. In one embodiment, memory controller 310 generates a mask to indicate which subpackets 405 of a tile should be paired together as a packet for a given read transfer or write transfer of a partition.

FIG. 4 illustrates a sample pairing of subpackets 405 of a tile 400 with the associated subpartitions in which data for the data sections is read or written. In this illustrative example, each tile 400 has a total of eight subpackets. A subpartition designation is assigned to each subpacket in tile 400 to associate each subpacket with a subpartition. In this arrangement, "A" and "B" represent a subpartition designation for a given partition (for example, "A" corresponding to P00 and "B" corresponding to P01 of partition P0). Thus, data subpackets 0, 1, 2, 3, 4, 5, 6, and 7 of a tile are each assigned either an "A" designation or a "B" designation (i.e., A0, A3, A4, A7, B1, B2, B5, and B6). The A subpartition and the B subpartition each have a memory transfer data size corresponding to half the packet size for performing a memory transfer with the entire partition.

In one embodiment, memory controller 310 generates a transfer list for determining the order with which subpackets of a tile are transferred to and from the subpartitions. It is possible for the memory controller 310 to access an A subpacket and a B subpacket within tile 400 with a single access to the partition (e.g., a single data packet to the partition within which tile 400 resides). Subpartitions A and B have a degree of addressing independence.
This allows simultaneous access of one A, B pair of data subpackets from those marked A0, A3, A4, A7 and B1, B2, B5, B6 in FIG. 4. In one embodiment, an ordered list is generated for each subpartition to identify the order with which memory transfer operations, such as read or write operations, will take place with respect to the A, B pair of subpartitions. The ordered list may, for example, be implemented as mask information, such as a mask list, for each subpartition.

In one embodiment, memory controller 310 generates a 4 bit mask for each subpartition. For example, an A mask list has associated with it a mask field of elements 0, 3, 4 and 7, represented as A[xxxx], for memory transfer operations with the A subpartition. A B mask list has a mask field of elements 2, 1, 6, and 5, represented as B[yyyy], for memory transfer operations with the B subpartition. As an illustrative example, assume that the mask list generated by memory controller 310 for the A subpartition is A[1001] while the mask list generated for the B subpartition is B[1101], where a 1 in each position indicates the existence of a subpacket for that entry. The subpartition transfers will then take place in the following order: transfer 0 will include A0 and B2; transfer 1 will include A7 and B1; the final data transfer will be for B5 alone, since the A mask list does not identify a third subpacket. Since subpartition A can only access the subpackets A0, A3, A4, A7 and subpartition B can only access subpackets B1, B2, B5, B6, only A and B accesses can be paired.

In one embodiment of the present invention, an eight subpacket tile includes 8 subpackets, each of 16 bytes, such that a horizontal access of two subpackets results in a 32 byte by 1 horizontal line (such as subpackets A0 and B1). Alternatively, a small square of 16 bytes by 2 lines can be accessed when a vertical access is undertaken.
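The mask-list pairing described above can be sketched in a few lines of Python. This is an illustrative model only, not the controller's actual implementation: the fixed FIG. 4 subpacket-to-subpartition assignment is encoded as two field lists, and `pair_transfers` is a hypothetical helper name.

```python
from itertools import zip_longest

# Fixed subpacket-to-subpartition assignment from FIG. 4:
# subpartition A holds subpackets 0, 3, 4, 7; B holds 1, 2, 5, 6.
# The field orders below match the mask fields A[xxxx] and B[yyyy]
# described in the text.
A_FIELD = [0, 3, 4, 7]
B_FIELD = [2, 1, 6, 5]

def pair_transfers(a_mask, b_mask):
    """Pair one A subpacket with one B subpacket per partition access.

    a_mask and b_mask are 4-bit lists ordered as A_FIELD and B_FIELD;
    a 1 bit means the corresponding subpacket is part of the request.
    Returns (A subpacket, B subpacket) pairs, with None when one
    subpartition has run out of subpackets to transfer.
    """
    a_list = [sp for sp, bit in zip(A_FIELD, a_mask) if bit]
    b_list = [sp for sp, bit in zip(B_FIELD, b_mask) if bit]
    return list(zip_longest(a_list, b_list))

# The example from the text: A[1001] selects A0 and A7, while B[1101]
# selects B2, B1, and B5, so three partition accesses are needed.
print(pair_transfers([1, 0, 0, 1], [1, 1, 0, 1]))
# -> [(0, 2), (7, 1), (None, 5)]
```

Run against the text's example, the sketch reproduces the stated schedule: transfer 0 carries A0 and B2, transfer 1 carries A7 and B1, and the final transfer carries B5 alone.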
For example, the ordered list may call for an access of A0 and B2.

As previously discussed, data associated with a representation of a 3D surface is organized in memory as an array of tiles. A single partition may provide memory for many nearby tiles. In some embodiments, tile data within a partition is further arranged to facilitate memory operations being performed simultaneously on two or more tiles associated with a partition. FIG. 5 illustrates two tiles 500, with individual tiles T0 and T1 designated as an even tile and an odd tile, respectively. The tiles are both associated with a single partition that has an A subpartition and a B subpartition, as previously described in regards to an individual tile. The A, B designations are swapped in corresponding tile locations of the odd tile and the even tile, which can also be described as an interleaving process, since corresponding tile locations of odd and even tiles are associated, in an interleaved manner, with the two different subpartition memory locations. For example, while representative location 505 of odd tile T1 is paired with a B subpartition, the corresponding tile location 555 of even tile T0 is paired with an A subpartition. Thus, as can be seen in FIG. 5, all corresponding tile locations of even numbered tile T0 and odd numbered tile T1 have their subpartition designations swapped. Tile T0 has subpartition A with mask bits 0, 3, 4 and 7 and subpartition B with mask bits 1, 2, 5, and 6. However, tile T1 has subpartition B with mask bits 0, 3, 4 and 7 while subpartition A has mask bits 2, 1, 6 and 5.

The swapped subpartition designations illustrated in FIG. 5 permit a single A, B memory access of a partition to be used to access corresponding subpackets of odd and even tiles stored within the partition. Thus, for example, if subpacket "0" is used for a tile operation, then by interleaving the A, B designations as shown in FIG.
5, a single A, B memory transfer can be used to access both "0" subpackets 505 and 555 of an odd tile and an even tile which are interleaved in the manner described above. One application of this interleaving is in a fast clear operation. In a fast clear operation, blocks of data subpackets are written as part of a clearing operation. Interleaving the subpackets permits a fast clear of two tiles at once, in which a write is simultaneously performed to corresponding subpackets of each of the two tiles. By alternating this arrangement over odd and even tiles, it is possible in a 32B partition for an 8x fast clear operation to simultaneously write a 16B subpacket A0 of tile T0 and a 16B subpacket B0 of tile T1 in a single 32B access.

The alternate subpacket interleaving of odd and even tiles illustrated in FIG. 5 also allows pairing of A and B subpartition accesses of compressed data (compressed data will typically reside in the same numbered subpacket) in two nearby tiles. If an 8x compression scheme is used for a fast clear format, a fast clear compressed tile may be represented by 16B in the "0" position. Thus, for example, if 8x compression permits all of the tile data to be stored in compressed form in the "0" position, a single A, B memory transfer may be used to access the compressed data in tile T0 and tile T1.

Interleaving of odd and even tiles also supports various forms of data compression. Data is typically compressed to reduce the memory bandwidth required to transfer a given amount of information. Compressed data may be stored in a small number of subpackets, less than the number of an entire tile. With subpacket interleaving as previously described, the likelihood of having A, B subpackets available that may be paired increases for a given locality of rendering. Subpackets from A, B subpartitions from different tiles may then be easily paired.
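The odd/even swap of FIG. 5 can be modeled with a small lookup. This is a sketch under the stated FIG. 5 assignment (even tiles: subpackets 0, 3, 4, 7 in A; odd tiles: the swapped designation); `subpartition` is a hypothetical helper name, not an API of the memory controller.

```python
# Subpartition designations for an even tile (FIG. 5, tile T0);
# odd tiles carry the swapped designation in every tile location.
EVEN_TILE = {0: 'A', 3: 'A', 4: 'A', 7: 'A',
             1: 'B', 2: 'B', 5: 'B', 6: 'B'}

def subpartition(tile_index, subpacket):
    """Return which subpartition ('A' or 'B') holds a subpacket of a tile."""
    designation = EVEN_TILE[subpacket]
    if tile_index % 2 == 1:  # odd tile: swap the A/B designation
        designation = 'B' if designation == 'A' else 'A'
    return designation

# Corresponding locations of an even tile and an odd tile always land
# in different subpartitions, so one A, B access to the partition can
# touch subpacket "0" of both tiles at once, e.g. during a fast clear.
for sp in range(8):
    assert subpartition(0, sp) != subpartition(1, sp)
```

The assertion loop checks the property the text relies on: because every location swaps designation between tiles T0 and T1, any same-numbered subpacket pair from the two tiles forms a valid A, B pair for a single partition access.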
Combining A, B subpackets across tile boundaries also allows the compressed data to occupy the size of only one subpacket or an odd number of subpackets. This allows higher or variable compression ratios for a given tile size.

Pairing of subpackets of different nearby odd and even tiles is not restricted to compressed data. Uncompressed subpacket data, or a mixture of compressed and uncompressed subpacket data, may be paired. The pairing between subpackets of different tiles may also be selected to increase DRAM data transfer efficiency. In this embodiment, the pairing is selected to pair tiles across tile boundaries to reduce memory transfers. This may include, for example, pairing subpackets based upon a memory location attribute or upon a data operation attribute. For example, nearby tiles may have subpacket interleaving in horizontal or vertical directions.

As previously described, in one embodiment each subpartition includes a DRAM. The DRAM of a subpartition is addressable by column, bank, and row. In one embodiment, tile addressing is organized such that subpackets within a tile share the same DRAM bank address and the same DRAM row address. In some embodiments, the tile addressing is further organized such that subpackets within a tile also share some of the DRAM column address. Moreover, operations to all subpartitions within a partition may be identical to facilitate the use of a common memory controller. For example, subpartitions within a partition may receive identical commands, including but not restricted to read, write, precharge, activate, mode register set (MRS), and extended mode register set (EMRS) commands, such that all subpartitions within a partition may be served by one common memory controller. Moreover, all subpartitions within a partition may share the same DRAM bank address and the same DRAM row address. In some embodiments, all subpartitions may share some of the DRAM column address.
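The tile addressing property described above can be illustrated with a toy address decomposition. The bit widths and field positions below are assumptions chosen purely for illustration (2 bank bits, 12 row bits, 3 column bits per tile), not the addressing scheme of any particular DRAM; the point is only that subpackets of one tile differ in a few low column bits while sharing bank and row.

```python
def dram_address(tile_index, subpacket):
    """Decompose a (tile, subpacket) pair into illustrative DRAM fields.

    Assumed layout: 2 bank bits and 12 row bits derived from the tile
    index, with the subpacket index supplying only 3 low column bits.
    """
    bank = tile_index & 0b11
    row = (tile_index >> 2) & 0xFFF
    column = subpacket & 0b111  # only these bits differ within a tile
    return bank, row, column

# All eight subpackets of one tile share the same bank and row, so
# bank and row address pins can be common to both subpartitions and
# only the low column bits need to be unique per access.
addresses = [dram_address(42, sp) for sp in range(8)]
assert len({(bank, row) for bank, row, _ in addresses}) == 1
```

Under this kind of layout, the bank and row pins carry identical signals for every subpacket of a tile, which is the organization the text credits with reducing chip I/O to the subpartition DRAMs.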
An additional benefit of this organization of tile addressing is that it also facilitates reduced chip I/O to the DRAMs of the subpartitions, because some address and command pins of the subpartitions are identical and may be shared.

Controller 310 may use one or more rules for determining an efficient A, B pairing between subpartitions from different tiles to reduce memory transfers. One rule that may be applied is that paired A, B subpartitions have the same DRAM bank address within the partition. Another rule that may be applied is that paired A, B subpartitions are from the same DRAM row address within the partition. Still another rule that may be applied is that paired A, B subpartitions may share some of the column address on any DRAM address of the partition. Still yet another rule that may be applied is that paired subpartitions A, B are both performing a read operation or both performing a write operation on tiles.

Embodiments of the present invention also include other applications of interleaving. In one embodiment, subpartitions are interleaved within a tile for A, B subpartition load balancing. In an alternate embodiment, a tile is assigned to only one subpartition and alternating or nearby tiles are assigned to the other subpartition. A benefit of this alternate embodiment is that it permits a single tile access to access only one subpartition DRAM, allowing more independence between the subpartition DRAMs.

FIG. 6 illustrates an embodiment of a circuit 600 for transferring data from an internal bus to subpartitions A, B. In this embodiment, each subpartition stores a contiguous 16B of data. The DRAM in subpartition A holds the contiguous 4 byte data quantities a, b, c, d and expects data presented in that order from bus 602. Subpartition B holds the contiguous 4 byte data quantities e, f, g, h and expects data presented in that order from bus 603. Data bus 601 is 16 bytes wide and presents data in two successive 16B quantities, first as abcd and on the second transfer as efgh.
DRAMs transfer data on the rising and falling edges of the DRAM clock, hence the use of muxes 608 and 609. A means is therefore required to rearrange the data from bus 601 for output to buses 602 and 603.FIG. 7 is a table demonstrating how circuit 600 rearranges data from input bus 601 to subpartition output buses 602 and 603. At time 0, 16 bytes of data abcd is presented to input bus 601. Mux select 606 is "0" and steers the lower 8 bytes data a, b into register 604.At time 1, input data 601 is 16 bytes efgh. Register 604 contains 8 bytes data a, b, and register 605 contains 8 bytes data c, d. Mux select 606 is "1" and steers register 605 8 bytes data c, d through mux 610 to the input of register 604. Mux select 606 is "1" and steers 8 bytes input data ef through mux 611 into the input of subpartition B mux 609. Register 604 provides subpartition A mux 608 with 8 bytes data ab. Select 607 is "0", causing muxes 608 and 609 to steer 4 bytes data a and 4 bytes data e into subpartition buses 602 and 603 respectively.At time 2, select 607 is "1", causing muxes 608 and 609 to steer 4 bytes data b and 4 bytes data f into subpartition buses 602 and 603 respectively.At time 3, register 604 contains 8 bytes data cd, and register 605 contains 8 bytes data gh. Select 606 is "0" and steers 8 bytes data gh into subpartition B mux 609. Register 604 provides 8 bytes data cd into subpartition A mux 608. Mux select 607 is "0" and steers 4 bytes data c through subpartition A mux 608 onto subpartition A bus 602. Mux select 607 is "0" and steers 4 bytes data g through subpartition B mux 609 onto subpartition B bus 603.At time 4, mux select 607 is "1" and steers 4 bytes data d through subpartition A mux 608 onto subpartition A bus 602. Mux select 607 is "1" and steers 4 bytes data h through subpartition B mux 609 onto subpartition B bus 603.FIG. 8 is a functional block diagram illustrating a control module 800 for pairing and unpairing subpackets. 
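Before turning to the control module of FIG. 8, the net effect of the FIG. 6 rearrangement, as tabulated in FIG. 7, can be modeled behaviorally. This sketch abstracts away the register and mux timing and only shows the resulting per-clock-edge schedule; `rotate_to_subpartitions` is a hypothetical helper name.

```python
def rotate_to_subpartitions(beats):
    """Behavioral model of the FIG. 6 data rearrangement.

    `beats` is the pair of successive 16-byte transfers on input bus
    601, each modeled as a list of four 4-byte quantities, e.g.
    [['a','b','c','d'], ['e','f','g','h']].  Subpartition A (bus 602)
    expects the first beat's quantities in order and subpartition B
    (bus 603) expects the second beat's, one quantity per DRAM clock
    edge on each bus.
    """
    bus_a, bus_b = beats
    # One (A, B) pair of 4-byte quantities is driven per clock edge,
    # matching the schedule tabulated in FIG. 7.
    return list(zip(bus_a, bus_b))

schedule = rotate_to_subpartitions([list('abcd'), list('efgh')])
print(schedule)
# -> [('a', 'e'), ('b', 'f'), ('c', 'g'), ('d', 'h')]
```

The output matches the FIG. 7 table: at each successive edge, buses 602 and 603 carry (a, e), then (b, f), (c, g), and (d, h), so both subpartitions receive their contiguous 16B in order even though the input bus delivered abcd and efgh serially.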
Control module 800 is disposed within memory controller 310 (not shown). Control module 800 is illustrated as servicing sub-partitions within a single partition, but could be replicated for each of the numerous partitions that make up the unified memory. As is illustrated in FIG. 8, control module 800 can include a plurality of queues such as client request queues 830, 831 and 832, each providing temporary storage for incoming data from a given client. Arbiter 840 selects a given queue to have its request satisfied utilizing techniques already known to those skilled in the art.To the extent that a given client is subpartition aware (for example, a raster operations (ROP) client may be a subpartition-aware client) the arbiter 840 passes the mask list information on to state machine 850 and arranges for the generation of address control operations consistent with the data subpacket ordering required by the masks. At the same time write data can be supplied to a write data rotation element which operates on the principles described above so as to place the data from the selected client into an appropriate order for transmission to the memory partition whereby subpackets for each subpartition are paired together.The write data and address control information are supplied to subcontroller 870 that then operates to control the passage of the write data to the respective memory subpartitions of memory partition 880. Similarly, the subcontroller 870 using address control information generated by the state machine can provide for access to the memory partition whereby subpackets from respective subpartitions of the partition are accessed at the same time and provided as read data to read data rotation device 865. Read data rotation device 865 provides the opposite operation of the write data rotation. 
Namely, it takes the paired data subpackets and then sends them out in a given order as required by the output queue 890 for transfer to a given client.

It will be understood in regards to the operation of memory controller 310 that in some embodiments a subpartition aware client may submit and receive data in the order specified by the subpartition mask lists. In other embodiments, memory controller 310 may accept memory requests and perform the pairing itself.

One application of the present invention is in improving the efficiency of memory operation in graphics memories. In 3-D graphics, elements are represented by geometric shapes having particular characteristics, such as a triangle. Because the footprint of a triangle (or other polygon) is of irregular orientation, shape, and size, the area of the triangle on the memory access tile grid may partially cover tiles. For example, a vertex of a triangle may cover a portion of a memory access tile, but not the entire tile. As a result, memory accesses to these partially covered tiles transfer unwanted data, resulting in wasted memory bandwidth and loss of memory system efficiency. By reducing the memory access footprint in accordance with the present invention, memory transfers may more closely outline the needed triangle area and reduce the transfer of unwanted data. This also has the effect of reducing the number of memory accesses needed to retrieve just the information that is needed.

The architecture of the present invention provides the memory controller with the capability of tightly interleaving data to memory subpartitions so as to create a unified memory arrangement with improved efficiency in data accesses.
That is, even as the bus width for the memory data bus expands, providing interleaved access at a finer granular level makes it possible to ensure a more full or complete use of the overall bus width; that is, bus width is not wasted when smaller atoms of data need to be accessed, as for instance in connection with tile data processing.

Another benefit of embodiments of the present invention is that they permit a more efficient use of wider data bus widths by subpartitioning memories and then interleaving data accesses to and from such memories utilizing mask list information, where a given client seeking such access is aware of the subpartitioning structure.

Another benefit of the present invention is that in some embodiments it reduces the hardware complexity of a highly partitioned graphics memory. Each of the multiple subpartitions for a given partition shares the same controller 800 hardware, thereby expanding the bus width for a given partition without a corresponding expansion of controlling hardware.

Another benefit of the present invention is that in some embodiments it permits a reduction in the data transfer atom, or minimum data size, for compression of tile data. This benefits compression in several different ways. First, a small atom size permits higher compression ratios. Second, a smaller atom size permits a reduction in the data size of compressed tiles.

The present invention may also reduce package address pin count, because most of the address pins between subpartition A and B DRAMs are logically identical and may be physically shared. In an embodiment with eight subpartition subpackets per tile that pairs subpackets between A and B subpartitions within the same tile and between adjacent tiles, only three column address bits are required to be unique to each subpartition.
Additionally, the present invention allows the use of DRAMs with larger minimum burst transfer lengths while minimizing the data transfer atom. While an exemplary embodiment includes two subpartitions per partition, it will more generally be understood that embodiments of the present invention may include more than two subpartitions per partition.

While a single embodiment of the present invention has been described in connection with this application, variations on this embodiment would be readily understood by one of ordinary skill in the art. For instance, there could be an alternate number of partitions different from four (e.g., greater than four or less than four). Moreover, there could be an alternate number of total subpartitions to construct the unified memory. Additionally, a non-partitioned or single-partitioned memory system may be subpartitioned. In addition, the number of additional address lines in connection with the implementation may vary depending on the DRAM architecture employed.

The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the invention. Thus, the foregoing descriptions of specific embodiments of the invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.
It is intended that the following claims and their equivalents define the scope of the invention.
Systems and methods of managing power provide for applying a voltage from a voltage regulator to a component of a computing system and reducing the voltage based on a power saving parameter that is dedicated to the component. The reduction can be in conjunction with the entry of the component into a low power state such as a standby state or an off state, where the power saving parameter defines a voltage such as a minimum operating voltage or minimum sustainable voltage for the component, respectively. In one embodiment, the component is a central processing unit.
CLAIMS What is claimed is: 1. A method comprising: applying a voltage from a voltage regulator to a component of a computing system; and reducing the voltage based on a power saving parameter that is dedicated to the component. 2. The method of claim 1, wherein reducing the voltage includes: receiving a notification of a power saving event; and selecting the power saving parameter based on the notification. 3. The method of claim 2, further including: receiving the power saving parameter from either a manufacturing process, a basic input/output system (BIOS) process or an operating system-directed power management (OS-PM) process; storing the power saving parameter to a memory device; determining a control value for the power saving parameter; and applying the control value to the voltage regulator to reduce the voltage. 4. The method of claim 2, further including calculating the power saving parameter based on one or more operational values. 5. The method of claim 4, further including: receiving the operational values from either a manufacturing process, a basic input/output system (BIOS) process or an operating system-directed power management (OS-PM) process; storing the operational values to a memory device; retrieving the operational values from the memory device; and extrapolating from the operational values to the power saving parameter. 6. The method of claim 4, further including: measuring the operational values in accordance with a self-test of the component; storing the operational values to a memory device; retrieving the operational values from the memory device; and extrapolating from the operational values to the power saving parameter. 7. The method of claim 2, further including: measuring the power saving parameter in accordance with a self-test of the component; and storing the power saving parameter to a memory device. 8.
The method of claim 2, wherein reducing the voltage further includes: determining a control value for the power saving parameter; and applying the control value to the voltage regulator to reduce the voltage. 9. The method of claim 2, wherein receiving the notification includes receiving a notification of the component entering an off state, the power saving parameter including a minimum sustainable voltage. 10. The method of claim 2, wherein receiving the notification includes receiving a notification of the component entering an idle state, the power saving parameter including a minimum operational voltage. 11. The method of claim 2, further including modifying the power saving parameter based on a deterioration of the component. 12. The method of claim 1, wherein the reducing includes reducing the voltage based on a feedback signal from the voltage regulator. 13. The method of claim 1, wherein the applying includes applying a core voltage to a central processing unit. 14. An apparatus comprising: a controller to apply a voltage from a voltage regulator to a component of a computing system and reduce the voltage based on a power saving parameter that is dedicated to the component. 15. The apparatus of claim 14, further including multiplexing logic to receive a notification of a power saving event and select the power saving parameter based on the notification. 16. The apparatus of claim 15, further including a memory device, the apparatus to receive the power saving parameter from either a manufacturing process, a basic input/output system (BIOS) process or an operating system-directed power management (OS-PM) process and store the power saving parameter to the memory device. 17. The apparatus of claim 15, further including calculation logic to calculate the power saving parameter based on one or more operational values. 18.
The apparatus of claim 17, further including a memory device, the apparatus to receive the operational values from either a manufacturing process, a basic input/output system (BIOS) process or an operating system-directed power management (OS-PM) process, and store the operational values to the memory device, the calculation logic to retrieve the operational values from the memory device and extrapolate from the operational values to the power saving parameter. 19. The apparatus of claim 17, further including: test logic to measure the operational values in accordance with a self-test of the component; and a memory device, the apparatus to store the operational values to the memory device, the calculation logic to retrieve the operational values from the memory device and extrapolate from the operational values to the power saving parameter. 20. The apparatus of claim 15, further including: test logic to measure the power saving parameter in accordance with a self-test of the component; and a memory device, the apparatus to store the power saving parameter to the memory device. 21. The apparatus of claim 15, wherein the multiplexing logic is to determine a control value for the power saving parameter and apply the control value to the voltage regulator to reduce the voltage. 22. The apparatus of claim 15, wherein the power saving event is to include the component entering an off state and the power saving parameter is to include a minimum sustainable voltage. 23. The apparatus of claim 15, wherein the power saving event is to include the component entering an idle state and the power saving parameter is to include a minimum operational voltage. 24. The apparatus of claim 15, wherein the apparatus is to modify the power saving parameter based on a deterioration of the component. 25.
A system comprising: a power supply subsystem; a central processing unit (CPU); and a voltage regulator coupled to the power supply subsystem and the CPU, the voltage regulator having a controller to apply a core voltage from the voltage regulator to the CPU and reduce the voltage based on a power saving parameter that is dedicated to the CPU. 26. The system of claim 25, wherein the CPU includes multiplexing logic to receive a notification of a power saving event and select the power saving parameter based on the notification. 27. The system of claim 26, wherein the CPU further includes a memory device, the CPU to receive the power saving parameter from either a manufacturing process, a basic input/output system (BIOS) process or an operating system-directed power management (OS-PM) process and store the power saving parameter to the memory device. 28. The system of claim 26, wherein the CPU further includes calculation logic to calculate the power saving parameter based on one or more operational values. 29. The system of claim 28, further including a memory device, the CPU to receive the operational values from at least one of a manufacturing process, a basic input/output system (BIOS) process or an operating system-directed power management (OS-PM) process, and store the operational values to the memory device, the calculation logic to retrieve the operational values from the memory device and extrapolate from the operational values to the power saving parameter. 30. The system of claim 28, wherein the CPU further includes: test logic to measure the operational values in accordance with a self-test of the CPU; and a memory device, the CPU to store the operational values to the memory device, the calculation logic to retrieve the operational values from the memory device and extrapolate from the operational values to the power saving parameter. 31.
The system of claim 26, wherein the CPU further includes: test logic to measure the power saving parameter in accordance with a self-test of the CPU; and a memory device, the CPU to store the power saving parameter to the memory device. 32. The system of claim 26, wherein the power saving event is to include the CPU entering an off state and the power saving parameter is to include a minimum sustainable voltage. 33. The system of claim 26, wherein the power saving event is to include the CPU entering an idle state and the power saving parameter is to include a minimum operational voltage. 34. A method comprising: receiving a power saving parameter from either a manufacturing process, a basic input/output system (BIOS) process or an operating system-directed power management (OS-PM) process; storing the power saving parameter to a memory device; applying a voltage from a voltage regulator to a component of a computing system; receiving a notification of a power saving event; selecting the power saving parameter based on the notification; determining a control value for the power saving parameter; and applying the control value to the voltage regulator to reduce the voltage based on the power saving parameter, the power saving parameter being dedicated to the component. 35. The method of claim 34, wherein receiving the notification includes receiving a notification of the component entering an off state, the power saving parameter including a minimum sustainable voltage. 36. The method of claim 34, wherein receiving the notification includes receiving a notification of the component entering an idle state, the power saving parameter including a minimum operational voltage. 37. The method of claim 34, wherein the applying includes applying a core voltage to a central processing unit.
CONTROLLING STANDBY POWER OF LOW POWER DEVICES

BACKGROUND

Technical Field

[0001] One or more embodiments of the present invention generally relate to power management. In particular, certain embodiments relate to reducing the voltage supplied to a computing system component.

Discussion

[0002] As the trend toward advanced central processing units (CPUs) with more transistors and higher frequencies continues to grow, computer designers and manufacturers are often faced with corresponding increases in power and energy consumption. Furthermore, manufacturing technologies that provide faster and smaller components can at the same time result in increased leakage power. Particularly in mobile computing environments, increased power consumption can lead to overheating, which may negatively affect performance, and can significantly reduce battery life. Because batteries typically have a limited capacity, running the components of a mobile computing system may drain the capacity more quickly than desired.

[0003] Some modern mobile computing systems take into consideration the dynamic nature of computer applications in order to conserve battery capacity. For example, many computer applications cause the CPU to consume relatively high power at high performance for short periods of time, while requiring relatively low power operation the rest of the time (e.g., idle while waiting for user input). By limiting the high frequency and high voltage operation of the CPU to the time periods in which high performance is needed, the computing system can conserve a significant amount of power. For example, when the CPU anticipates being idle, the CPU can instruct the voltage regulator to drop the core voltage to a minimum operating voltage. Similarly, when the CPU is to be turned off, the core voltage can be further dropped to a minimum sustainable voltage that is programmed into the voltage regulator during the CPU manufacturing or board assembly process.
The minimum sustainable voltage maintains the internal state of the CPU. Since active and leakage power are closely related to voltage, reducing the voltage can enable greater power savings, lower temperatures and longer battery life. While the above approach has been acceptable under certain circumstances, there still remains considerable room for improvement.

[0004] In particular, manufactured components tend to exhibit slightly different characteristics from one part to the next. For example, two CPU parts resulting from the same manufacturing process may have different minimum sustainable voltages. Conventional power management approaches, however, select a "worst case" minimum sustainable voltage for all CPUs of a given type and use this value to program the voltage regulator. Thus, a non-optimal minimum sustainable voltage is shared among all instances of a given computing system component. The same is true for other power saving parameters such as the minimum operating voltage. As a result, the majority of parts use non-optimal power saving parameters, which often results in missed power saving opportunities. Furthermore, conventional approaches do not permit the CPU to change the preset minimum value and therefore have limited ability to tailor the voltage regulator to individual components rather than a group of components.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] The various advantages of the embodiments of the present invention will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:

[0006] FIG. 1 is a block diagram of an example of a power management apparatus according to one embodiment of the invention;

[0007] FIG. 2 is a block diagram of an example of a plurality of computing system components having dedicated power saving parameters according to one embodiment of the invention;

[0008] FIG.
3A is a block diagram of an example of a computing system component according to one embodiment of the invention;

[0009] FIG. 3B is a block diagram of an example of a voltage regulator according to one embodiment of the invention;

[0010] FIG. 4A is a block diagram of an example of a computing system component according to an alternative embodiment of the invention;

[0011] FIG. 4B is a block diagram of an example of a voltage regulator according to an alternative embodiment of the invention;

[0012] FIG. 5 is a block diagram of an example of a mobile computing system according to one embodiment of the invention;

[0013] FIG. 6 is a flowchart of an example of a method of managing power according to one embodiment of the invention;

[0014] FIGS. 7A through 7D are flowcharts of examples of processes of reducing voltage according to embodiments of the invention; and

[0015] FIG. 8 is a flowchart of an example of a process of selecting a power saving parameter according to one embodiment of the invention.

DETAILED DESCRIPTION

[0016] FIG. 1 shows an apparatus having a controller 20, which applies a voltage from a voltage regulator 22 to a component 24 of a computing system (not shown). While certain embodiments will be described with regard to a computing system component that is a central processing unit (CPU), the embodiments of the invention are not so limited. Indeed, the component 24 could include core logic, a dynamic random access memory (DRAM), a modem, a hard disk drive (HDD), a compact disk read only memory (CDROM), or any other computing system component for which power management is an issue of concern. Notwithstanding, there are a number of aspects of CPUs for which the embodiments of the invention are well suited.
Similarly, the voltage regulator 22 may include a switching regulator having a metal oxide semiconductor field effect transistor (MOSFET) driver, a switching transistor stack, a bulk capacitor, etc., but other types of voltage regulators can be used without departing from the spirit and scope of the embodiments of the invention. The voltage regulator 22 and computing system component 24 may be on the same or separate dies.

[0017] In conjunction with a transition of the component 24 to a low power state, the controller 20 is able to reduce the voltage applied to the component 24 based on a power saving parameter 26 that is dedicated to the component 24. The power saving parameter 26 may be a minimum sustainable voltage, which maintains the internal state of the component 24, a minimum operating voltage, and so on. Thus, the power saving parameter 26 might be a voltage level such as 620mV or 530mV. The power saving parameter 26 could also be a power optimized value that is not necessarily the minimum voltage level. For example, it may be determined that although the actual minimum sustainable voltage for a part is 700mV, a voltage of 750mV should be used because such a voltage is optimal from the perspective of some other power saving parameter.

[0018] By dedicating the power saving parameter 26 to the component 24, the illustrated approach enables the parameter 26 to be closely tailored to the individual characteristics of the component 24. This point is more clearly demonstrated in FIG. 2, which shows a plurality of computing system components 28 (28a-28n) having a corresponding plurality of power saving parameters 30 (30a-30n), where each power saving parameter 30 is dedicated to its corresponding component 28. Thus, the components 28 do not share a power saving parameter that is non-optimal to one or more of the individual components.
For example, the component 28b may be able to support a minimum voltage that is lower than the minimum voltage supported by the component 28a. In such a case, the power saving parameter 30b would reflect the lower minimum voltage, which in turn would enable reduced leakage current and greater power savings for the component 28b. Simply put, each computing system component 28 is able to have a low power mode that is based on its own internal characteristics.

[0019] Turning now to FIG. 3A, an architecture 32 is shown in which a voltage regulator 34 has a controller 36 that applies a voltage from the voltage regulator 34 to a computing system component 38 and reduces the applied voltage based on a power saving parameter that is dedicated to the component 38. In the illustrated example, the component 38 includes multiplexing logic 40 that receives a notification of a power saving event and identifies the power saving parameter based on the notification. The notification, which may be provided to the multiplexing logic 40 by way of a component sleep signal 42, a platform sleep signal 44, etc., could inform the multiplexing logic 40 of the component 38 entering an idle (e.g., standby) state, an off state or some other type of low power state. If the power saving event corresponds to the component 38 entering an idle state, the multiplexing logic 40 might identify a minimum operating voltage as the power saving parameter. Alternatively, if the power saving event corresponds to the component 38 entering an off state, the multiplexing logic 40 could identify a minimum sustainable voltage as the power saving parameter.
In either case, the identified voltage is dedicated to the component 38 and reflects the actual characteristics of the component 38 rather than a "worst case" value that is shared among multiple components.

[0020] The multiplexing logic 40 determines a control value 39 for the power saving parameter and applies the control value 39 to the voltage regulator 34 to reduce the voltage. The control value 39 can essentially establish a reference voltage to be used to control the internal switching of the voltage regulator 34.

[0021] The illustrated component 38 also includes a memory device 46, where the component 38 receives the power saving parameter from a process such as a manufacturing process 48, a basic input/output system (BIOS) process 50, an operating system-directed power management (OS-PM) process 52, etc., and stores the power saving parameter to the memory device 46. One example of the manufacturing process 48 would be a component manufacturing process in which the component 38 is fabricated and tested, and power saving parameter data is written to the memory device 46 based on the testing results. Another example of the manufacturing process 48 could be a board assembly process in which the component 38 is tested at the time of its assembly together with other components on a circuit board and the power saving parameter data is stored to the memory device 46.

[0022] The memory device 46 could be a register, programmable fuse, erasable programmable read only memory (EPROM/Flash), or any other suitable type of memory device. It should be noted that, depending upon the circumstances, multiple power saving parameters could be stored to the memory device 46, where the multiplexing logic 40 selects the appropriate parameter based on the notification received.
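The per-component storage and event-based selection described above can be sketched in software form. The voltage values, dictionary layout and function name below are hypothetical assumptions for illustration; an actual embodiment would implement this in the multiplexing logic hardware:

```python
# Dedicated parameters as they might be stored in a memory device for one
# particular component (values are hypothetical examples, not shared
# "worst case" values for the whole part family).
PARAMETERS = {
    "min_sustainable_mv": 530,  # holds internal state in the off state
    "min_operating_mv": 700,    # lowest workable voltage in the idle state
}

def select_parameter(event):
    """Pick the dedicated parameter that matches the notified event."""
    if event == "off":
        return PARAMETERS["min_sustainable_mv"]
    if event == "idle":
        return PARAMETERS["min_operating_mv"]
    raise ValueError("no power saving parameter for event: " + event)

print(select_parameter("off"), select_parameter("idle"))  # 530 700
```

Because the values live in a rewritable store rather than being fixed in the regulator, either entry can later be updated, for example to account for component deterioration.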
It should also be noted that using the memory device 46 to store the power saving parameter data provides much greater flexibility than conventional approaches because the voltage regulator 34 can effectively be programmed long after the manufacturing process. Indeed, the control value 39 can be based on a power saving parameter that changes throughout the life cycle of the architecture 32. For example, the OS-PM process 52 could determine that the power saving parameter should be increased due to deterioration of the component 38 over time. In such a case, the power saving parameter can be readily modified by storing a different value to the memory device 46.

[0023] FIG. 8 shows a process of identifying a power saving parameter at 54. The process 54 could be implemented in the multiplexing logic 40 (FIG. 3A) of the computing system component using any suitable hardware and/or software programming technique. For example, the process 54 may be implemented as an application specific integrated circuit (ASIC), as a set of instructions to be stored on a machine readable medium, or any combination thereof. In particular, the illustrated processing block 56 provides for determining whether the component is entering an off state based on the notification signal. If so, the minimum sustainable voltage for the component is selected at block 58. Processing block 60 provides for determining whether the component is entering an idle state based on the notification signal. If so, the minimum operating voltage is selected at block 62. Other minimum and/or power optimized voltages could also be used.

[0024] FIG. 3B demonstrates that an alternative architecture 64 can be used in which the multiplexing logic 40 and memory device 46 are incorporated into a voltage regulator 66.
In this embodiment, although the voltage regulator 66 still reduces the voltage applied to a computing system component 68 based on a power saving parameter that is dedicated to the component 68, the data relating to the power saving parameter is stored on the voltage regulator 66 rather than the computing system component 68. The power saving parameter value can be received from a manufacturing process 48, a BIOS process 50 or an OS-PM process 52, and can be subsequently changed as already discussed. The power saving parameter can also be received from a process 70 of the computing system component 68. As also already discussed, the voltage reduction is in response to a notification that may be provided to the multiplexing logic 40 by way of a component sleep signal 42, a platform sleep signal 44, etc.

[0025] Turning now to FIG. 4A, an alternative approach to determining the power saving parameter is shown at architecture 32'. In particular, the architecture 32' includes a computing system component 38' with calculation logic 72 to retrieve one or more operational values from a memory device 46' and calculate the power saving parameter based on the operational values. The operational values could be test results, fabrication process parameters, etc. For example, multiple voltages could be applied to the component 38', where one or more units of the component 38' are tested for failure. In the case of a CPU, one such unit might be a cache system (not shown), which is known to be one of the first units to fail if the core voltage of the CPU is too low. Thus, a set of operational values might reflect the test results: 200mV, fail; 300mV, fail; 400mV, fail; 520mV, pass... In such a case, the calculation logic 72 could extrapolate from the operational values to identify a minimum sustainable voltage that is between 400mV and 520mV. Alternatively, the calculation logic 72 could merely select the lowest pass value (e.g., 520mV) as the minimum sustainable voltage.
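The simpler of the two options for the calculation logic, taking the lowest passing voltage from the stored pass/fail results, can be sketched as follows. The result list mirrors the example values in the text; the function name and data layout are illustrative assumptions:

```python
# Operational values as (voltage_mv, passed) pairs from component testing,
# matching the example results quoted in the specification.
results = [(200, False), (300, False), (400, False), (520, True)]

def min_sustainable_from_tests(results):
    """Select the lowest voltage at which the component passed its test."""
    passing = [mv for mv, passed in results if passed]
    if not passing:
        raise RuntimeError("component never passed; cannot derive parameter")
    return min(passing)

print(min_sustainable_from_tests(results))  # 520
```

An extrapolating variant could instead interpolate between the highest failing voltage (400mV here) and the lowest passing voltage (520mV) to estimate a tighter minimum, at the cost of a small safety margin.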
[0026] While the operational values could be received from an external process such as a manufacturing process 48', a BIOS process 50' or an OS-PM process 52', as already discussed, an alternative approach would be to provide the component 38' with test logic 74 to measure the operational values in accordance with a self-test of the component 38'. The self-test could be similar to a power on self-test (POST), which is a diagnostic testing sequence run by the BIOS as the system's power is initially turned on. Although the POST typically determines whether the system's random access memory (RAM), disk drives, peripheral devices and other hardware components are working properly, the self-test could provide for testing of the CPU to determine the above-described operational values. The self-test could also include an iterative process in which increasingly higher voltages are applied to the component 38' until the component 38' passes. The voltage resulting in the successful iteration can be used as the power saving parameter.

[0027] The illustrated calculation logic 72 can also determine the power saving parameter based on a feedback signal 76, which is derived from the voltage received from the voltage regulator 34. Such a closed-loop approach would further enhance the reliability of the architecture 32'.

[0028] It should also be noted that the test logic 74 could alternatively measure the power saving parameter directly. In such a case, the power saving parameter can be provided to the multiplexing logic 40 from either the test logic 74 or the memory device 46', where the calculation logic 72 would not be necessary.

[0029] FIG. 4B shows another alternative embodiment in which an architecture 64' has a voltage regulator 66' with the calculation logic 72, memory device 46' and test logic 74 discussed above.
In such a case, the operational values could be obtained internally from the test logic 74 or externally from a computing system component process 70', manufacturing process 48', BIOS process 50', OS-PM process 52', etc. The calculation logic 72 may also base the power saving parameter on a feedback signal 76' from the computing system component 68'. For example, the test logic 74 could use the feedback signal 76' to determine whether a given voltage has resulted in a failure.

[0030] Turning now to FIG. 5, a particular implementation is shown in which many of the above-described features are incorporated into a mobile computing system 78. The mobile computing system 78 may be a notebook personal computer (PC), a personal digital assistant (PDA), a wireless "smart" phone and so on. The illustrated system 78 includes a power supply subsystem 80, a CPU 82 and a voltage regulator 34 coupled to the power supply subsystem 80 and the CPU 82. The power supply subsystem 80 includes an alternating current (AC) adaptor 86, which converts an AC voltage into a direct current (DC) voltage, a battery 88, which provides a DC voltage, and a selector 90, which selects between the AC adaptor 86 and the battery 88 as a source of power for the mobile computing system 78. The voltage regulator 34 steps the voltage from the selector 90 down to the desired core voltage Vcc.

[0031] The illustrated system 78 is similar to that of the architecture 32 (FIG. 3A) described above in that the voltage regulator 34 has a controller 36 that is capable of applying a voltage such as the core voltage to the CPU 82 and reducing the core voltage based on a power saving parameter that is dedicated to the CPU 82. In this embodiment, the CPU 82 includes the multiplexing logic 40, which receives a notification of a power saving event by way of a CPU sleep signal 42' or a platform sleep signal 44' and identifies the power saving parameter based on the notification.
The multiplexing logic 40 determines the control value 39 for the identified power saving parameter and applies the control value 39 to the voltage regulator 34 to reduce the core voltage. In the illustrated example, data relating to the power saving parameter can be received from a BIOS process 50 or OS-PM process 52 associated with the platform, or a manufacturing process 48 that is separate from the platform. As already noted, the voltage regulator 34 and CPU 82 may be incorporated into the same die or different dies. While the illustrated system 78 most nearly resembles the architecture 32 (FIG. 3A), the system 78 could be readily modified to more closely reflect the architecture 64 (FIG. 3B), the architecture 32' (FIG. 4A), the architecture 64' (FIG. 4B), or any combination thereof.

[0032] Turning now to FIG. 6, a method 92 of managing power is shown. The method 92 can be implemented in a computing system using any suitable hardware and/or software programming technique. For example, the method may be implemented as fixed functionality hardware, an application specific integrated circuit (ASIC), as a set of instructions to be stored on a machine readable medium, or any combination thereof. In particular, the method 92 provides for applying a voltage from a voltage regulator to a component of a computing system at processing block 94. The voltage is reduced at block 96 based on a power saving parameter that is dedicated to the component. As already noted, the power saving parameter could include a power saving voltage level such as a minimum/power optimized sustainable voltage or a minimum/power optimized operating voltage.

[0033] FIG. 7A shows one approach to reducing a voltage to a computing system component in greater detail at block 98. Block 98 can therefore be readily substituted for block 96 (FIG. 6) discussed above. In the illustrated example, reducing voltage can include "online" processes as well as "offline" processes.
For example, block 100 demonstrates that one approach is to receive the power saving parameter from a process such as a manufacturing process, a BIOS process, an OS-PM process, etc., offline. The power saving parameter is stored to a memory device at block 102, which can also be performed offline. [0034] The remaining processes of the illustrated block 98 are conducted online. In particular, block 104 provides for receiving a notification of a power saving event and block 106 provides for selecting a power saving parameter in response to the notification. A control value for the power saving parameter is determined at block 108 and the control value is applied to the voltage regulator at block 110 to reduce the voltage. [0035] Turning now to FIG. 7B, an alternative approach to reducing the voltage to the computing system component is shown at block 112. Thus, block 112 can also be substituted for block 96 (FIG. 6) discussed above. It can be seen that the online processes have not changed from the discussion of block 98 (FIG. 7A), but that the offline processes have changed. For example, block 114 provides for receiving one or more operational values rather than the power saving parameter itself. The operational values are stored to a memory device at block 116 and retrieved from the memory device at block 118. Block 120 provides for extrapolating from the operational values to the power saving parameter. [0036] FIG. 7C illustrates yet another approach to reducing the voltage to the computing system component at block 122, which can be substituted for block 96 (FIG. 6) discussed above. This embodiment is identical to block 112 (FIG. 7B) discussed above, except that the operational values are measured in accordance with a self-test of the component at block 124. [0037] FIG. 7D shows an additional approach to reducing the voltage to the computing system component at block 126, where block 126 can also be substituted for block 96 (FIG. 6) discussed above.
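Block 120's extrapolation step can be modeled as follows. This is a sketch under stated assumptions: the stored operational values are taken to be (frequency in MHz, minimum stable voltage in volts) pairs measured offline, and a simple linear least-squares fit is one plausible way to extrapolate a power saving voltage for an operating point that was not directly measured; the disclosure does not prescribe a particular extrapolation method.

```python
def extrapolate_power_saving_voltage(operational_values, target_freq_mhz):
    """Linear least-squares fit of voltage vs. frequency over the stored
    operational values, evaluated at the target frequency."""
    n = len(operational_values)
    sx = sum(f for f, _ in operational_values)
    sy = sum(v for _, v in operational_values)
    sxx = sum(f * f for f, _ in operational_values)
    sxy = sum(f * v for f, v in operational_values)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept + slope * target_freq_mhz

# Example: points measured offline at 600 and 1000 MHz, extrapolated to 1200 MHz.
points = [(600, 0.80), (1000, 1.00)]
```

With these illustrative points the fit gives 0.5 mV/MHz above a 0.5 V floor, so the extrapolated parameter at 1200 MHz is 1.10 V.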
In particular, block 126 is identical to block 98 (FIG. 7A), except that the power saving parameter is measured at block 128 in accordance with a self-test of the component rather than received from a process such as a manufacturing, BIOS or OS-PM process.[0038] The techniques described herein therefore provide a unique approach to power management that enables greater power conservation, longer battery life, lower temperatures and enhanced performance. For example, dedicating power saving parameters to computing system components allows each component to be optimized based on its own characteristics. Furthermore, enabling the computing system component to communicate the appropriate power saving parameter to the voltage regulator provides greater flexibility in achieving maximum power savings. [0039] Those skilled in the art can appreciate from the foregoing description that the broad techniques of the embodiments of the present invention can be implemented in a variety of forms. Therefore, while the embodiments of this invention have been described in connection with particular examples thereof, the true scope of the embodiments of the invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
In a multi-processor system, an executable software image including an image header and a segmented data image is scatter loaded from a first processor to a second processor. The image header contains the target locations for the data image segments to be scatter loaded into memory of the second processor. Once the image header has been processed, the data segments may be directly loaded into the memory of the second processor without further CPU involvement from the second processor.
1. A method of communication between a primary processor and a secondary processor in a multiprocessor system, the method comprising: initializing, by the secondary processor, communication with the primary processor; directing, by the secondary processor, the primary processor to transmit an image header of an executable software image, the executable software image including the image header and at least one data segment; receiving, by the secondary processor, the image header and the at least one data segment from the primary processor; and directly storing, by the secondary processor, the at least one data segment of the executable software image in a target location of a system memory of the secondary processor, the target location being assigned by the secondary processor in accordance with the image header.
2. The method of claim 1, wherein the image header comprises an image size and a location of image data in memory.
3. The method of claim 1, further comprising transmitting, by the secondary processor, a transfer request for each of the at least one data segment to the primary processor.
4. The method of claim 1, wherein the directing comprises transmitting, by the secondary processor, to the primary processor a message comprising an image identification, a data offset, and a data length.
5. The method of claim 1, further comprising setting, by the secondary processor, a receive buffer for incoming data segments as a destination address in the system memory of the secondary processor.
6. A multiprocessor device comprising: a primary processor; and a secondary processor coupled to the primary processor, wherein the secondary processor is configured to: initialize communication with the primary processor; direct the primary processor to transmit an image header of an executable software image, the executable software image including the image header and at least one data segment; receive the image header and the at least one data segment from the primary processor; and directly store the at least one data segment of the executable software image in a target location of a system memory of the secondary processor, the target location being assigned by the secondary processor in accordance with the image header.
7. The multiprocessor device of claim 6, wherein the image header comprises an image size and a location of image data in memory.
8. The multiprocessor device of claim 6, wherein the secondary processor is further configured to transmit a transfer request for each of the at least one data segment to the primary processor.
9. The multiprocessor device of claim 6, wherein, to direct the primary processor to transmit the image header of the executable software image, the secondary processor is further configured to transmit to the primary processor a message comprising an image identification, a data offset, and a data length.
10. The multiprocessor device of claim 6, wherein the secondary processor is further configured to set a receive buffer for incoming data segments as a destination address in the system memory of the secondary processor.
11. A non-transitory computer readable medium having program code recorded thereon, the program code comprising: program code for initializing, by a secondary processor in a multiprocessor system, communication with a primary processor in the multiprocessor system; program code for directing, by the secondary processor, the primary processor to transmit an image header of an executable software image, the executable software image including the image header and at least one data segment; program code for receiving, by the secondary processor, the image header and the at least one data segment from the primary processor; and program code for directly storing, by the secondary processor, the at least one data segment of the executable software image in a target location of a system memory of the secondary processor, the target location being assigned by the secondary processor in accordance with the image header.
12. The non-transitory computer readable medium of claim 11, wherein the image header comprises an image size and a location of image data in memory.
13. The non-transitory computer readable medium of claim 11, the program code further comprising program code for transmitting, by the secondary processor, to the primary processor a transfer request for each of the at least one data segment.
14. The non-transitory computer readable medium of claim 11, wherein the program code for directing comprises program code for transmitting, by the secondary processor, to the primary processor a message comprising an image identification, a data offset, and a data length.
15. The non-transitory computer readable medium of claim 11, the program code further comprising program code for setting, by the secondary processor, a receive buffer for incoming data segments as a destination address in the system memory of the secondary processor.
16. A method of communication between a primary processor and a secondary processor in a multiprocessor system, the method comprising: receiving, by the primary processor, a request to transmit an image header of an executable software image, the executable software image including the image header and at least one data segment; and transmitting, by the primary processor, the image header and the at least one data segment to the secondary processor, the at least one data segment of the executable software image being directly stored in a target location in a system memory of the secondary processor, the target location being assigned in accordance with the image header.
17. The method of claim 16, wherein the image header comprises an image size and a location of image data in memory.
18. The method of claim 16, further comprising receiving, by the primary processor, a transfer request sent to the primary processor for each of the at least one data segment.
19. The method of claim 16, further comprising receiving, by the primary processor, a message from the secondary processor comprising an image identification, a data offset, and a data length.
20. The method of claim 19, further comprising transmitting data based on the message.
21. A multiprocessor device comprising: a primary processor; and a secondary processor coupled to the primary processor, wherein the primary processor is configured to: receive a request to transmit an image header of an executable software image, the executable software image including the image header and at least one data segment; and transmit the image header and the at least one data segment to the secondary processor, the at least one data segment of the executable software image being directly stored in a target location in a system memory of the secondary processor, the target location being assigned in accordance with the image header.
22. The multiprocessor device of claim 21, wherein the image header comprises an image size and a location of image data in memory.
23. The multiprocessor device of claim 21, wherein the primary processor is further configured to receive a transfer request sent to the primary processor for each of the at least one data segment.
24. The multiprocessor device of claim 21, wherein the primary processor is further configured to receive a message from the secondary processor comprising an image identification, a data offset, and a data length.
25. The multiprocessor device of claim 24, wherein the primary processor is further configured to transmit data based on the message.
26. A non-transitory computer readable medium having program code recorded thereon, the program code comprising: program code for receiving, by a primary processor in a multiprocessor system, a request to transmit an image header of an executable software image, the executable software image including the image header and at least one data segment; and program code for transmitting, by the primary processor, the image header and the at least one data segment to a secondary processor of the multiprocessor system, the at least one data segment of the executable software image being directly stored in a target location in a system memory of the secondary processor, the target location being assigned in accordance with the image header.
27. The non-transitory computer readable medium of claim 26, wherein the image header comprises an image size and a location of image data in memory.
28. The non-transitory computer readable medium of claim 26, wherein the program code further comprises program code for receiving, by the primary processor, a transfer request sent to the primary processor for each of the at least one data segment.
29. The non-transitory computer readable medium of claim 26, wherein the program code further comprises program code for receiving, by the primary processor, a message from the secondary processor comprising an image identification, a data offset, and a data length.
30. The non-transitory computer readable medium of claim 29, wherein the program code further comprises program code for transmitting data based on the message.
Distributing executable software images directly from a primary processor to one or more secondary processors in a multiprocessor system
Information about the divisional application
This application is a divisional application. Its parent application was filed on March 22, 2011, under application number 201180014509.6, with the invention title "Distributing executable software images directly from a primary processor to one or more secondary processors in a multiprocessor system."
Cross-reference to related applications
This application claims the benefit of U.S. Provisional Patent Application No. 61/316,369, filed on March 22, 2010, in the name of MALAMANT et al.; U.S. Provisional Patent Application No. 61/324,035, filed on April 14, 2010, in the name of GUPTA et al.; U.S. Provisional Patent Application No. 61/324,122, filed on April 14, 2010, in the name of GUPTA et al.; and U.S. Provisional Patent Application No. 61/325,519, filed on April 19, 2010; the disclosures of these applications are hereby incorporated by reference in their entirety.
Technical field
The following description relates generally to multiprocessor systems and, more particularly, to multiprocessor systems in which a primary processor is coupled to a non-volatile memory storing executable software images for one or more other processors (referred to herein as "secondary" processors) in the system, each of the one or more other processors being coupled to a dedicated volatile memory, wherein the executable software image in a segmented format is efficiently transferred from the primary processor to the secondary processor (e.g., using a direct scatter loading process).
Background
A processor executes software code to perform operations. A processor may require certain software code, commonly referred to as boot code, to be executed for booting.
In a multiprocessor system, each processor may require corresponding boot code for booting. As an example, in a smartphone device that includes an application processor and a modem processor, each of the processors may have corresponding boot code for booting.
A problem arises in the large number of devices (such as smartphones) that incorporate multiple processors, for example a stand-alone application processor chip integrated with a separate modem processor chip. Dedicated flash/non-volatile memory components may be used for each of the processors, since each processor requires non-volatile memory (e.g., persistent storage) for its executable images and file systems. For example, the boot code of a processor can be stored in the processor's corresponding non-volatile memory (e.g., flash memory, read-only memory (ROM), etc.), and upon power up the processor loads the boot code software from its corresponding non-volatile memory for execution. Thus, in this type of architecture, executable software (e.g., a processor's boot code) is not required to be loaded to the processor from another processor in the system.
However, adding dedicated non-volatile memory to each processor takes up more board space, increasing board size. Some designs may use a combination of random access memory (RAM) and flash memory (where the RAM and flash devices are stacked in a single package) to reduce board size. While multi-chip packaging solutions do reduce the required board footprint to some extent, the approach can increase cost.
In some multiprocessor systems, it may be required to load software from one processor to another.
For example, assume that a first processor in a multiprocessor system is responsible for storing, in its non-volatile memory, the boot code for one or more other processors in the system; upon power up, the first processor is tasked with loading the corresponding boot code to the other processors, as opposed to the boot code residing in non-volatile memory of the other processors themselves. In this type of system, software (e.g., a boot image) is downloaded from the first processor to the other processors (e.g., into the volatile memory of the other processors), and the receiving processors thereafter boot from the downloaded image.
Typically, the software image to be loaded is a binary multi-segment image. For example, a software image can include a header followed by multiple segments of code. When loading a software image from an external device (e.g., from another processor) to a target device (e.g., a target processor), there may be an intermediate step in which the binary multi-segment image is transferred to system memory and then later transferred to the target location by a boot loader.
In a system in which a software image is loaded from a first "primary" processor to a target "secondary" processor, one way to perform this loading is to allocate a temporary buffer into which each packet is received; each packet has associated packet header information as well as a payload, where the payload is the actual image data. From the temporary buffer, some processing can be done on the payload, and the payload is then copied to its final destination. The temporary buffer resides somewhere in system memory, such as internal random access memory (RAM) or double data rate (DDR) memory.
Therefore, in the case of using an intermediate buffer, data being downloaded from the primary processor to the secondary processor is first copied into the intermediate buffer.
In this manner, the buffer is used to receive portions of the image data from the primary processor, and the image data can then be scattered from the buffer into a memory of the secondary processor (e.g., a volatile memory).
The primary processor and its non-volatile memory storing the boot image for the secondary processor may be implemented on a different chip than the chip on which the secondary processor is implemented. Thus, in order to transfer data from the non-volatile memory of the primary processor to the secondary processor (e.g., to a volatile memory of the secondary processor), packet-based communication may be used, where a packet header is included in each packet transferred to the secondary processor. The packet is stored in an intermediate buffer, and some processing of the received packet is then required so that the data can be stored where it is needed (e.g., within the volatile memory of the secondary processor).
Summary of the invention
The present invention provides a multiprocessor system. The system includes a secondary processor having a system memory and a hardware buffer for receiving at least a portion of an executable software image. The secondary processor includes a scatter loader controller for loading the executable software image directly from the hardware buffer to the system memory. The system also includes a primary processor coupled to a memory. The memory stores the executable software image for the secondary processor. The system further includes an interface communicatively coupling the primary processor and the secondary processor, the executable software image being received by the secondary processor via the interface.
The invention also provides a method. The method includes receiving, at a secondary processor, from a primary processor via an inter-chip communication bus, an image header of an executable software image for the secondary processor, the executable software image being stored in a memory coupled to the primary processor.
The executable software image includes the image header and at least one data segment. The method also includes processing, by the secondary processor, the image header to determine at least one location, within a system memory coupled to the secondary processor, at which to store the at least one data segment. The method also includes receiving, at the secondary processor, the at least one data segment from the primary processor via the inter-chip communication bus. Still further, the method includes loading, by the secondary processor, the at least one data segment directly into the determined at least one location within the system memory.
The present invention provides an apparatus. The apparatus includes means for receiving, at a secondary processor, from a primary processor via an inter-chip communication bus, an image header of an executable software image for the secondary processor, the executable software image being stored in a memory coupled to the primary processor. The executable software image includes the image header and at least one data segment. The apparatus also includes means for processing, by the secondary processor, the image header to determine at least one location, within a system memory coupled to the secondary processor, at which to store the at least one data segment. The apparatus further includes means for receiving, at the secondary processor, the at least one data segment from the primary processor via the inter-chip communication bus. Still further, the apparatus includes means for loading, by the secondary processor, the at least one data segment directly into the determined at least one location within the system memory.
The present invention provides a multiprocessor system. The system includes a primary processor coupled to a first non-volatile memory.
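The receive-and-scatter flow summarized above can be modeled in a few lines of Python. This is a simplified sketch: the header layout (a list of segment records carrying offset, size, and target address), the helper names, and the use of a bytearray to stand in for the secondary processor's system memory are all illustrative assumptions, not the actual image format.

```python
def scatter_load(image_header, read_segment, system_memory):
    """Process the image header, then place each data segment directly at
    its target location in system memory -- no intermediate buffer."""
    for offset, size, target in image_header:
        data = read_segment(offset, size)       # e.g. requested over the bus
        system_memory[target:target + size] = data

# "Primary processor" side: serves raw image bytes on request.
image = bytes(range(48))
header = [(0, 16, 100), (16, 32, 200)]          # (offset, size, target addr)
memory = bytearray(512)
scatter_load(header, lambda off, sz: image[off:off + sz], memory)
```

After the call, the two segments sit at their (non-contiguous) target locations in the modeled system memory, which is the essential property of direct scatter loading.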
The first non-volatile memory is coupled specifically to the primary processor and stores a file system for the primary processor and executable images for the primary and secondary processors. The system also includes a secondary processor coupled to a second non-volatile memory. The second non-volatile memory is coupled specifically to the secondary processor and stores configuration parameters and a file system for the secondary processor. The system further includes an interface communicatively coupling the primary processor and the secondary processor, the executable software image being received by the secondary processor via the interface.
The present invention provides a multiprocessor system. The system includes a primary processor coupled to a first non-volatile memory. The first non-volatile memory is coupled specifically to the primary processor and stores executable images and file systems for the primary and secondary processors. The system also includes a secondary processor. The system further includes an interface communicatively coupling the primary processor and the secondary processor, the executable software image being received by the secondary processor via the interface.
The present invention provides a method comprising sending an executable software image for a secondary processor from a memory coupled to a primary processor, via an interface communicatively coupling the primary processor to the secondary processor. The method also includes receiving the executable software image at the secondary processor.
The method further includes executing the executable software image at the secondary processor.
Drawings
For a more complete understanding of the teachings of the present invention, reference should now be made to the following descriptions taken in conjunction with the accompanying drawings.
FIG. 1 is an illustration of an exemplary apparatus in which aspects of the present invention may be implemented.
FIG. 2 is an illustration of an exemplary apparatus in which aspects of the invention may be practiced.
FIG. 3 is an illustration of an operational flow of an exemplary loading process for loading an executable image from a primary processor to a secondary processor in accordance with an aspect of the present invention.
FIG. 4 is a flow chart illustrating a scatter loading method in accordance with an aspect of the present invention.
FIG. 5 is a block diagram showing an exemplary wireless communication system in which embodiments of the present invention may be advantageously employed.
Detailed description
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.
Certain aspects disclosed herein relate to a multiprocessor system in which one primary processor is coupled to a non-volatile memory that stores executable images for one or more other processors (referred to herein as "secondary" processors) in the system. In this multiprocessor system, each of the secondary processors can be connected to a dedicated volatile memory for storing executable images, runtime data, and (optionally) file system images.
The executable images are typically stored in a segmented format in which each segment can be loaded into a different memory region. The target memory locations of the executable segments may or may not be contiguous with respect to each other.
One example of a multi-segment image format is the Executable and Linkable Format (ELF), which allows an executable image to be decomposed into multiple segments, each of which can be loaded into a different system memory location.
In one exemplary aspect, a direct scatter loading technique for loading a segmented image from a non-volatile memory of a primary processor to a volatile memory of a secondary processor is disclosed. As discussed further below, the direct scatter loading technique avoids the use of temporary buffers. For example, in one aspect, instead of using packet-based communication in which the image is transmitted in packets that each contain a respective header, the raw image data is loaded from the primary processor to the secondary processor. In another aspect, a header containing information for determining target location information for the data is used.
Exemplary multi-processor architecture with centralized non-volatile memory - reduced localized non-volatile memory for file system
FIG. 1 illustrates a block diagram of a first multiprocessor architecture 102 in which a primary processor (application processor 104) hosts a primary (large) non-volatile memory 106 (e.g., NAND flash memory) and a secondary processor (e.g., modem processor 110) has a secondary (reduced or minimized) non-volatile memory 114 (e.g., NOR flash memory).
In communication device architecture 102, application processor 104 is coupled to primary non-volatile memory 106 and application processor volatile memory 108 (e.g., random access memory). Modem processor 110 is coupled to secondary non-volatile memory 114 and modem processor volatile memory 112. Inter-processor communication bus 134 allows communication between application processor 104 and modem processor 110.
Modem executable image 120 for modem processor 110 may be stored in AP non-volatile memory 106 along with application processor (AP) executable image 118 and AP file system 116.
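The ELF format mentioned above carries segment target information in its program headers. The sketch below shows how a loader could derive a scatter list (file offset, size, physical load address) from a 32-bit little-endian ELF image, using only the standard e_phoff/e_phentsize/e_phnum header fields and PT_LOAD program header entries defined by the ELF specification; the minimal fake image built at the end, and the choice of p_paddr as the target address, are illustrative assumptions for the example.

```python
import struct

PT_LOAD = 1  # loadable segment type, per the ELF specification

def elf32_scatter_list(image):
    """Return a (file_offset, size, physical_address) scatter list for each
    PT_LOAD program header of a little-endian 32-bit ELF image."""
    e_phoff, = struct.unpack_from("<I", image, 28)      # program header table offset
    e_phentsize, e_phnum = struct.unpack_from("<HH", image, 42)
    segments = []
    for i in range(e_phnum):
        base = e_phoff + i * e_phentsize
        p_type, p_offset, _p_vaddr, p_paddr, p_filesz = struct.unpack_from(
            "<5I", image, base)
        if p_type == PT_LOAD:
            segments.append((p_offset, p_filesz, p_paddr))
    return segments

# Build a minimal fake ELF image: only the fields read above, plus one
# PT_LOAD entry (offset 0x100, 0x40 bytes, load address 0x80000000).
elf = bytearray(52)                         # 52-byte ELF32 header region
struct.pack_into("<I", elf, 28, 52)         # e_phoff: table right after header
struct.pack_into("<HH", elf, 42, 32, 1)     # e_phentsize, e_phnum
elf += struct.pack("<8I", PT_LOAD, 0x100, 0, 0x80000000, 0x40, 0x40, 0, 4)
```

Once such a scatter list is extracted from the image header, each segment can be requested from the primary processor and written directly to its target address, with no intermediate copy.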
Application processor 104 may load its AP executable image 118 into application processor volatile memory 108 and store it as AP executable image 122. The application processor volatile memory 108 can also be used to store AP runtime data 124.
Modem processor 110 has a dedicated secondary (reduced or minimized) non-volatile memory 114 (e.g., NOR flash) for storage of its file system 128. The reduced (or minimized) non-volatile memory 114 is thus smaller and less expensive than a flash device that would be capable of storing both the runtime modem executable image 120 and the file system 128.
Upon system power up, the modem processor 110 executes its primary boot loader (PBL) from the hardware boot ROM 126 (a small read-only on-chip memory). The modem PBL can be adapted to download the modem executable image 120 from the application processor 104. That is, modem processor 110 requests the modem executable image 120 (initially stored in primary non-volatile memory 106) from application processor 104. The application processor 104 retrieves the modem executable image 120 and provides it to the modem processor 110 via an inter-processor communication bus 134 (e.g., an inter-chip communication bus). Modem processor 110 stores modem executable image 132 directly into modem processor RAM (random access memory) 112 at its final destination, without copying the data into a temporary buffer in modem processor RAM 112.
The inter-processor communication bus 134 can be, for example, an HSIC bus (USB-based high-speed inter-chip interface), an HSI bus (MIPI high-speed synchronous interface), an SDIO bus (secure digital I/O interface), a UART bus (universal asynchronous receiver/transmitter), an SPI bus (serial peripheral interface), an I2C bus (inter-integrated circuit), or any other hardware interface suitable for inter-chip communication that is available on both modem processor 110 and application processor 104.
Once the modem executable image 120 is downloaded into the modem processor RAM 112 and verified, it is maintained as modem executable image 132. Additionally, modem processor volatile memory 112 may also store modem runtime data 130. The modem boot ROM code 126 can then jump into the modem executable image 132 and begin executing the main modem program from the modem processor RAM 112. Any persistent (non-volatile) data, such as radio frequency (RF) calibration and system parameters, can be stored in the modem file system 128 using the secondary (reduced or minimized) non-volatile memory 114 attached to the modem processor 110.
Exemplary multi-processor architecture with centralized non-volatile memory - no local non-volatile memory for file system
FIG. 2 illustrates a block diagram of a second multiprocessor architecture 202 in which a primary processor (application processor 204) hosts a primary (large) non-volatile memory 206 (e.g., NAND flash memory). The primary non-volatile memory 206 can store a modem executable image 214 and/or a modem file system 220 for a secondary processor (modem processor 210). The secondary processor (modem processor 210) can be configured to request the modem executable image 214 and/or the modem file system 220 from the primary processor 204.
The primary processor 204 then retrieves the requested modem executable image 214 and/or modem file system 220 from the non-volatile memory 206 and provides it to the secondary processor 210 via the interprocessor communication bus 234.

In this architecture 202, the application processor 204 is coupled to the non-volatile memory 206 and to application processor volatile memory 208 (e.g., random access memory). The modem processor 210 is coupled to modem processor volatile memory 212 and has no non-volatile memory of its own. The modem processor volatile memory 212 stores a file system image 228, a modem executable image 236, and modem runtime data 230. An inter-processor communication bus 234 allows communication between the application processor 204 and the modem processor 210.

All executable images 214 and the file system 220 for the modem processor 210 may be stored in the non-volatile memory 206 along with the AP executable image 218 and the AP file system 216. The application processor 204 can load its AP executable image 218 into the application processor volatile memory 208 and store it as AP executable image 222. The application processor volatile memory 208 can also be used to store AP runtime data 224. The modem file system can be encrypted with a private key of the modem processor for privacy protection and to prevent subscriber identity cloning.

After the system is powered up, the modem boot ROM code 226 downloads the modem executable image 214 and the modem file system 220 from the application processor 204 into the modem processor volatile memory 212. During normal operation, any read access to the modem file system 228 is served from the modem processor volatile memory 212, and any write access is likewise performed in the modem processor volatile memory 212. Additionally, background processes may run on the modem processor 210 and the application processor 204 to keep the contents of the file system 228 in the modem processor volatile memory 212 synchronized with the modem file system 220 stored on the non-volatile memory 206.

The primary and secondary processors can periodically synchronize the file system in the volatile memory of the secondary processor with the corresponding file system in the primary non-volatile memory. An initial write to the modem file system 228 can start a timer (e.g., a ten-minute timer) in the modem processor 210. While this timer is running, all writes to the file system 228 are coalesced in the modem processor volatile memory 212. Upon expiration of the timer, the modem processor 210 copies the file system image 228 from the volatile memory 212, encrypts it, and alerts the application processor 204 that new data is available. The application processor 204 reads the encrypted copy and writes it to the modem file system 220 in the non-volatile memory 206. The application processor 204 then signals the modem processor 210 that the write operation is complete. If the synchronization operation fails, the current version of the modem file system can still be used. Synchronization can occur periodically (e.g., every ninety seconds) or some time after the modem writes to its file system. To guard against situations such as sudden power removal, two copies of the modem file system 220 may be stored.

The modem processor 210 may also initiate a "flush" operation of the file system image 228 to the non-volatile memory 206 of the application processor. This can occur for a variety of reasons, including phone power-down and sending an acknowledgment message to the network to indicate acceptance and storage of an incoming SMS message.

File system read operations on the modem processor 210 are serviced from the modem processor volatile memory 212, which reflects the current state of the modem file system.
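The timer-driven write coalescing described above can be sketched in C. This is a minimal host-side model, not actual modem firmware: the timer is simulated with integer timestamps, the encryption step and the inter-processor signalling are omitted, and all names (`fs_write`, `fs_sync_if_due`, the buffer sizes) are hypothetical.

```c
#include <stdint.h>
#include <string.h>

#define FS_SIZE 64
#define SYNC_DELAY 600   /* stands in for the ten-minute timer, in seconds */

/* Modem-side file system image held in volatile memory (RAM). */
static uint8_t ram_fs[FS_SIZE];
/* AP-side copy standing in for modem file system 220 in non-volatile memory. */
static uint8_t nv_fs[FS_SIZE];

static int timer_armed;         /* set by the first write after a sync */
static uint32_t timer_deadline; /* simulated expiry time of that timer */

/* All writes land in RAM only; the first write arms the sync timer. */
void fs_write(uint32_t now, uint32_t off, const uint8_t *src, uint32_t len)
{
    memcpy(&ram_fs[off], src, len);
    if (!timer_armed) {
        timer_armed = 1;
        timer_deadline = now + SYNC_DELAY;
    }
}

/* On timer expiry, copy the whole image to the AP side in one burst.
 * A real implementation would encrypt the copy, alert the AP, and wait
 * for the AP's completion signal here. */
int fs_sync_if_due(uint32_t now)
{
    if (!timer_armed || now < timer_deadline)
        return 0;                   /* nothing flushed yet */
    memcpy(nv_fs, ram_fs, FS_SIZE); /* models the coalesced write-back */
    timer_armed = 0;
    return 1;                       /* one coalesced flush performed */
}
```

The point of the coalescing is visible in this model: any number of writes inside the timer window produce exactly one flush to the non-volatile side.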
Because read operations are more frequent than write operations, and write operations tend to occur in active "bursts," the overall system load and power consumption can be reduced.

The application processor 204, modem processor 210, and boot loader have specific measures in place to ensure that at least one full file system image is available in the non-volatile memory 206 at all times. This provides immunity to power loss and accidental reset scenarios.

The application of the concepts disclosed herein is not limited to the exemplary systems shown above; they can be used with various other multi-processor systems as well.

Zero copy transport stream

Aspects of the present invention provide techniques for efficiently loading executable software images from a non-volatile memory of a primary processor to a volatile memory of a secondary processor. As mentioned above, the conventional loading process requires an intermediate step in which the binary multi-segment image is buffered (e.g., transferred to system memory) and then later scattered to its target locations (e.g., by a boot loader). Aspects of the present invention provide techniques for eliminating the intermediate buffering step required by conventional loading processes. Thus, aspects of the present invention avoid additional memory copy operations, thereby improving performance (e.g., reducing the time required to boot a secondary processor in a multi-processor system).

As discussed further below, one exemplary aspect of the present invention uses a direct scatter-loading technique for loading executable software images from a non-volatile memory of a primary processor to a volatile memory of a secondary processor.
Certain aspects of the present invention also enable image transfer to proceed concurrently with post-transfer data processing (e.g., verification), which can further improve efficiency, as discussed further below.

In one aspect, the host primary processor does not process or extract any information from the actual image data; it simply sends the image data as "raw" data to the target, with no packet headers attached. Because the target secondary processor initiates each data transfer request, it knows exactly how much data to expect. This enables the host to send data without a packet header and enables the target to receive and store the data directly. In this aspect, the target requests data from the host as needed. The first item the target requests is the image header for a given image transfer. Once the target has processed the image header, it knows the location and size of each piece of data in the image. The image header also specifies the destination address of the image in the target memory. With this information, the target can request the data for each segment from the host and place the data directly at the appropriate location in the target memory. A hardware controller for the inter-chip communication bus on the application processor can add its own low-level protocol header, which is processed and stripped by the modem processor; these low-level headers are transparent to the software running on both processors.

In one aspect of the invention, the loading process is divided into two phases, as illustrated in the exemplary flow shown in FIG. 3. FIG. 3 shows a block diagram of a primary processor 301 (which may be the application processor 104 or 204 of FIG. 1 or 2, with its non-volatile memory 106 or 206) and a secondary processor 302 (which may be the modem processor 110 or 210 of FIG. 1 or 2, with its volatile memory 112 or 212). In FIG.
3, an exemplary software image for the secondary processor 302 is stored in the non-volatile memory of the primary processor 301. As shown in this example, the exemplary software image 303 is a multi-segment image that includes an image header portion and a plurality of data segments (shown as data segments 1 through 5 in this example). The primary processor 301 and secondary processor 302 may be located on different physical silicon chips (i.e., in different chip packages) or may be in the same package.

In the first phase of the exemplary loading process of FIG. 3, image header information is communicated to the secondary processor 302. The primary processor 301 retrieves segments of the image, starting with the image header, from the non-volatile memory 306 of the primary processor. The primary processor 301 parses the image header in order to load the individual image segments from the non-volatile memory 306 of the primary processor into the system memory 307 of the primary processor. The image header contains the information identifying where the modem executable image data is ultimately to be placed in the system memory 305 of the secondary processor. When receiving the actual executable data, the secondary processor 302 uses this header information to program the receive addresses of the scatter-loader/direct memory access controller 304. The data segments are then sent from the system memory 307 to the hardware transport mechanism 308 of the primary processor. The segments are then transmitted from the hardware transport mechanism 308 of the primary processor 301 to the hardware transport mechanism 309 of the secondary processor 302 via an inter-chip communication bus 310 (e.g., an HS-USB link). The first segment transmitted may be the image header, which contains the information used by the secondary processor to place the data segments into their target locations in the system memory 305 of the secondary processor.
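The image header described above can be sketched as a C structure. The exact layout is not specified in this document, so the field names and the fixed-size segment table here are hypothetical, modeled only on what the text says the header carries: for each segment, its position in the image file, its size, and its destination address in the secondary processor's system memory.

```c
#include <stdint.h>

/* Hypothetical image-header layout: one descriptor per data segment. */
struct segment_desc {
    uint32_t file_offset; /* segment start relative to the image in NV memory */
    uint32_t size;        /* segment length in bytes                          */
    uint32_t dest_addr;   /* target address in secondary system memory        */
};

struct image_header {
    uint32_t num_segments;
    struct segment_desc seg[8];
};

/* After phase one, the boot loader can answer, for any segment, where the
 * scatter-loader controller must place the incoming raw data. */
uint32_t segment_dest(const struct image_header *hdr, uint32_t i)
{
    return (i < hdr->num_segments) ? hdr->seg[i].dest_addr : 0;
}

/* Total payload the target knows to request from the host -- the reason
 * the raw data transfers need no per-packet headers. */
uint32_t total_payload(const struct image_header *hdr)
{
    uint32_t i, total = 0;
    for (i = 0; i < hdr->num_segments; i++)
        total += hdr->seg[i].size;
    return total;
}
```

With a table like this, each subsequent segment request from the target is fully determined by the header alone, which is what allows the host to send the segments as unadorned raw data.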
The image header may contain information from which the target location of the data can be determined. In one aspect, the target location is not predetermined but is determined by software executing in the secondary processor as part of the scatter-loading process, using information from the image header. In this aspect, the boot loader of the secondary processor first requests the image header from the primary processor (the primary processor CPU does not process the image header at all). By examining the image header, the secondary processor learns how the data segments are laid out in non-volatile memory (in addition to the RAM address and size of each segment, the header also contains the position of each segment relative to the beginning of the image file in non-volatile memory). Subsequent requests for data segments are driven by the secondary processor.

In another aspect, the primary processor can place the segments by parsing the image header itself and then programming the controller of the secondary processor to deposit subsequent data segments into the volatile memory of the secondary processor at the addresses specified in the image header. This may require additional hardware to allow such external control of the controller of the secondary processor.

The image header typically contains a list of segment start addresses and sizes that define where each of the segments should be loaded in the system memory 305 of the secondary processor. The secondary processor 302 includes a hardware transport mechanism 309 (e.g., a USB controller), and the hardware transport mechanism 309 includes a scatter-loader controller 304. In the second phase of the loading process, the boot loader programs the engine of the inter-chip connection controller to receive the incoming data and distribute it, based on the header information received in the first phase, to the corresponding target memory regions 305 of the secondary processor.

In the case of a USB or HSIC bus, each segment of the image can be transmitted as a single USB transfer over the inter-chip communication bus 310. Knowing the size and destination address of a segment allows the software to program the scatter-loader controller 304 of the secondary processor 302 to transfer the entire segment directly to its target memory location (within system memory 305) with minimal software intervention by the secondary processor 302. This can yield increased performance on the USB/HSIC bus when the segments are fairly large (e.g., over 1 megabyte (MB)).

As shown in FIG. 3, the image segments are not necessarily placed into consecutive locations within the system memory 305 of the secondary processor. Rather, the segments can be spread across different locations in the memory. The exemplary loading process of FIG. 3 enables a copy of the secondary processor's software (i.e., image 303) to be sent directly from the primary processor 301 to the final segment destinations in the secondary processor's system memory 305.

The image header is loaded from the primary processor 301 into the scatter-loader controller 304 of the secondary processor 302. The image header provides the information about where the data segments are to be located in system memory 305. The scatter-loader controller 304 thus transfers the image segments directly to their respective target locations in the system memory 305 of the secondary processor. That is, once the CPU of the secondary processor processes the image header in its memory 305 and programs the scatter-loader controller 304, the scatter-loader controller 304 knows exactly where within the system memory 305 each image segment needs to go, and the hardware scatter-loader controller 304 is then programmed accordingly to transfer the data segments directly into their target destinations. In the example of FIG.
3, the scatter-loader controller 304 receives the image segments and distributes them to different locations in the system memory 305. In one aspect, the executable software image is loaded into the system memory of the secondary processor without the entire executable software image ever being stored in a hardware buffer of the secondary processor.

Therefore, in the above aspect, no additional memory copy operations occur in the secondary processor. Conventional techniques that use a temporary buffer for the entire image and per-packet header handling are bypassed in favor of a more efficient direct loading process. Thus, the exemplary loading process of FIG. 3 does not require the intermediate buffering operations traditionally required to load software images from the primary processor to the secondary processor. Instead of scatter-loading from a temporary buffer that holds the entire image, the exemplary loading process of FIG. 3 allows image segments to be scatter-loaded by hardware directly to their respective target destinations in system memory. Once the image header is processed, the executable image is scatter-loaded directly into the target memory without further CPU intervention.

Conventionally, when an external interface is involved (e.g., as used to transfer image data from a primary processor to a secondary processor), a mechanism is needed for delivering the data so that both processors know what the actual data is and how to read it. Typically, data to be transmitted via an external interface is packetized, with each packet containing a header that describes the data within the packet. For example, in a Transmission Control Protocol/Internet Protocol (TCP/IP) system in which data is transmitted over a network, overhead is incurred in processing the packet headers.

In accordance with certain aspects of the present invention (e.g., as in the example of FIG. 3), raw image data is delivered.
For example, instead of delivering each segment of image data with a packet header, the exemplary loading process of FIG. 3 determines the required information about the data from the header associated with the entire image. Thus, the image header can be transmitted first, all processing for determining how to store the data in system memory 305 can occur before the segments are transmitted (based on the image header), and the segments can then be transmitted as raw data rather than with a per-segment header to be processed for each segment. Thus, in the example of FIG. 3, raw image data is transferred from the primary processor to the secondary processor and then handled by hardware, which strips any USB packet headers and the like. In this exemplary aspect, the actual data segments are not processed by the CPU, thereby improving the efficiency of the loading process.

According to one aspect of the invention, when multiple images must be loaded into the volatile memory of the same secondary processor, the above sequence of FIG. 3 can be repeated as many times as there are images to transmit. In some aspects, within the primary processor 301, the transfer from non-volatile memory to system memory can occur in parallel with the transmission of data from the primary processor to the secondary processor.

In one aspect, after the transfer of each segment is completed, the secondary processor 302 immediately programs the scatter-loader controller 304 to transfer the next segment and begins verification of the segment just transferred. This enables the scatter-loader controller 304 to transfer data while the secondary processor 302 is performing the verification. Here, verification generally involves checking the integrity and authenticity of the received data.
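The overlap of segment transfer with verification can be modeled by the ordering of operations alone. The sketch below is hypothetical (the `record` event log and `load_pipelined` are illustrative names, and sequential C cannot show true concurrency): in hardware, each "transfer" step runs on the scatter-loader controller while the CPU performs the preceding "verify" step, so the interleaved order below is exactly the pipelining the text describes.

```c
/* Event log used to expose the pipelining order. */
struct step { char kind; int seg; }; /* kind: 'T' = transfer, 'V' = verify */
static struct step steps[16];
static int n_steps;

static void record(char kind, int seg)
{
    steps[n_steps].kind = kind;
    steps[n_steps].seg = seg;
    n_steps++;
}

/* Pipelined loop: as soon as segment i's transfer is programmed, the CPU
 * verifies segment i-1, overlapping DMA with post-transfer processing. */
void load_pipelined(int nseg)
{
    for (int i = 0; i < nseg; i++) {
        record('T', i);          /* program transfer of segment i       */
        if (i > 0)
            record('V', i - 1);  /* verify previously transferred seg   */
    }
    record('V', nseg - 1);       /* verify the final segment, then boot */
}
```

For three segments the recorded order is T0, T1, V0, T2, V1, V2: every verification except the last happens while the next transfer is in flight.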
The details of the verification mechanism are outside the scope of this disclosure, and any suitable verification mechanism (including those well known in the art) may be used as required in a given implementation. The parallelism mentioned above may also apply to other post-transfer processing that the secondary processor 302 may need to perform in a given implementation. Once the last segment of the last image has been transmitted and verified, the secondary processor 302 can continue the boot process and execute the transmitted image.

In one aspect, the modem (secondary) processor 110 executes a boot loader from an embedded boot read-only memory (ROM). In this aspect, executing boot code from on-chip ROM eliminates the need for a flash memory device on the modem side; the ROM code is executed from the silicon itself.

FIG. 4 is a flow chart illustrating a scatter-loading method in accordance with an aspect of the present invention. As shown in block 402, the secondary processor receives, from the primary processor via an inter-chip communication bus, an image header for an executable software image for the secondary processor, the executable software image being stored in a memory coupled to the primary processor and including the image header and at least one data segment. As shown in block 404, the secondary processor processes the image header to determine at least one location, within a system memory to which the secondary processor is coupled, at which to store the at least one data segment. As shown in block 406, the secondary processor receives the at least one data segment from the primary processor via the inter-chip communication bus.
As shown in block 408, the secondary processor loads the at least one data segment directly into the determined at least one location within the system memory.

In one aspect, an apparatus includes means for receiving an executable image, means for processing an image header, means for receiving a data segment, and means for loading a data segment. These means may include the primary processor 301, the secondary processor 302, the inter-chip communication bus 310, the memory 305 or 307, the non-volatile memory 306, the controller 304, or the hardware transport mechanism 308 or 309. In another aspect, the aforementioned means may be a module or any apparatus configured to perform the functions recited by the aforementioned means.

In view of the above, the software image of the secondary processor can be loaded from the primary processor via an interconnect (such as HS-USB or another high-speed interconnect) instead of loading the software image directly from a non-volatile memory connected to the secondary processor; the secondary processor need not be directly connected to any non-volatile memory. Thus, aspects of the present invention can reduce the time it takes to boot a secondary processor in a multi-processor system, during which the secondary processor images are transmitted from the primary processor. This reduction is achieved by avoiding additional memory copy operations and by performing image transfer concurrently with background data processing (e.g., verification).

FIG. 5 is a block diagram showing an exemplary wireless communication system 500 in which embodiments of the present invention may be advantageously employed. For purposes of illustration, FIG. 5 shows three remote units 520, 530, and 550, and two base stations 540. It will be appreciated that a wireless communication system can have many more remote units and base stations. Remote units 520, 530, and 550 include IC devices 525A, 525C, and 525B, which include the disclosed MRAM.
It will be appreciated that other devices, such as base stations, switching devices, and network devices, may also include the disclosed MRAM. FIG. 5 shows forward link signals 580 from base stations 540 to remote units 520, 530, and 550, and reverse link signals 590 from remote units 520, 530, and 550 to base stations 540.

In FIG. 5, remote unit 520 is shown as a mobile telephone, remote unit 530 is shown as a portable computer, and remote unit 550 is shown as a fixed-location remote unit in a wireless local loop system. For example, a remote unit can be a mobile phone, a handheld personal communication system (PCS) unit, a portable data unit (e.g., a personal data assistant), a GPS-enabled device, a navigation device, a set-top box, a music player, a video player, an entertainment unit, a fixed-location data unit (e.g., a meter-reading device), or any other device that stores or retrieves data or computer instructions, or any combination thereof. Although FIG. 5 illustrates remote units in accordance with the teachings of the present invention, the invention is not limited to these exemplary illustrated units. Embodiments of the invention may be suitably employed in any device that includes an MRAM.

For firmware and/or software implementations, the methods may be implemented with modules (e.g., procedures, functions, and the like) that perform the functions described herein. Any machine-readable medium tangibly embodying instructions can be used to implement the methods described herein. For example, software code can be stored in a memory and executed by a processor unit. The memory can be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to any type of long-term, short-term, volatile, non-volatile, or other memory, and is not limited to any particular type or number of memories, or type of media upon which memory is stored.

If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include a computer-readable medium encoded with a data structure and a computer-readable medium encoded with a computer program. Computer-readable media include physical computer storage media. A storage medium can be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

In addition to being stored on a computer-readable medium, instructions and/or data may be provided as signals on a transmission medium included in a communication device. For example, a communication device can include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions recited in the claims.

Although specific circuitry has been set forth, those skilled in the art will appreciate that not all of the disclosed circuitry is required to practice the invention.
Moreover, some well-known circuits have not been described, in order to maintain focus on the present invention.

Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the scope of the invention. For example, relational terms such as "above" and "below" are used with respect to a substrate or an electronic device. Of course, if the substrate or electronic device is inverted, above becomes below, and vice versa. Additionally, if oriented sideways, above and below may refer to the sides of a substrate or electronic device. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods, and steps described in the specification. As one skilled in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
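Putting the two phases together, the direct scatter-loading described above can be sketched as follows. This is a software model only, with hypothetical names (`scatter_load_segment`, `scatter_load_image`, the fixed-size `sysmem`): the `memcpy` stands in for the hardware DMA transfer that the scatter-loader controller 304 performs directly from the inter-chip bus, with no intermediate full-image buffer.

```c
#include <stdint.h>
#include <string.h>

#define SYSMEM_SIZE 256

/* Stand-in for the secondary processor's system memory 305. */
static uint8_t sysmem[SYSMEM_SIZE];

struct seg { uint32_t dest; uint32_t size; };

/* Model of programming the scatter-loader controller for one segment:
 * a real controller would DMA the raw bytes arriving on the inter-chip
 * bus directly to 'dest'; memcpy stands in for that hardware transfer. */
void scatter_load_segment(const struct seg *s, const uint8_t *raw)
{
    memcpy(&sysmem[s->dest], raw, s->size);
}

/* Phase two: one transfer per segment, each landing at its own, possibly
 * non-contiguous, target location -- no intermediate full-image buffer. */
void scatter_load_image(const struct seg *segs, uint32_t n, const uint8_t *image)
{
    uint32_t i, off = 0;
    for (i = 0; i < n; i++) {
        scatter_load_segment(&segs[i], image + off);
        off += segs[i].size;
    }
}
```

Note that the destination addresses need not be consecutive or even in ascending order, which mirrors FIG. 3's segments being spread across different locations in memory.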
The invention discloses a configurable reduced memory boot. Systems, apparatuses, and methods may provide technology that enables a first set of ranks in a memory module based on a battery state and a user interface during a boot sequence, and disables a second set of ranks in the memory module based on the battery state and the user interface during the boot sequence. The technology may also generate a mapping between the system address space and a first set of banks in the first set of ranks, and exclude a second set of banks in the first set of ranks from the mapping.
CLAIMS

1. A computing system, comprising:
a network controller;
a processor coupled to the network controller; and
a memory module coupled to the processor, the memory module including a set of instructions, which when executed by the processor, cause the processor to:
enable a first set of ranks in the memory module based on a battery state and a user interface during a boot sequence;
disable a second set of ranks in the memory module based on the battery state and the user interface during the boot sequence;
generate a mapping between a system address space and a first set of banks in the first set of ranks; and
exclude a second set of banks in the first set of ranks from the mapping.

2. The computing system of claim 1, further comprising a memory controller, wherein the instructions, when executed by the memory controller, cause the memory controller to:
monitor a write activity with respect to the first set of banks; and
disable a refresh in one or more banks in the first set of banks based on the write activity.

3. The computing system of claim 1, wherein the user interface includes a configuration object structure that defines one or more of a memory attribute, a memory configuration, a performance configuration, a user interface element configuration, a storage configuration, or a hot-plug configuration.

4. The computing system of claim 1, wherein the battery state is to indicate a residual battery state of charge that is below a normal threshold and above a reduced memory enablement threshold, and wherein the mapping is to be associated with a low battery mapping scheme.

5. The computing system of any one of claims 1 to 4, wherein the instructions, when executed by the processor, further cause the processor to:
collect telemetry data during a configurable minimum memory boot mode, wherein the telemetry data is to be associated with the first set of ranks and the second set of ranks;
detect a change in the battery state;
enable the second set of ranks in response to the change; and
incorporate the second set of banks into the mapping in response to the change.

6. The computing system of claim 5, wherein the change is to indicate that the residual battery state of charge is above the normal threshold.

7. A semiconductor apparatus comprising:
one or more substrates; and
logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to:
enable a first set of ranks in a memory module based on a battery state and a user interface during a boot sequence;
disable a second set of ranks in the memory module based on the battery state and the user interface during the boot sequence;
generate a mapping between a system address space and a first set of banks in the first set of ranks; and
exclude a second set of banks in the first set of ranks from the mapping.

8. The apparatus of claim 7, wherein the logic coupled to the one or more substrates is to:
monitor a write activity with respect to the first set of banks; and
disable a refresh in one or more banks in the first set of banks based on the write activity.

9. The apparatus of claim 7, wherein the user interface includes a configuration object structure that defines one or more of a memory attribute, a memory configuration, a performance configuration, a user interface element configuration, a storage configuration, or a hot-plug configuration.

10. The apparatus of claim 7, wherein the battery state is to indicate a residual battery state of charge that is below a normal threshold and above a reduced memory enablement threshold, and wherein the mapping is to be associated with a low battery mapping scheme.

11. The apparatus of any one of claims 7 to 10, wherein the logic coupled to the one or more substrates is to:
collect telemetry data during a configurable minimum memory boot mode, wherein the telemetry data is to be associated with the first set of ranks and the second set of ranks;
detect a change in the battery state;
enable the second set of ranks in response to the change; and
incorporate the second set of banks into the mapping in response to the change.

12. The apparatus of claim 11, wherein the change is to indicate that the residual battery state of charge is above the normal threshold.

13. The apparatus of any one of claims 7 to 10, wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.

14. A method comprising:
enabling a first set of ranks in a memory module based on a battery state and a user interface during a boot sequence;
disabling a second set of ranks in the memory module based on the battery state and the user interface during the boot sequence;
generating a mapping between a system address space and a first set of banks in the first set of ranks; and
excluding a second set of banks in the first set of ranks from the mapping.

15. The method of claim 14, further comprising:
monitoring a write activity with respect to the first set of banks; and
disabling a refresh in one or more banks in the first set of banks based on the write activity.

16. The method of claim 14, wherein the user interface includes a configuration object structure that defines one or more of a memory attribute, a memory configuration, a performance configuration, a user interface element configuration, a storage configuration, or a hot-plug configuration.

17. The method of claim 14, wherein the battery state indicates a residual battery state of charge that is below a normal threshold and above a reduced memory enablement threshold, and wherein the mapping is associated with a low battery mapping scheme.

18. The method of any one of claims 14 to 17, further comprising:
collecting telemetry data during a configurable minimum memory boot mode, wherein the telemetry data is associated with the first set of ranks and the second set of ranks;
detecting a change in the battery state;
enabling the second set of ranks in response to the change; and
incorporating the second set of banks into the mapping in response to the change.

19. The method of claim 18, wherein the change indicates that the residual battery state of charge is above the normal threshold.

20. A semiconductor apparatus comprising:
means for enabling a first set of ranks in a memory module based on a battery state and a user interface during a boot sequence;
means for disabling a second set of ranks in the memory module based on the battery state and the user interface during the boot sequence;
means for generating a mapping between a system address space and a first set of banks in the first set of ranks; and
means for excluding a second set of banks in the first set of ranks from the mapping.

21. The apparatus of claim 20, further comprising:
means for monitoring a write activity with respect to the first set of banks; and
means for disabling a refresh in one or more banks in the first set of banks based on the write activity.

22. The apparatus of claim 20, wherein the user interface includes a configuration object structure that defines one or more of a memory attribute, a memory configuration, a performance configuration, a user interface element configuration, a storage configuration, or a hot-plug configuration.

23. The apparatus of claim 20, wherein the battery state is to indicate a residual battery state of charge that is below a normal threshold and above a reduced memory enablement threshold, and wherein the mapping is to be associated with a low battery mapping scheme.

24. The apparatus of any one of claims 20 to 23, further comprising:
means for collecting telemetry data during a configurable minimum memory boot mode, wherein the telemetry data is to be associated with the first set of ranks and the second set of ranks;
means for detecting a change in the battery state;
means for enabling the second set of ranks in response to the change; and
means for incorporating the second set of banks into the mapping in response to the change.

25. The apparatus of claim 24, wherein the change is to indicate that the residual battery state of charge is above the normal threshold.
CONFIGURABLE REDUCED MEMORY STARTUP

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of priority to Indian Provisional Patent Application No. 2020041033777, filed on August 6, 2020.

TECHNICAL FIELD
Embodiments generally relate to computer memory. More particularly, embodiments relate to a configurable reduced memory boot for efficient quality of service (QoS) on computing platforms.

BACKGROUND
As end-user demand for more memory continues to grow, modern computing device manufacturers are designing computing devices such as laptops, desktops, server systems, and phones with large amounts of memory. In addition to an increased platform bill of materials (BOM) cost, other significant challenges include: increased TCO (total cost of ownership, e.g., how to keep power consumption manageable when operating with large platform memory regardless of usage/demand), energy efficiency certification challenges (e.g., compliance), increased defects from sizable memory volumes, slowed boot times due to bottlenecks associated with memory training, increased residual battery requirements to power up all populated memory ranks, and so forth. These challenges can be problematic both in client devices (e.g., with limited battery and form factor) and in servers (TCO, energy efficiency compliance, etc.).

BRIEF DESCRIPTION OF THE DRAWINGS
The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
FIG. 1 is a block diagram of an example of a state machine according to an embodiment;
FIG. 2 is a flowchart of an example of a bootstrap flow according to an embodiment;
FIG. 3 is a flowchart of an example of an operational flow according to an embodiment;
FIG. 4 is a flowchart of an example of an operational flow for disabling flushing of unused rows according to an embodiment;
FIG. 5 is an illustration of an example of an operational flow for transitioning from configurable minimum memory startup (CMMS) to normal mode according to an embodiment;
FIG. 6 is a flowchart of an example of a method of operating a basic input output system (BIOS) in a performance-enhanced computing system according to an embodiment;
FIG. 7 is a flowchart of an example of a method of operating a memory controller in a performance-enhanced computing system according to an embodiment;
FIG. 8 is a flowchart of an example of a method of transitioning to normal mode according to an embodiment;
FIG. 9 is a block diagram of an example of a performance-enhanced computing system according to an embodiment;
FIG. 10 is an illustration of an example of a semiconductor apparatus according to an embodiment;
FIG. 11 is a block diagram of an example of a processor according to an embodiment; and
FIG. 12 is a block diagram of an example of a multiprocessor-based computing system according to an embodiment.

DETAILED DESCRIPTION
Existing solutions may keep the entire system memory in a fully functional mode during system boot or active operation, regardless of actual memory usage. Self-refresh mode is the only power saving mode that is widely used, and only when the system transitions to a low power state. Existing solutions therefore lack efficient minimal memory management, resulting in increased TCO, energy efficiency certification issues, and increased DPM (defects per million).
Correspondingly, quality may be degraded, booting may be slower, and the limited residual battery on a mobile device may be used inefficiently.

Embodiments provide a configurable minimum memory startup (CMMS) for efficient QoS (quality of service) on a computing platform, which addresses the above challenges across client devices, IoT (Internet of Things) components, edge devices, and the cloud in the case of large memory configurations. The result is a significant platform improvement and better TCO for the client/partner.

Embodiments address the question of whether memory is maximally used by all end users in all scenarios. In some scenarios, memory is fully used by only a few users (e.g., memory may not be fully utilized most of the time).

CMMS technology involves:
- System OS (operating system) settings or system management settings that provide a user interface for configuring/customizing the CMMS mode. The system administrator/user may provide memory block configuration options to be enabled during or after boot for efficient platform boot with a limited remaining battery threshold.
- A power delivery system (PMIC/power management IC, P-unit) with the ability to sense the residual battery state of charge and the current charging rate in order to determine the minimum platform memory configuration blocks to enable, so that the device can power up in CMMS mode as quickly as possible without damaging the device.
- Early UEFI (Unified Extensible Firmware Interface) PI (Platform Initialization) drivers that monitor the mobile device battery on reboot:
  if the battery is healthy, the technology boots all of the hardware and invokes the main mobile OS;
  else, if the battery is below normal, the technology reads EFI_MIN_MEMORY_STARTUP_POLICY and only activates elements with the corresponding configuration bits asserted.
- UEFI is aware of the CMMS mode and discloses to the driver the appropriate memory resource availability in the platform.
Accordingly, the entire system stack from FW (firmware) to UI (user interface) is dynamically customized to be in CMMS mode.
- Seamless dynamic transition from CMMS mode to the main OS operating mode is supported, where full features and devices can be supported without rebooting once sufficient battery and thermal thresholds are met.

The following is an example configuration when running in the early boot environment:
Shut down if battery < critical and the battery is not charging;
Boot into charging mode if battery < critical and the battery is charging;
Boot into MPS (minimum power start) mode if battery > CMMS_required;
Boot into normal mode if battery > normal_boot_required.

CMMS technology involves the system identifying critical memory blocks to support fast boot based on UEFI BIOS usage pattern heuristics for various CMMS profiles.

For use cases where power is more critical and additional memory is not, CMMS technology provides maximum power savings and extended battery life. CMMS technology also enables extended battery life in low battery scenarios. Additionally, energy efficiency certification can be obtained using CMMS technology. Other benefits include: overall improved TCO savings, faster boot, optimal boot based on system needs, scalable memory configuration (e.g., dynamic switching from CMMS to normal mode), memory hot-plugging, and so forth.

FIG. 1 shows the system architecture and a state machine 21 for the proposed CMMS mode. In the illustrated example, a power-on event transitions the system from a shutdown state 20 to a first state 22 in which the UEFI determines the remaining battery level. If the residual battery level is below a critical threshold, the system transitions from the first state 22 back to the shutdown state 20.
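The example configuration above can be sketched in C. The enum names, function name, and specific percentage thresholds below are illustrative assumptions (the source leaves the exact critical/CMMS/normal values to the platform policy), and the behavior between the critical and CMMS thresholds is likewise an assumption:

```c
#include <assert.h>

/* Boot modes mirroring the example configuration above. */
typedef enum {
    BOOT_SHUTDOWN,  /* battery < critical, not charging              */
    BOOT_CHARGE_OS, /* battery < critical (or < CMMS) while charging */
    BOOT_CMMS,      /* battery > CMMS_required: minimum power start  */
    BOOT_NORMAL     /* battery > normal_boot_required                */
} boot_mode_t;

/* Assumed percentage thresholds; a real platform would read these
 * from the policy object rather than hard-coding them. */
enum {
    BATTERY_CRITICAL_PCT = 5,
    CMMS_REQUIRED_PCT    = 10,
    NORMAL_BOOT_PCT      = 30
};

boot_mode_t select_boot_mode(int battery_pct, int is_charging)
{
    if (battery_pct < BATTERY_CRITICAL_PCT)
        return is_charging ? BOOT_CHARGE_OS : BOOT_SHUTDOWN;
    if (battery_pct >= NORMAL_BOOT_PCT)
        return BOOT_NORMAL;
    if (battery_pct >= CMMS_REQUIRED_PCT)
        return BOOT_CMMS;
    /* Between critical and CMMS_required: assumed to fall back to
     * the charging OS, matching the insufficient-battery UEFI path. */
    return BOOT_CHARGE_OS;
}
```

On a real platform this decision would run in the early UEFI PI environment, before memory training.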
If the residual battery level is above the critical threshold, but below the normal boot threshold and above a minimum power enable threshold, the system transitions from the first state 22 to a second state 24 corresponding to the CMMS configuration (e.g., the PMIC/P-unit automatically turns off power to the unused power rails). When the battery level exceeds both the critical threshold and the normal boot threshold, the system transitions from the second state 24 to a third state 26 corresponding to the normal boot configuration. The system may also transition from the first state 22 to the third state 26 in response to the battery level exceeding both the critical threshold and the normal boot threshold. In an embodiment, a drop of the battery level below the critical threshold causes the system to transition from the second state 24 or the third state 26 to the shutdown state 20.

FIG. 2 shows a high-level bootstrap operational flow 30 (e.g., where the middle boxes represent new and advantageous functionality). In the illustrated example, a power-on event occurs at block 32, where the BIOS takes control and turns on minimum power mode at block 34. In an embodiment, block 36 limits memory mapped BIOS usage to a few segments. Additionally, block 38 may keep unused memory segments in PASR (partial array self-refresh) mode, disable refresh, or completely power down the unused memory segments. Block 40 may continue with the remainder of the pre-boot phase, where block 42 loads the OS.

FIG. 3 shows a process 50 from the user interface to UEFI to configure a platform PMIC for the CMMS mode. In the illustrated example, thermal and power startup configuration settings are established at block 52. The illustrated block 54 performs thermal and power management, which may involve exchanging thermistor values with a PMIC block 56, which is coupled to a battery and charging unit 58. Block 54 sends a thermal "credit" (e.g., burst disable) message to a P-unit block 60.
In an embodiment, block 60 powers up one or more IP (intellectual property) blocks (e.g., functional domains) based on the boot mode, and sets the IP block(s) to operate at a specified frequency. The P-unit block 60 may send a power status message to a power management block 62.

In one example, the PMIC block 56 exchanges boot mode information with UEFI 64 (64a-64g). UEFI block 64a determines the remaining battery level, where a determination may be made at UEFI block 64b as to whether the battery level is sufficient for a normal mode of operation. If so, UEFI block 64c sets the boot mode to normal full power mode, and UEFI block 64d exposes the appropriate IP block configuration (e.g., based on the selected boot mode) to the OS and/or drivers. If UEFI block 64b determines that the battery level is insufficient for the normal operating mode, UEFI block 64e determines whether the battery level is sufficient for a CMMS startup. If the battery level is sufficient for the CMMS boot, UEFI block 64f may set the boot mode to CMMS and the flow proceeds to UEFI block 64d. If UEFI block 64e determines that the battery level is insufficient for a CMMS startup, UEFI block 64g may boot to the charging OS.

FIG. 5 shows an example bootstrap flow 70 in CMMS mode relative to normal mode, and illustrates the transition from CMMS to normal mode. In the illustrated example, pre-EFI initialization (PEI) 72 uses a residual battery and minimum power strategy. Additionally, a transient system load (TSL) and runtime (RT) sequence 74 runs the final OS boot loader, the final OS environment, and the OS current applications.

Example CMMS modes involve the following power saving configurations:

Disable flushing of unused rows/blocks
Memory reference code (MRC, e.g., memory initialization code) may disable refresh of unused ranks (e.g., train the unused ranks, but keep refresh disabled). Because the unused ranks are not used for booting, flushing is only enabled when switching to the OS.
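A minimal sketch of the MRC behavior just described: train every populated rank, but leave refresh disabled on the ranks not used for booting until the OS handoff. The bitmap state model and function names are illustrative assumptions, not an actual MRC interface:

```c
#include <assert.h>
#include <stdint.h>

/* Per-channel MRC state; bit i corresponds to rank i. */
typedef struct {
    uint8_t trained_ranks;  /* ranks that completed training   */
    uint8_t refresh_ranks;  /* ranks with auto-refresh enabled */
} mrc_state_t;

/* Boot-time init: train all populated ranks, but enable refresh
 * only for the ranks actually used during boot. */
void mrc_init(mrc_state_t *s, uint8_t populated_ranks, uint8_t boot_ranks)
{
    s->trained_ranks = populated_ranks;
    s->refresh_ranks = populated_ranks & boot_ranks;
}

/* Handoff to the OS: enable refresh on the remaining trained ranks
 * before they can be allocated. */
void mrc_os_handoff(mrc_state_t *s)
{
    s->refresh_ranks = s->trained_ranks;
}
```

Because training is already complete, enabling the remaining ranks at handoff requires no reboot, consistent with the seamless CMMS-to-normal transition described above.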
Such functionality can also be implemented with changes to the memory controller (MC). For example, the MC monitors the traffic going to each block/rank. According to JEDEC (Joint Electron Device Engineering Council), a block ("bank") is a block of memory within a DRAM (dynamic random access memory) chip, while a rank is a block of memory on a module (for example, modules formerly referred to as double-sided or dual-bank modules may now be referred to as dual-rank modules). If a block/rank has not yet encountered a single write command, the MC can intelligently save power by not issuing refresh to such a block/rank, because such a block/rank holds no valid content.

Changes to the scheduler logic in the memory controller (MC) can keep track of writes to blocks and/or ranks and enable flushing and self-refresh accordingly. In existing solutions, SW (software) control for enabling/disabling refresh may be available at the rank level. With the proposed changes, the MC can decide to control refresh at block granularity, providing greater power savings (e.g., system implementations can employ two different mapping schemes - one for low battery/high power saving scenarios, and the other for general boot/performance scenarios).

A power saving memory mapping scheme may choose to map contiguous blocks of DRAM space from only a few blocks in the rank to the system address space. In this case, the memory controller schedules "writes" (write operations) only to those blocks mapped to system space. All other blocks will be free and no writes will occur relative to these blocks. The modified memory controller scheduler logic disables flushing of blocks that have not seen any writes. Thus, when a block is actively used, more power is saved compared to existing solutions - the additional power saving comes from the subset of blocks in the rank not being flushed.

Power control electronic switches can be added to the power supply path to the DIMMs or to the various ranks in the platform.
Such a switch enables SW (e.g., the BIOS) to completely disable power to unused ranks within a DIMM (dual inline memory module), or to the DIMM as a whole (if supported by the platform implementation).

Power down the rank or DIMM completely
For a memory-off configuration, DRAM devices (and thus ranks) can be powered down (e.g., using platform controls). For DIMMs, the DIMM specification can be changed to provide independent power control for each rank. Platform changes can be made to independently control the power to each DIMM for DIMM-level power down. A disadvantage of this approach may be that a JEDEC initialization sequence may be required to initialize the DRAM at power-up (e.g., some minimal training may be done based on the DIMM type). Such methods may therefore involve more latency. One potential option to mitigate the high latency is to cache the initialization vectors and reuse the cached vectors across configuration modes/profiles.

In order to share the policy information with the platform, the following policy objects can be defined:

  #define EFI_MIN_MEMORY_STARTUP_POLICY_GUID \
    { 0xbd8f7aa5, 0xa7f5, 0x46b5, \
      { 0x80, 0x7f, 0xb6, 0x58, 0x6b, 0x0d, 0x2f, 0xaa } }

Configuration object structure:

  typedef struct {
    MEMORY_PROPERTIES props;
    MEMORY_CONFIGURATION configs;
    PERFORMANCE_CONFIGURATION perf;
    UI_ELEMENTS_CONFIGURATION ui;
    STORAGE_CONFIGURATION storage;
    HOTSWAP_CONFIGURATION hotswap;
  } EFI_MIN_MEMORY_STARTUP_POLICY;

In an embodiment, the full OS exposes the above configuration (FIG. 1) through a friendly UI that ultimately executes a UEFI SetVariable() call, so that when doing a degraded reboot, the CMMS driver uses this policy to decide what hardware to activate and how to parameterize the user interface.

More specifically, FIG.
4 illustrates a method 80 in which power-up and BIOS start-up occur at block 82. In an embodiment, block 84 checks the battery status, where a determination may be made at block 86 as to whether the battery status indicates a critical level. If the battery status does not indicate a critical level, the illustrated block 88 proceeds to a normal boot with the full performance configuration. Otherwise, block 90 may determine whether there are multiple ranks or more than one DIMM. If so, block 92 selects the DIMM(s)/rank(s) to enable based on the configured policy (e.g., only one rank per DIMM). Block 92 may also disable power to the other ranks/DIMMs. The illustrated block 94 uses the BIOS to perform DDR training and/or memory initialization. If it is determined at block 90 that neither multiple ranks nor more than one DIMM are present, the method 80 may bypass block 92 and proceed directly to block 94.

Block 96 may provide for contiguously mapping the memory into as few blocks as possible. In an embodiment, block 98 enables normal memory operations and refresh in the memory controller. Once the BIOS is complete, block 100 hands control over to the OS and battery threshold software. Additionally, at block 102, the memory controller may monitor writes to the blocks, where a determination is made at block 104 as to whether a write has occurred relative to a monitored block. If so, at block 106 the memory controller enables flushing of the blocks that have encountered data writes. The illustrated method 80 then returns to block 102. If it is determined at block 104 that no write has occurred with respect to the monitored block, the method 80 may bypass block 106 and proceed directly to block 102. Blocks 92, 96, 102, 104 and 106 (which do not exist in conventional systems) provide significant performance advantages.

FIG. 6 illustrates a method 110 of operating a BIOS in a performance-enhanced computing system.
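The write monitoring of blocks 102-106, and the refresh gating it enables, can be sketched as follows; the 64-block bitmap and the function names are illustrative assumptions rather than the actual controller interface:

```c
#include <assert.h>
#include <stdint.h>

/* Tracks which blocks have seen at least one write command. */
typedef struct {
    uint64_t written; /* bit b set => block b holds valid data */
} mc_refresh_state_t;

void mc_reset(mc_refresh_state_t *mc)
{
    mc->written = 0;
}

/* Called by the scheduler for every write command (blocks 102/104). */
void mc_note_write(mc_refresh_state_t *mc, unsigned block)
{
    mc->written |= (uint64_t)1 << block;
}

/* The refresh engine only refreshes blocks that have encountered a
 * data write (block 106); never-written blocks hold no valid content
 * and may safely skip refresh. */
int mc_refresh_enabled(const mc_refresh_state_t *mc, unsigned block)
{
    return (int)((mc->written >> block) & 1u);
}
```

The design choice here is that tracking only the first write per block is sufficient: once a block has valid content it must be refreshed forever (until reset), so a sticky bit per block is all the scheduler needs.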
The method 110 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), or complex programmable logic devices (CPLDs), in fixed-functionality logic hardware using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology, or in any combination thereof.

For example, computer program code to carry out operations shown in the method 110 may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like, and a conventional procedural programming language such as the "C" programming language or similar programming languages. Additionally, the logic instructions might include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, state-setting data, configuration data for integrated circuitry, and/or state information that personalizes electronic circuitry and/or other structural components that are native to hardware (e.g., host processor, central processing unit/CPU, microcontroller, etc.).

The illustrated processing block 112 provides for enabling a first rank set in a memory module based on a battery state and a user interface during a boot sequence. In one example, block 114 disables a second rank set in the memory module during the boot sequence based on the battery state and the user interface. In an embodiment, the battery state indicates that a residual battery state of charge is below a normal threshold and above a reduced (e.g., minimum) memory enable threshold.
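The power saving memory mapping scheme described earlier - the system address space laid contiguously over a small first set of blocks, with the remaining blocks excluded - might look like the following sketch; the 64 MiB block size and the structure and function names are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>

#define CMMS_BLOCK_SIZE (64ull * 1024 * 1024) /* assumed 64 MiB blocks */

/* Reduced mapping over the enabled (first) rank set. */
typedef struct {
    unsigned mapped_blocks; /* first set of blocks, in the mapping  */
    unsigned total_blocks;  /* blocks populated in the enabled rank */
} cmms_map_t;

/* Translate a system address to a block index; addresses beyond the
 * reduced mapping (the excluded second set of blocks) return -1, so
 * the scheduler never issues writes to them. */
int cmms_map_block(const cmms_map_t *m, uint64_t sys_addr)
{
    uint64_t block = sys_addr / CMMS_BLOCK_SIZE;
    if (block >= m->mapped_blocks)
        return -1;
    return (int)block;
}
```

Keeping the mapping contiguous concentrates all traffic in the first few blocks, so the excluded blocks never see a write and their refresh can stay disabled.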
In one example, the user interface includes a configuration object structure that defines one or more of the following: memory attributes, memory configurations, performance configurations, user interface element configurations, storage configurations, or hot-plug configurations. The illustrated processing block 116 provides for generating a mapping between a system address space and a first set of blocks in the first rank set, and block 118 excludes a second set of blocks in the first rank set from the mapping. In an embodiment, the mapping is associated with a low battery mapping scheme.

FIG. 7 illustrates a method 120 of operating a memory controller in a performance-enhanced computing system. The method 120 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, or CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS, or TTL technology, or in any combination thereof. The illustrated block 122 provides for monitoring write activity relative to a first set of blocks, where block 124 disables flushing in one or more blocks in the first set of blocks based on the write activity.

FIG. 8 shows a method 130 of transitioning to normal mode. The method 130 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, or CPLDs, in fixed-functionality logic hardware using circuit technology such as, for example, ASIC, CMOS, or TTL technology, or in any combination thereof.

The illustrated processing block 132 provides for detecting a change in the battery state.
Block 132 may also provide for collecting telemetry (e.g., usage) data during the CMMS mode, where the telemetry data is associated with the first rank set and the second rank set. Such an approach can further enhance scalability by supporting future enhanced deployments. In an embodiment, block 134 enables the second rank set in response to the change, where block 136 incorporates the second set of blocks into the mapping in response to the change. In an embodiment, the change indicates that the residual battery state of charge is above a normal threshold.

FIG. 9 shows a computing system 150 including executable program instructions 170 that, when executed by one or more of a host processor 152, a graphics processor 160, or an input/output module (IO) 158, cause the computing system 150 to perform one or more aspects of the method 110 (FIG. 6), the method 120 (FIG. 7), and/or the method 130 (FIG. 8), already discussed. In an embodiment, the instructions 170 are retrieved from memory modules 156 (e.g., DIMMs) and/or mass storage 168. Additionally, the graphics processor 160, the host processor 152, and/or the IO module 158 are incorporated into a system on chip (SoC) 162, which is also coupled to a display 164 and/or a network controller 166 (wireless, wired). The illustrated system 150 also includes a battery 157.

FIG. 10 shows a semiconductor apparatus 172. The illustrated apparatus 172 includes one or more substrates 174 (e.g., silicon, sapphire, gallium arsenide) and logic 176 (e.g., transistor array and other integrated circuit/IC components) coupled to the substrate(s) 174. The logic 176 may be implemented at least partly in configurable logic or fixed-functionality logic hardware. In one example, the logic 176 implements one or more aspects of the method 110 (FIG. 6), the method 120 (FIG. 7), and/or the method 130 (FIG. 8), already discussed.

In one example, the logic 176 includes transistor channel regions that are positioned (e.g., embedded) within the substrate(s) 174.
Thus, the interface between the logic 176 and the substrate(s) 174 may not be an abrupt junction. The logic 176 may also be considered to include an epitaxial layer that is grown on an initial wafer of the substrate(s) 174.

FIG. 11 illustrates a processor core 200 according to one embodiment. The processor core 200 may be the core for any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, or other device to execute code. Although only one processor core 200 is illustrated in FIG. 11, a processing element may alternatively include more than one of the processor core 200 illustrated in FIG. 11. The processor core 200 may be a single-threaded core or, for at least one embodiment, the processor core 200 may be multithreaded in that it may include more than one hardware thread context (or "logical processor") per core.

FIG. 11 also illustrates a memory 270 coupled to the processor core 200. The memory 270 may be any of a wide variety of memories (including various layers of a memory hierarchy) as are known or otherwise available to those of skill in the art. The memory 270 may include one or more code 213 instructions to be executed by the processor core 200, where the code 213 may implement one or more aspects of the method 110 (FIG. 6), the method 120 (FIG. 7), and/or the method 130 (FIG. 8), already discussed. The processor core 200 follows a program sequence of instructions indicated by the code 213. Each instruction may enter a front end portion 210 and be processed by one or more decoders 220. The decoder 220 may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals which reflect the original code instruction.
The illustrated front end portion 210 also includes register renaming logic 225 and scheduling logic 230, which generally allocate resources and queue the operations corresponding to the converted instructions for execution.

The processor core 200 is shown including execution logic 250 having a set of execution units 255-1 through 255-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic 250 performs the operations specified by code instructions.

After completion of execution of the operations specified by the code instructions, back end logic 260 retires the instructions of the code 213. In one embodiment, the processor core 200 allows out of order execution but requires in order retirement of instructions. Retirement logic 265 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core 200 is transformed during execution of the code 213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 225, and any registers (not shown) modified by the execution logic 250.

Although not illustrated in FIG. 11, a processing element may include other elements on chip with the processor core 200. For example, a processing element may include memory control logic along with the processor core 200. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches.

Referring now to FIG. 12, shown is a block diagram of a computing system 1000 in accordance with an embodiment. Shown in FIG. 12 is a multiprocessor system 1000 that includes a first processing element 1070 and a second processing element 1080.
While two processing elements 1070 and 1080 are shown, it is to be understood that an embodiment of the system 1000 may also include only one such processing element.

The system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050. It should be understood that any or all of the interconnects illustrated in FIG. 12 may be implemented as a multi-drop bus rather than point-to-point interconnects.

As shown in FIG. 12, each of the processing elements 1070 and 1080 may be a multicore processor, including first and second processor cores (i.e., processor cores 1074a and 1074b and processor cores 1084a and 1084b). Such cores 1074a, 1074b, 1084a, 1084b may be configured to execute instruction code in a manner similar to that discussed above in connection with FIG. 11.

Each processing element 1070, 1080 may include at least one shared cache 1896a, 1896b. The shared cache 1896a, 1896b may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores 1074a, 1074b and 1084a, 1084b, respectively. For example, the shared cache 1896a, 1896b may locally cache data stored in a memory 1032, 1034 for faster access by components of the processor. In one or more embodiments, the shared cache 1896a, 1896b may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.

While shown with only two processing elements 1070, 1080, it is to be understood that the scope of the embodiments is not so limited. In other embodiments, one or more additional processing elements may be present in a given processor. Alternatively, one or more of the processing elements 1070, 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array.
For example, additional processing element(s) may include additional processor(s) that are the same as a first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element. There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit including architectural, micro architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080. For at least one embodiment, the various processing elements 1070, 1080 may reside in the same die package.

The first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, the second processing element 1080 may include a MC 1082 and P-P interfaces 1086 and 1088. As shown in FIG. 12, the MCs 1072 and 1082 couple the processors to respective memories, namely a memory 1032 and a memory 1034, which may be portions of main memory locally attached to the respective processors. While the MC 1072 and 1082 are illustrated as integrated into the processing elements 1070, 1080, for alternative embodiments the MC logic may be discrete logic outside the processing elements 1070, 1080 rather than integrated therein.

The first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076, 1086, respectively. As shown in FIG. 12, the I/O subsystem 1090 includes P-P interfaces 1094 and 1098. Furthermore, the I/O subsystem 1090 includes an interface 1092 to couple the I/O subsystem 1090 with a high performance graphics engine 1038.
In one embodiment, a bus 1049 may be used to couple the graphics engine 1038 to the I/O subsystem 1090. Alternately, a point-to-point interconnect may couple these components.

In turn, the I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096. In one embodiment, the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited.

As shown in FIG. 12, various I/O devices 1014 (e.g., biometric scanners, speakers, cameras, sensors) may be coupled to the first bus 1016, along with a bus bridge 1018 which may couple the first bus 1016 to a second bus 1020. In one embodiment, the second bus 1020 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1020 including, for example, a keyboard/mouse 1012, communication device(s) 1026, and a data storage unit 1019 such as a disk drive or other mass storage device which may include code 1030, in one embodiment. The illustrated code 1030 may implement one or more aspects of the method 110 (FIG. 6), the method 120 (FIG. 7), and/or the method 130 (FIG. 8), already discussed. Further, an audio I/O 1024 may be coupled to the second bus 1020 and a battery 1010 may supply power to the computing system 1000.

Note that other embodiments are contemplated. For example, instead of the point-to-point architecture of FIG. 12, a system may implement a multi-drop bus or another such communication topology. Also, the elements of FIG. 12 may alternatively be partitioned using more or fewer integrated chips than shown in FIG.
12.

Additional Notes and Examples:

Example 1 includes a computing system comprising a network controller, a processor coupled to the network controller, and a memory module coupled to the processor, the memory module including a set of instructions, which when executed by the processor, cause the processor to enable a first rank set in the memory module during a boot sequence based on a battery state and a user interface, disable a second rank set in the memory module during the boot sequence based on the battery state and the user interface, generate a mapping between a system address space and a first set of blocks in the first rank set, and exclude a second set of blocks in the first rank set from the mapping.

Example 2 includes the computing system of Example 1, further including a memory controller, wherein the instructions, when executed by the memory controller, cause the memory controller to monitor write activity relative to the first set of blocks, and disable flushing in one or more blocks in the first set of blocks based on the write activity.

Example 3 includes the computing system of Example 1, wherein the user interface includes a configuration object structure that defines one or more of the following: memory attributes, memory configuration, performance configuration, user interface element configuration, storage configuration, or hot-plug configuration.

Example 4 includes the computing system of Example 1, wherein the battery state is to indicate that a residual battery state of charge is below a normal threshold and above a reduced memory enable threshold, and wherein the mapping is associated with a low battery mapping scheme.

Example 5 includes the computing system of any one of Examples 1 to 4, wherein the instructions, when executed by the processor, further cause the processor to: collect telemetry data during a configurable minimum memory boot mode, wherein the telemetry data is associated with the first rank set and the second rank
set; detecting a change in battery state; enabling the second rank set in response to the change; and incorporating the second block set into the map in response to the change middle.Example 6 includes the computing system of example 5, wherein the change is used to indicate that the residual battery state of charge is above a normal threshold.Example 7 includes a semiconductor device comprising one or more substrates and logic coupled to the one or more substrates, wherein the logic is at least partially implemented in configurable logic or fixed function hardware logic In one or more of the one or more substrates, logic coupled to the one or more substrates is used to: during a boot sequence, enable the first bank of sets in the memory module based on the battery status and the user interface; during the boot sequence, based on the battery state and user interface while disabling the second set of blocks in the memory module; generating a mapping between the system address space and the first set of blocks in the first set; and changing the second set of blocks in the first set from Excluded from mapping.Example 8 includes the apparatus of example 7, wherein logic coupled to the one or more substrates is to: monitor write activity relative to the first set of blocks; and disable the first set of blocks based on the write activity A refresh in one or more of the blocks.Example 9 includes the apparatus of example 7, wherein the user interface includes a configuration object structure that defines one or more of the following: memory attributes, memory configuration, performance configuration, user interface element configuration, Storage configuration or hot-plug configuration.Example 10 includes the device of example 7, wherein the battery state is used to indicate that the remaining battery state of charge is below a normal threshold and above a reduced memory enable threshold, and wherein the mapping is used to associate with a low battery mapping scheme.Example 
11 includes the apparatus of any of Examples 7-10, wherein logic coupled to the one or more substrates is to collect telemetry data during a configurable minimum memory boot mode, wherein the telemetry data for associating with the first rank set and the second rank set; detecting a change in battery state; enabling the second rank set in response to the change; and incorporating the second block set into the map in response to the change.Example 12 includes the device of example 11, wherein the change is to indicate that the residual battery state of charge is above a normal threshold.Example 13 includes the apparatus of any of Examples 7-12, wherein the logic coupled to the one or more substrates includes transistor channel regions positioned within the one or more substrates.Example 14 includes at least one computer-readable storage medium including a set of instructions that, when executed by a computing system, cause the computing system to: during a boot sequence, enable a first memory module in a memory module based on a battery state and a user interface. 
a set of banks; during the boot sequence, disabling a second set of banks in the memory modules based on battery status and user interface; generating a mapping between the system address space and the first set of blocks in the first set of banks; and The second block set in a row set is excluded from the map.Example 15 includes at least one computer-readable storage medium as described in Example 14, wherein the instructions, when executed, further cause the computing system to: monitor write activity relative to the first set of blocks; and based on the write activity Disable flushing in one or more blocks in the first block set.Example 16 includes at least one computer-readable storage medium of Example 14, wherein the user interface includes a configuration object structure that defines one or more of the following: memory properties, memory configuration, performance configuration, user interface element configuration, storage configuration, or hot-plug configuration.Example 17 includes the at least one computer-readable storage medium of Example 14, wherein the battery state is used to indicate that the remaining battery state of charge is below a normal threshold and above a reduced memory enable threshold, and wherein the mapping is used to communicate with associated with the low battery mapping scheme.Example 18 includes at least one computer-readable storage medium of any of Examples 14-17, wherein the instructions, when executed, further cause the computing system to: collect telemetry data during a configurable minimum memory boot mode , wherein the telemetry data is used to correlate with the first and second tier sets; detect a change in battery state; enable the second tier set in response to the change; and set the second block set in response to the change incorporated into the map.Example 19 includes the at least one computer-readable storage medium of Example 18, wherein the change is to indicate that the residual battery state of charge is 
above a normal threshold.Example 20 includes a method comprising: during a boot sequence, enabling a first rank set in a memory module based on a battery state and a user interface; A second rank set; generating a mapping between the system address space and the first block set in the first rank set; and excluding the second block set in the first rank set from the mapping.Example 21 includes the method of Example 20, further comprising: monitoring write activity relative to the first set of blocks; and disabling flushing in one or more blocks in the first set of blocks based on the write activity.Example 22 includes the method of example 20, wherein the user interface includes a configuration object structure that defines one or more of the following: memory attributes, memory configuration, performance configuration, user interface element configuration, Storage configuration or hot-plug configuration.Example 23 includes the method of example 20, wherein the battery state is used to indicate that the residual battery state of charge is below a normal threshold and above a reduced memory enable threshold, and wherein the mapping is used to associate with a low battery mapping scheme.Example 24 includes the method of any one of Examples 20-23, further comprising collecting telemetry data during a configurable minimum memory boot mode, wherein the telemetry data is used to integrate with the first rank and the second rank associating; detecting a change in battery state; enabling the second set of rows in response to the change; and incorporating the second set of blocks into the map in response to the change.Example 25 includes the method of example 24, wherein the changing is to indicate that the residual battery state of charge is above a normal threshold.Example 26 includes an apparatus for performing the method of any of Examples 20-25.Embodiments are suitable for use with all types of semiconductor integrated circuit ("IC") chips. 
Examples of these IC chips include, but are not limited to, processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chips (SoCs), SSD/NAND controller ASICs, and the like. Additionally, in some of the drawings, signal conductors are represented by lines. Some may be different, to indicate more constituent signal paths; may have a number label, to indicate a number of constituent signal paths; and/or may have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.

Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the computing system within which the embodiment is to be implemented, i.e., such specifics should be well within the purview of one skilled in the art.

Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms "first", "second", etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

As used in this application and in the claims, a list of items joined by the term "one or more of" may mean any combination of the listed terms. For example, the phrase "one or more of A, B or C" may mean A; B; C; A and B; A and C; B and C; or A, B and C.

Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
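The battery-aware boot flow recited in Examples 1-26 above can be sketched in a few lines. This is a minimal illustration only: the function name, data layout, and threshold values are hypothetical and are not part of the claimed design.

```python
# Hypothetical thresholds (percent state of charge), for illustration only.
NORMAL_THRESHOLD = 50
REDUCED_ENABLE_THRESHOLD = 10

def boot_memory_map(battery_soc, rank_sets):
    """Sketch of Example 1: between the two thresholds, enable only the
    first rank set and map only its first block set (the "low battery
    mapping scheme"), excluding the second block set from the mapping."""
    if REDUCED_ENABLE_THRESHOLD < battery_soc < NORMAL_THRESHOLD:
        enabled = [rank_sets[0]]                 # first rank set only
        mapping = rank_sets[0]["blocks"][:1]     # first block set mapped;
        # the second block set of the first rank set is excluded here
    else:
        # Normal charge (Examples 5-6): all rank sets enabled, all
        # block sets incorporated into the mapping.
        enabled = list(rank_sets)
        mapping = [b for rs in rank_sets for b in rs["blocks"]]
    return enabled, mapping
```

A later rise in state of charge above the normal threshold (Example 5) would re-run the else branch, enabling the second rank set and incorporating its blocks into the mapping.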
A method and apparatus to reduce the amount of required memory and instruction cycles when implementing Fast Fourier Transforms (FFTs) on a computer system is described. The invention optimizes FFT software using in-place bit reversal (IPBR) implemented on a processor capable of bit reversed incrementation. Alternative embodiments implement the invention for out of place bit reversal (OOPBR) and on processors that do not support special instructions for bit reversed incrementation. The invention generates only unique bit-reversed address pairs and avoids generating self-reversed addresses. To optimize in place bit reversal, every non-self bit reversed address in the input array is generated only once, while making simple, computationally efficient increments away from the previous pair of bit reversed addresses. The address pair generator independently advances only one address in each address pair, because bit reversal of one address uniquely defines the other address.
1. An address pair generator for reordering the elements of a 2^(log2N) length input array in bit reversed order, comprising: a processor having memory capacity; generating a sequence of address pairs through said processor from said input array such that each of said address pairs generated is unique; and generating no self-reversed addresses of said input array into said sequence.

2. The address pair generator of claim 1, wherein said generator reverses the order of subsequences.

3. The address pair generator of claim 1, wherein said generator interchanges the order of at least one of the first and the second addresses of said address pairs.

4. The address pair generator of claim 1, further comprising: exchanging elements referenced by the two addresses of each said address pair in said sequence, resulting in an in place mapping of said input array elements in bit reversed order.

5. The address pair generator of any one of claims 1 to 4, wherein said unique generation of said address pairs includes no redundant address in any location of said sequence.

6. The address pair generator of any one of claims 1 to 5, wherein said sequence of address pairs is generated using bit reversed address incrementation without an alignment restriction.

7. The address pair generator of claim 1, further comprising advancing each said address pair with the steps of: storing a primary or secondary bit reversed address pair; performing a discrete set of moves to advance each of said address pairs such that each pair remains mutually bit reversed after each said move; and controlling the order of said moves such that said sequence of address pairs is formed from resultant values of said primary address pair after application of each of said moves to said primary address pair.

8. An address pair generator for reordering the elements of a 2^(log2N) length input array in bit reversed order, comprising: three sets of array element addresses, said first set having a corresponding bit reversed address in said second set, and said third set containing all of the self reversed addresses from said array element addresses; advancing through said first set elements to define the first address of each said address pair; defining the second address of each said address pair using the appropriate complementary bit reversed increment; and a sequence of unique address pairs generated such that no said self-reversed address appears in said sequence.

9. A method for address pair generation for reordering the elements of a 2^(log2N) length input array in bit reversed order, comprising: generating a sequence of address pairs from said input array such that each of said address pairs is unique; and generating no self-reversed addresses of said input array into said sequence.

10. A method for address pair generation for reordering the elements of a 2^(log2N) length input array in bit reversed order, comprising: organizing said array element addresses into three sets, said first set having a corresponding bit reversed address in said second set, and said third set containing all of the self reversed addresses from said array element addresses; advancing through each of said first set elements to define the first address of each said address pair; defining the second address of each said address pair using the appropriate complementary bit reversed increment; and generating a sequence of unique address pairs such that no said self-reversed address appears in said bit reversed address sequence.

11. A method for address pair generation using out of place bit reversal for reordering the elements of a 2^(log2N) length input array in bit reversed order, comprising: removing the start address from said generated address pair sequence; generating self-reversed address pairs with a second address generator; and transferring data only once for each self reversed address pair.

12. A method for creating an address pair generator for reordering the elements of a 2^(log2N) length input array in bit reversed order, comprising: plotting a plurality of first and second addresses of sequential address pair values onto a graph, wherein a first axis of coordinates represents the most significant bits and a second axis of coordinates represents the least significant bits for each address of said address pair, and wherein each graph coordinate represents a unique address of said elements of said input array; and defining said address pair generator by defining a path that systematically steps through a plurality of said coordinates.
FIELD OF THE INVENTION

The present invention relates to implementing Fast Fourier Transforms (FFTs) on a computer system and, more particularly, to optimizing FFT software using in-place bit reversal (IPBR) implemented on a processor capable of bit reversed incrementation, and to optimizing FFT software using out of place bit reversal (OOPBR) on processors that do not support special instructions for bit reversed incrementation.

BACKGROUND OF THE INVENTION

Algorithms that perform discrete transforms such as Fast Fourier Transforms (FFTs) are well known. The Fourier transform is a mathematical operator for converting a signal from a time-domain representation to a frequency-domain representation. The inverse Fourier transform is an operator for converting a signal from a frequency-domain representation to a time-domain representation. The Discrete Fourier Transform (DFT) may be viewed as a special case of the continuous form of the Fourier transform. The DFT determines a set of spectrum amplitudes and phases, or coefficients, from a time-varying signal defined by samples taken at discrete time intervals.

As is well known, in the mid-1960s techniques were developed for more rapid computation of the discrete Fourier transform. These techniques became known as the fast Fourier transform (FFT), first described in a paper by J.W. Cooley and J.W. Tukey, entitled "An Algorithm for the Machine Calculation of Complex Fourier Series," Mathematics of Computation (1965), Vol. 19, No. 90, pp. 297-301. Some patents in the field of processing FFTs include U.S. Patent No. 3,673,399 to Hancke et al for FFT PROCESSOR WITH UNIQUE ADDRESSING; U.S. Patent No. 6,035,313 to Marchant for a MEMORY ADDRESS GENERATOR FOR AN FFT; U.S. Patent No. 6,247,034 B1 to Nakai et al for a FAST FOURIER TRANSFORMING APPARATUS AND METHOD, VARIABLE BIT REVERSE CIRCUIT, INVERSE FAST FOURIER TRANSFORMING APPARATUS AND METHOD, AND OFDM RECEIVER AND TRANSMITTER; U.S. Patent No.
4,823,297 to Evans for a DIGIT-REVERSAL METHOD AND APPARATUS FOR COMPUTER TRANSFORMS; U.S. Patent No. 5,329,474 to Yamada for an ELEMENT REARRANGEMENT METHOD FOR FAST FOURIER TRANSFORM; U.S. Patent No. 5,473,556 to Aguilar et al for DIGIT REVERSE FOR MIXED RADIX FFT; and U.S. Patent No. 4,977,533 to Miyabayashi et al for a METHOD FOR OPERATING AN FFT PROCESSOR.

In performing a fast Fourier transform of the type known as a radix-two decimation-in-time FFT, the size of the transform is successively halved at each stage. In the illustrative computation described in Figure 2, a 32-point FFT is split into a pair of 16-point FFTs, which are in turn split into four 8-point FFTs, then eight 4-point FFTs, and finally sixteen 2-point FFTs. The resulting computation for a 32-point FFT is shown in the signal flow graph of Figure 2. The quantities on the left-hand side of the signal flow graph, ranging from x(0) to x(31), are the sampled inputs to the FFT, while the signals appearing at the right-hand side of the signal flow graph, numbered 0 through 31, are the resulting FFT coefficients. The signal flow graph illustrates that there are five passes or phases of operation, derived from the relationship that the number 32 is two to the fifth power. The convention used in the signal flow graph is that an arrowhead represents multiplication by the complex quantity Wk adjacent to the arrowhead. The small circles represent addition or subtraction as indicated in Figure 2a. If the inputs to each of the butterfly computational modules shown in Figure 2a are indicated by signal names A and B, and the outputs are indicated by signal names C and D, then the computations performed in the butterfly module are: C = A+BW and D = A-BW.
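The butterfly relations above can be expressed directly with complex arithmetic. The following is a sketch, not the patent's hardware signal flow; the function names are illustrative.

```python
import cmath

def twiddle(k, n):
    """Twiddle factor W_N^k = exp(-2*pi*j*k/N), a unit-length phasor
    whose angle is an integral multiple of 2*pi/N."""
    return cmath.exp(-2j * cmath.pi * k / n)

def butterfly(a, b, w):
    """Radix-2 decimation-in-time butterfly: C = A + B*W, D = A - B*W."""
    bw = b * w
    return a + bw, a - bw
```

For k = 0 the twiddle is 1, so the butterfly reduces to a simple sum and difference of its inputs, as in the first pass of the Figure 2 signal flow graph.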
The W values are usually referred to as "twiddle factors" and represent phasors of unit length and an angular orientation that is an integral multiple of 2π/32. An aspect of FFT computation is that the results of each butterfly computation may be stored back in memory in the same locations from which the inputs to the butterfly were obtained. More specifically, the C and D outputs of each butterfly may be stored back in the same locations as the A and B inputs of the same butterfly. This FFT computation is referred to as an "in-place" algorithm. Most discrete transforms are executed "in-place" to conserve memory, which in turn reduces system size, power consumption, and cost, and frees memory for other tasks. For such "in-place" FFTs, the reordering required to counteract the effect of the transform decompositions is achieved by a particular permutation of the elements of the data sequence. Bit-reversed address mapping is commonly used in performing radix-2 FFTs. When the radix-2 FFT is computed, data must be rearranged in bit-reversed order. If the FFT is performed entirely by software, the FFT process uses an algorithm to pre-place data in memory in bit-reversed order prior to executing the butterfly computations. Obtaining FFT efficiency is a high priority in the computer processor industry. The FFT algorithm has high intrinsic value and is widely used. The instruction cycle requirement of custom optimized FFT software is the accepted benchmark standard for measuring a processor's computational efficiency. For a specific type of FFT (e.g., in-place, using relocatable data memory, single precision, radix 2, complex, 256 point, unconditional ½ scaling per butterfly, etc.), the number of FFTs/sec executed is a more accurate relative measure of a processor's computational power than MIPs (millions of instructions per second).
FFT software requiring fewer resources enhances both the real and projected capabilities of the processor.Because an optimized FFT computation includes bit reversed addressing, many DSPs (Digital Signal Processors) include customized instructions to facilitate an efficient implementation of bit reversed addressing. Typically, this is done by special instructions that allow address registers to be incremented so that carry (or borrow) bits propagate toward less significant bits (backward). For normal addition carry bits must propagate toward more significant bits. The present invention is primarily intended to optimize FFT software implemented on a processor capable of bit-reversed address register incrementing in the described manner. However, the invention also has applications on processors that lack this capability.Reference is made to Table I, listing a binary address, contents of memory before bit reversed ordering, the corresponding bit reversed binary addresses, and contents of memory after bit reversed ordering. Assume an input array is stored in 2^(log2N+M) contiguous words of memory, beginning at start address S_in. The array has 2^log2N elements and each element is stored in 2^M contiguous words of data memory. For example, four words of contiguous memory would accommodate two words of precision for both the real and imaginary part of complex input data elements. An arbitrary address for data memory containing the input array can be expressed in the form, AR1 = S_in+[B_(log2N-1)*2^(log2N-1) + B_(log2N-2)*2^(log2N-2) +... B_0*2^0]*2^M+P(each binary B_k coefficient can be zero or one, and P=0,1,2,...(2^M)-1).The corresponding bit reversed address is obtained by reversing the order of the B_k values: AR2 = bit_rev(AR1) = S_out+[B_0*2^(log2N-1) + B_1*2^ (log2N-2) +... B_(log2N-1)*2^0]*2^M+P.An array has been "bit reversed" after all input data is copied from its original location at address AR1, to its new location at address AR2=bit_rev(AR1). 
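The AR1/AR2 relationship above can be sketched in software. The helper below reverses the log2N address bits explicitly (rather than using a DSP's bit-reversed increment instruction); the function names are illustrative.

```python
def bit_rev(index, log2n):
    """Reverse the low log2n bits of index (the B_k coefficients in the
    AR1 expression above)."""
    rev = 0
    for _ in range(log2n):
        rev = (rev << 1) | (index & 1)
        index >>= 1
    return rev

def ar2_from_ar1(ar1, s_in, s_out, log2n, m=0):
    """Map address AR1 to AR2 = bit_rev(AR1), keeping the word offset P
    within each 2^m-word element intact, per the formulas above."""
    offset = ar1 - s_in
    element, p = divmod(offset, 1 << m)          # element index and P
    return s_out + (bit_rev(element, log2n) << m) + p
```

With log2N = 3 and M = S_in = S_out = 0, element address 001 maps to 100, matching Table I.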
Sequential output array elements are rearranged in bit reversed order relative to the input array. Table I illustrates a bit reversed array for the case log2N = 3, M = S_in = S_out = 0. The sequential addresses in the bit reversed address column are obtained by incrementing the prior address with 100 binary and propagating any resulting carry bit backwards. Self-reversed addresses occur when AR1 = bit_rev(AR1). The fourth column in Table I identifies addresses AR1 whose bit reversed counterparts duplicate AR1 addresses, either through self-reversal, such as binary AR1 = 1,1,1, or because a bit reversed sequence address equals some AR1 address other than a self-reversed one, as when bit reversed binary address 0,0,1 equals binary AR1 address 1,0,0. For typical processors and software, the output buffer must be "aligned", i.e., S_out (or S_in) must be a multiple of 2^(log2N+M) for bit reversed address register incrementation to work properly.

Out of place bit reversal (OOPBR) refers to the technique of bit reversing an input data array so that the output data array falls elsewhere in data memory, i.e., S_in ≠ S_out, whereas in place bit reversal (IPBR) refers to the technique of re-ordering elements of an input data array in bit reversed order so that the output array overwrites the input array, i.e., S_in = S_out. For some applications, OOPBR may be advantageous if input data is located in slower, hence cheaper, memory, and faster "scratch" or "volatile" memory is available to generate the bit reversed output array. The subsequent FFT operations on the bit reversed array then exploit the faster memory. In this case the cycles required may exceed the benchmark OOPBR FFT cycles, because the digital signal processor (DSP) manufacturer will measure the benchmark case with both the input and output OOPBR arrays in the fastest memory.
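The OOPBR technique itself amounts to a single copying pass: each element moves exactly once to its bit-reversed destination, with no swap logic. The following sketch assumes one-word elements (M = 0) and illustrative names.

```python
def oopbr(src, log2n):
    """Out-of-place bit reversal: write src[i] to dst[bit_rev(i)].
    Each element is transferred exactly once."""
    def bit_rev(i, bits):
        r = 0
        for _ in range(bits):
            r = (r << 1) | (i & 1)
            i >>= 1
        return r

    dst = [None] * len(src)
    for i, value in enumerate(src):
        dst[bit_rev(i, log2n)] = value
    return dst
```

The simplicity of this pass is why OOPBR is typically much cheaper in cycles than conventional IPBR, at the cost of a second buffer.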
An FFT using OOPBR may have a hidden cycle penalty beyond the bit reversal itself when the output is eventually copied back to the location of the input array. Computational processes that use more of the available scratch memory than necessary can also lead to future problems when converting to an operating system that permits multiple computational processes to interrupt each other.

For other applications, the input data for the FFT is already located in fast data memory. For example, the input data may arrive as the result of many computations, so that for adequate optimization of MIPs the FFT input array is already in fast memory. In that event, OOPBR doubles the amount of fast data memory required by the entire FFT. This is because the rest of the FFT embodies an intrinsically in place algorithm, requiring no data memory beyond the input array itself. If the cycles required for IPBR can be made competitive with OOPBR, the additional data memory requirement of OOPBR cannot be justified for many applications.

The second and third columns of Table II illustrate the same sequence of address pairs given in columns one and three of Table I. The conventional IPBR address generator yields these address pairs for N=8. The fourth column indicates which address pairs are needed for IPBR, i.e., unique address pairs referencing data that needs to be swapped.
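The classification shown in Table II can be reproduced programmatically. This sketch (with illustrative names) labels each address pair as self-reversed, redundant with an earlier pair, or needed for IPBR:

```python
def classify_pairs(log2n):
    """Label each (address, bit-reversed address) pair as in Table II."""
    def bit_rev(i, bits):
        r = 0
        for _ in range(bits):
            r = (r << 1) | (i & 1)
            i >>= 1
        return r

    labels = []
    for a in range(1 << log2n):
        b = bit_rev(a, log2n)
        if a == b:
            labels.append((a, b, "self-reversed"))
        elif a > b:
            labels.append((a, b, "redundant"))   # (b, a) appeared earlier
        else:
            labels.append((a, b, "needed"))      # unique pair to swap
    return labels
```

For log2N = 3 only the pairs (001, 100) and (011, 110) come out "needed", matching address pair numbers two and four of Table II.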
The fourth column of Table II also illustrates that, for an array of eight elements, the address pair generator conventionally used for IPBR produces useful address pairs only for address pair numbers two and four, i.e., two out of eight bit reversed pairs.

TABLE II
Pair No.   Address   Bit Reversed Address   Needed for IPBR?
1          000       000                    No, self-reversed
2          001       100                    YES
3          010       010                    No, self-reversed
4          011       110                    YES
5          100       001                    No, redundant with address pair 2
6          101       101                    No, self-reversed
7          110       011                    No, redundant with address pair 4
8          111       111                    No, self-reversed

A flawed IPBR algorithm is now described to illustrate the problems encountered in attempting to optimize IPBR. The first address register is initialized to S_in, and on each iteration this first address register is advanced linearly to reference the next array element in natural order. A second address register is also initialized to S_in and is incremented each iteration in a bit reversed manner to obtain the corresponding bit reversed version of the first address. Thus a new pair of addresses is generated each iteration, as illustrated by columns 2 and 3 of Table II. After each bit reversed address pair is generated, the contents of memory referenced by the first and second address registers are exchanged. This technique will work for OOPBR. But for IPBR, all the self-reversed address contents are needlessly exchanged once, and all the non-self-reversed address contents are erroneously exchanged twice. The first address register at some point references every element in the array, so if the address pair (A, B) is generated, (B, A) is also generated somewhere in the sequence of address pairs. This flawed IPBR approach exchanges the data referenced by any non-self-reversed address and its bit reversed complement not once but twice, resulting in an output array identical to the input array.

The conventional IPBR algorithm in the prior art is a modification of this flawed approach. The conventional IPBR algorithm generates address pairs in a manner identical to the described flawed algorithm.
However, instead of always swapping the contents referenced by each address pair that is generated, the swap is executed only if the address generated by linear incrementing is less than the address produced by bit-reversed incrementing. Note that the criterion of the first address being less than the second identifies the first occurrence of each useful address pair for IPBR in Table II. This condition for swapping eliminates transferring data from self-reversed addresses and prevents swapping for one of each redundant pair of non-self-reversed addresses. Implementing the conditional swap typically requires transferring both address registers into accumulators, subtracting, and conditionally branching. For this reason, typical IPBR implementations require two to ten times as many instruction cycles as OOPBR implementations. The conventional IPBR method is inefficient because it relies on an address pair generator that yields extraneous address pairs.

SUMMARY OF THE INVENTION

The present invention is a method and apparatus to optimize in place bit reversal (IPBR) in computer systems. More particularly, the present invention reduces the amount of required memory and instruction cycles when implementing Fast Fourier Transforms (FFTs) on a computer system. The preferred embodiment optimizes FFT software using IPBR implemented on a processor capable of bit reversed incrementation, such as the Texas Instruments (TI) C54x digital signal processor (DSP). However, alternative embodiments implement the invention for out of place bit reversal (OOPBR) and on processors that do not support special instructions for bit reversed incrementation.

The present invention is an address pair generator that yields every non-self-bit-reversed address in the input array only once, thereby avoiding production of extraneous address pairs.
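For reference, the conventional conditional-swap IPBR described above can be sketched as follows. This is a plain Python illustration, not the TI C54x register-level implementation; the per-iteration comparison inside the loop is exactly the overhead the invention's address pair generator avoids.

```python
def conventional_ipbr(data, log2n):
    """In-place bit reversal by generating all N address pairs and
    swapping only when the linear address is less than its bit-reversed
    counterpart. Correct, but more than half the iterations do no
    useful work (self-reversed and redundant pairs are skipped)."""
    def bit_rev(i, bits):
        r = 0
        for _ in range(bits):
            r = (r << 1) | (i & 1)
            i >>= 1
        return r

    n = 1 << log2n
    for a in range(n):
        b = bit_rev(a, log2n)
        if a < b:                       # the costly conditional test
            data[a], data[b] = data[b], data[a]
    return data
```

Dropping the `if a < b` test reproduces the flawed algorithm: every non-self-reversed pair is swapped twice and the array is left unchanged.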
To optimize IPBR, every non-self-bit-reversed address in the input array needs to be generated only once, while making simple, computationally efficient increments, or moves, away from the previous pair of bit reversed addresses. The address pair generator of the present invention independently determines, or moves, only one address in each address pair. For any address pair, bit reversal of one address uniquely defines the other address.The present invention facilitates the identification of computationally efficient patterns for sequentially generating a unique set of bit reversed address pairs. Five exemplary new IPBR methods and modifications of these methods are presented. The size of the array to be bit reversed is 2^(log2N). For use on a DSP capable of bit reversed incrementation of address registers but having only one address increment register, optimized program code implementing Method 1 requires minor changes to work for odd and even log2N. For processors with more than one address increment register available, optimized code implementing Method 1 works for all values of log2N. Method 2 further reduces cycles for odd log2N. Method 3 reduces cycles for the even log2N arrays relative to Method 1. Method 4 is similar to Method 1, however Method 4 does not pose any problem for processors with only one address increment register. Method 1 is unique in that it reduces the alignment requirement. Method 5 extends Method 3 to work for odd log2N.Methods 1m, 2m, 3m, and 4m are modifications of Method 1, 2, 3, and 4 respectively. All these modified Methods require only two address registers. The cycle count for Method 2m and Method 2 will be very close, if not identical. The other modified methods require fewer address registers, but increase the number of nested inner loops. 
Thus Methods 1m, 3m and 4m may reduce or increase cycles relative to their un-modified counterparts, depending on the processor.An application of the present invention is for use as IPBR software that removes the typical input buffer alignment restriction for bit reversed addressing. This application is important because the rest of an FFT can be implemented without any buffer alignment restriction. By giving up some of the cycles this invention saves, the requirement for input buffer alignment is completely removed. Efficient removal of the alignment requirement may require inner loops that always bit reverse increment the same element of the address pair. This can make Methods 1 and 4 the optimal choice for IPBR without an alignment restriction. Method 1 is unique in that even without alignment removal, its inherent alignment requirement is relaxed to 2^(log2N/2 -1) for even log2N and 2^((log2N-1)/2) for odd log2N. All other methods have an inherent 2^(log2N) alignment requirement.The invention also reduces OOPBR cycles for processors that do not support bit reversed address register incrementation and require many cycles to generate a bit reversed address. 
This OOPBR method removes the start address offset from the address pair sequence generated, and consequently this OOPBR method need not impose any alignment constraints on the input or output buffer.

BRIEF DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the invention are discussed hereinafter in reference to the drawings, in which:
Figure 1 illustrates a decisional flowchart to choose a method of IPBR;
Figure 2 is an illustrative signal flow graph of a fast Fourier transform in the prior art;
Figure 2a is an illustration of computations made in Figure 2;
Figure 3 is an illustrative graph of a conventional IPBR address generation;
Figure 4 is an illustrative graph of Method 1 for IPBR address generation;
Figure 5 is an illustrative graph of Method 1 for IPBR address generation;
Figure 6 is an illustrative graph of the Method 4 IPBR address generation scheme for N=64 addresses;
Figure 7 is an illustrative graph of the Method 3 IPBR address generator;
Figure 8 is an illustrative graph of Method 1 IPBR address generation for odd log2N;
Figure 9 is an illustrative graph of Method 4 IPBR address generation for odd log2N;
Figure 10 is an illustrative graph of Method 2 IPBR address generation for odd log2N;
Figure 11 is an illustrative graph of Method 5 IPBR address generation;
Figure 12 is an illustrative graph of Method 2m IPBR address generation;
Figure 13 is an illustrative graph of Method 1m IPBR address generation; and
Figure 14 is an illustrative graph of Method 4m IPBR address generation.

DETAILED DESCRIPTION OF THE INVENTION

The preferred and alternative exemplary embodiments of the present invention include methods of in place bit reversal (IPBR) that are computationally efficient patterns to generate sequential address pairs for computing fast Fourier transforms in a processor. To decide which of the methods of the present invention is most efficient for a specific application, reference is made to the decisional flowchart of Figure 1.
Assume an input array 10 is stored in 2^(log2N+M) contiguous words of memory, beginning at start address S_in. The array has 2^log2N elements and each element is stored in 2^M contiguous words of data memory. For example, four words of contiguous memory would accommodate two words of precision for both the real and imaginary part of complex input data elements.

In the present invention, five new IPBR address generators for mapping arrays in bit reversed order are disclosed. Methods 1m, 2m, 3m, and 4m are modifications of Methods 1, 2, 3, and 4, respectively. Many of the methods and address generators presented herein have been implemented on a TI C54x digital signal processor (DSP) manufactured by Texas Instruments. New methods for out of place bit reversal (OOPBR) address generation are also presented.

The present invention discloses methods and devices for organizing array addresses into three sets, A, B, and C, to facilitate the creation of more optimal IPBR address pair generation. Every address in set A has a corresponding bit reversed address in set B. Set C contains all the self reversed addresses. Once these sets are defined, the new address pair generator systematically advances through every element of set A to define the first address of each address pair. Since only one address of each pair is independently defined, by using the appropriate complementary bit reversed advance, the second address increment is also defined. The three sets of addresses are defined so that simple and efficient means exist for systematically stepping through every address in set A.

The method in the present invention for dividing addresses into sets is as follows. For an array of length 2^(log2N), let Q equal the truncated integral quotient of log2N/2. Each array element address in binary form is divided into its Q most significant bits (MSBs), denoted by "x", and its Q least significant bits (LSBs), denoted by "y".
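The Q-bit MSB/LSB split and the resulting three sets can be illustrated with a small Python sketch. The function names are illustrative, and which inequality is labeled set A versus set B is an assumption for the sketch; the essential property checked below is that A and B are bit-reversal images of each other and that C holds exactly the self-reversed addresses:

```python
def bit_rev(a, nbits):
    """Reverse the low nbits of address a."""
    r = 0
    for _ in range(nbits):
        r = (r << 1) | (a & 1)
        a >>= 1
    return r

def classify(log2n):
    """Split each address into its Q MSBs (x) and Q LSBs (y) and sort
    it into three sets by comparing bit_rev(y) with x (even log2n
    case).  Which inequality is called A is a labeling choice."""
    q = log2n // 2
    A, B, C = [], [], []
    for a in range(1 << log2n):
        x, y = a >> q, a & ((1 << q) - 1)   # Q MSBs, Q LSBs
        if bit_rev(y, q) < x:
            A.append(a)
        elif bit_rev(y, q) > x:
            B.append(a)
        else:
            C.append(a)
    return A, B, C

A, B, C = classify(6)
# A and B are bit-reversal images of each other; C is self-reversed.
assert sorted(bit_rev(a, 6) for a in A) == sorted(B)
assert all(bit_rev(c, 6) == c for c in C)
print(len(A), len(B), len(C))   # 28 28 8
```

For a 64-element array this yields 28 pair-generating addresses in each of A and B, and 8 self-reversed addresses that an optimized generator never needs to visit.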
For even log2N, there are two ways to uniquely define the sets A, B, and C. One way is to divide up the addresses with the bit reversed Q LSBs greater than, less than, or equal to the Q MSBs. The second way is to divide up the addresses with the Q LSBs greater than, less than, or equal to the bit reversed Q MSBs.

For odd log2N, there are three ways to divide the addresses into the three sets listed in Table III. After discarding the middle (Q+1)th bit, the first two ways are the same as in the even log2N case. The third way is to reverse the inequality in the relationship defining a set, according to whether the (Q+1)th middle bit of the address is zero or one. For the purposes of graphically visualizing all array element addresses and recognizing an easy way to step through set A, the middle (Q+1)th bit is appropriately appended to either the x or y axis data. Here it is prefixed to the vertical axis data. For odd log2N, IPBR Method 1 uses the first way of defining the three sets, Method 3 uses the second way, and Methods 2 and 5 use the third way.

For Q = log2N >> 1 (the truncated integral quotient of log2N/2), let x be the Q MSBs and y be the Q LSBs. Let z be the middle (Q+1)th bit for odd log2N. For Table III, the bit_rev() operator reverses Q bits.

TABLE III.
Even and odd log2N:
  First way:   bit_rev(y)>x | bit_rev(y)<x | bit_rev(y)=x
  Second way:  y>bit_rev(x) | y<bit_rev(x) | y=bit_rev(x)
Odd log2N only:
  Third way:   bit_rev(y)>x if z=0, bit_rev(y)<x if z=1 | bit_rev(y)<x if z=0, bit_rev(y)>x if z=1 | bit_rev(y)=x
               y>bit_rev(x) if z=0, y<bit_rev(x) if z=1 | y<bit_rev(x) if z=0, y>bit_rev(x) if z=1 | y=bit_rev(x)

The "filtered" conventional IPBR address pair generator, defined as the conventional IPBR address generator after extraneous pair removal, is segregated using the first way. The "filter" accepts only address pairs with first address, given by "a", that satisfy a<bit_rev(a). For even log2N, define xy as the number with MSBs equal to x and LSBs equal to y.
Then a<bit_rev(a) implies xy<bit_rev(xy) and thus x<bit_rev(y) so the bit reversed Q LSBs are greater than the Q MSBs. Thus, this invention includes a criterion equivalent to the conventional IPBR criterion, but uses a more useful form of this criterion earlier in the conceptual process to avoid later extraneous pair removal.An important application of the present invention is in IPBR address generators and methods that remove the typical input buffer alignment restriction for bit reversed addressing. This is important because the remaining FFT process can be implemented without any buffer alignment restriction. By contributing some of the cycles that are conserved by the present invention, software may be added that completely removes the requirement for input buffer alignment. Efficient removal of the alignment requirement may require inner loops that always bit reverse increment the same element of the address pair. This can make Methods 1 and 4 the optimal choice for IPBR without an alignment restriction. Method 1 is unique in that even without being modified for alignment removal, its inherent alignment requirement is relaxed to 2^(log2N/2 -1) for even log2N and 2^((log2N-1)/2) for odd log2N. All other methods have a 2^(log2N) alignment requirement.Referring to Figure 1, the figure is a decisional flowchart providing selections to implement specific methods for address pair generators of the present invention based upon certain information. The address generators of the present invention can perform without an alignment restriction or with merely a relaxed alignment restriction. For performing address pair generation with only a reduction of the alignment constraint 10, Methods 1 and 1m are appropriate 12. 
If an elimination, instead of reduction, of the alignment constraint 14 is preferred, then Methods 1, 1m, 4, and 4m are appropriate 16.The address generator of the present invention generates bit reversed addresses for an FFT with a size log2N input array 18 for use on a digital signal processor or other processing means capable of performing FFT operations. When only two address registers are available on a processor 20, then IPBR Methods 1, 3, 4, and 5 should be avoided 22. If only one address register is available on a processor 24, then only Methods 1 and 1m should be avoided 26 in processing an FFT. When the operations must work for both an even and odd log2N input array 28, Methods 2, 2m, 3, and 3m should not be used. However, if the input array is only an even log2N or an odd log2N, specific methods can be chosen for optimal reduction of MIPS while processing. To optimize an odd log2N input array 32, Method 2 is the most efficient method in most operations 34. To optimize an even log2N input array 36, Method 3 is the most efficient method in most operations 38.To create the IPBR generators and their modified versions of the present invention, x, y plots are used to plan the path to follow with a method prior to defining the method itself. Specific cases for IPBR methods of the present invention and the conventional method are plotted in Figures 3-14. For all plots, M=S_in=S_out=0. Each IPBR method generates a sequence of address pairs. The first address of an address pair is represented by AR1 and the second address by AR2. Here AR1=bit_rev(AR2) and AR2=bit_rev(AR1). Sequential AR1 and sequential AR2 values are shown in the plots. Each square in the plots, formed by the x and y axis grid, represents the address of a unique element in the input array. In other words, on the graphs every array address is represented by one square. 
For log2N = 6, the x axis value gives the three most significant bits (MSBs) of an address, and the y axis value gives the three least significant bits (LSBs) of a six bit address. Address coordinates are offset by (½, ½) to force the plots into the middle of a square made by the plot's grid. The address corresponds to the square's lower left corner coordinates. The first address of each bit reversed pair (the AR1s) is graphed using a small circle. The second address of each address pair (the AR2s) is graphed using a small square. Sequential AR1 address values are connected with a dashed line connecting the circles. Sequential AR2 address values are connected with a solid line connecting the small squares.

Figure 3 illustrates the sequence of addresses generated using the conventional IPBR method found in the prior art. For the conventional method, the initial address pair is graphed at AR1=0=(0,0) and AR2=0=(0,0). The second address pair is at AR1=(0,1) and AR2=(4,0). For this second address pair, note ("b" indicates binary): AR2=bit_rev(AR1)=bit_rev(1)=bit_rev[(0,1)]=bit_rev(000 001 b)=100 000 b=(4,0)=32.

Note that both a circle and a square symbol land on every grid square in Figure 3. For the conventional method, the address generation scheme "lands" on every square twice. For any array element address x, the address pair AR1=x, AR2=bit_rev(x) occurs in the sequence of address pairs, as well as AR1=bit_rev(x), AR2=x. If one swapped the contents of memory referenced in the bit reversed pair of addresses every time a new pair of addresses is generated, then the data referenced by these redundant bit-reversed pairs would be swapped twice, and the data would end up back where it started.
The conventional address generation scheme has three computational penalties: (1) because every non-self-bit-reversed address is generated twice, twice as many iterations are needed; (2) testing and conditional branching are required to break the degeneracy and swap only once per address; and (3) the self-bit-reversed addresses are also generated by the sequence of address pairs. For example, the address (5,5) corresponds to binary address 101 101 b, which remains the same after bit reversal. Since the memory referenced by a self-bit-reversed address does not need to be exchanged with itself, additional cycles are wasted when the IPBR address generation scheme generates self-bit-reversed addresses.

The five IPBR methods of the present invention are defined by sequential increments or "moves" of the two "bit reversed pairs" (AR1, AR2) and (AR3, AR4). For two addresses, A and B, if B=bit_rev(A) then it follows that A=bit_rev(B), and (A,B) form a "bit reversed pair". The array size is 2^log2N. Variable "Q" is defined as the truncated integral quotient of log2N/2, i.e., Q=(log2N-1)/2 for odd log2N and Q=log2N/2 for even log2N; variable "R" is defined as the remainder of log2N/2. Address increments are I0=2^(log2N-1), I1=2^(log2N-Q-1), I2=2^Q, I3=2^(log2N-Q), I4=2^(Q-1), and I5=2^(log2N-2). The address increments form four bit reversed pairs, i.e., (1,I0), (I1,I2), (I3,I4), and (2,I5). Bit reversed increments are indicated by a suffix of B. For the bit_rev operator that reverses the order of bits: ARx=ARx+IyB=bit_rev[bit_rev(ARx)+bit_rev(Iy)].

An exemplary preferred embodiment of the present invention is Method 1. Method 1 may be implemented for both odd and even log2N input array sizes.
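The bit reversed increment operator just defined, ARx=ARx+IyB=bit_rev[bit_rev(ARx)+bit_rev(Iy)], can be modeled directly in Python. This is a sketch; a DSP with bit-reversed addressing performs the same operation with reversed-carry hardware rather than three explicit reversals:

```python
def bit_rev(a, nbits):
    """Reverse the low nbits of address a."""
    r = 0
    for _ in range(nbits):
        r = (r << 1) | (a & 1)
        a >>= 1
    return r

def bitrev_add(ar, inc, nbits):
    """ARx = ARx + IncB: addition in which the carry propagates from
    the MSB toward the LSB, modeled with the identity from the text:
        ARx + IncB = bit_rev(bit_rev(ARx) + bit_rev(Inc))."""
    mask = (1 << nbits) - 1
    return bit_rev((bit_rev(ar, nbits) + bit_rev(inc, nbits)) & mask, nbits)

# Repeatedly bit-rev adding I0 = 2^(log2N-1) = N/2 to zero steps through
# the whole array in bit-reversed order, since bit_rev(N/2) = 1.
log2n, seq, ar = 4, [], 0
for _ in range(1 << log2n):
    seq.append(ar)
    ar = bitrev_add(ar, 1 << (log2n - 1), log2n)
print(seq)   # [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]
```

The printed sequence is exactly bit_rev(0), bit_rev(1), ..., bit_rev(15), which is why a single bit-reversed increment register suffices to walk an array in bit-reversed order.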
This address generation scheme generates only unique address pairs referencing data that needs to be swapped for IPBR, thereby eliminating the testing and conditional branching found in methods of the prior art and eliminating the waste of additional instruction cycles due to IPBR address generation for redundant and self-reversed addresses.

Figure 4 illustrates the result of Method 1 for generating bit reversed address pairs. The first pair of bit reversed addresses is AR1=(x1,y1)=(1,0)=[001 000b]=8 and AR2=(x2,y2)=(0,4)=[000 100b]=4. Thus in Figure 4, (1,0) initiates the sequence of first addresses in each sequential address pair generated, and (0,4) initiates the sequence of second addresses. For each address pair, the second address is the first address bit-reversed. Note that every square (unique address) is generated only once, and no self-bit-reversed addresses are generated. For example, the address generation scheme never lands on the (5,5) square of address 101 101b, which thus has no circle or square symbol in Figure 4. Because the address generation scheme generates only unique address pairs referencing data that needs to be swapped for IPBR, the testing and conditional branching are eliminated.

To understand the concept behind Method 1 and subsequent methods of the present invention, it is helpful to bit reverse the y-axis data of Figure 4, as illustrated in Figure 5. After this mapping, the self-reversed addresses all lie on a diagonal line. The plot is split by an imaginary diagonal line from (0,0) to (8,8). This divide splits the graph area into two triangles: a top and a bottom triangle. The bit reversed address of every square in the upper triangle is located in the lower triangle.
By keeping AR1 in the lower triangle, AR2 in the upper triangle, and systematically stepping through each square (or address), Method 1 avoids all redundant pairs and self-reversed addresses.

All the IPBR Methods of the present invention can be modified in three different ways by replacing part or all of the address pair sequence with a "topologically similar" sequence. Variations of the IPBR Methods include 1) x and y axis inversions of the original sequence, 2) reversing the order of the original subsequences, and 3) replacing an (A,B) address pair with a (B,A) address pair for arbitrary numbers of terms in sequences.

Method 1 uses the first way of defining the three sets, so the y axis data is bit reversed. Set A contains all the array element addresses with bit_rev(y)<x, Set B contains addresses with bit_rev(y)>x, and for Set C, bit_rev(y)=x. Methods 3 and 4 use the "second" way, bit reversing the x axis data. Set A contains all the addresses with y>bit_rev(x), Set B contains addresses with y<bit_rev(x), and for Set C, y=bit_rev(x). The general technique is illustrated by Fig. 5 for Method 1. For Method 1, Set A is the lower triangle, Set B the upper triangle, and Set C elements lie along the diagonal.

Any method can be altered by interchanging the order of the first and second addresses of an address pair, which is the third way of varying a method of the present invention. Such exchanges may be favorable for reducing program code or cycles but should not be thought of as producing a different address pair generator that is not included in this invention. The only difference is that in alternating subsequences, the choice of first and second address is exchanged. Such an exchange does not result in a new address pair sequence, and the result is therefore an IPBR address pair generator within the scope of the present invention. There are many other methods, not explicitly defined herein, for systematically stepping through set A.
For example, the generator could proceed through set A using horizontal lines instead of vertical lines as in Method 1, which advances along vertical lines whenever possible in the lower triangle of Fig. 5.

To perform in place bit reversal, Method 1 uses three "moves" defined in Table IV. For odd log2N, I2=I1. For even log2N, I2=I1+I1. This results in different optimized code for the even and odd log2N cases on processors with only one address increment register.

TABLE IV.
Move 1: AR1=AR1+I1B, AR2=AR2+I2
Move 2: AR1=AR3,     AR2=AR4
Move 3: AR3=AR3+I3,  AR4=AR4+I4B

Method 1 is implemented with the following steps. To implement the operations of Method 1, addresses AR3=S_in, AR4=S_in are initialized. Method 1 iterates from k=(R+1) to (R+1)*((2^Q)-1) in steps of (R+1); performs Move 3; performs Move 2; iterates from j=1 to k-1 in steps of 1; performs Move 1; and then ends the iterations of the j loop and then ends the iterations of the k loop. The address pair sequence generated for Method 1 is defined by all the values that AR1, AR2 take on after moves that affect these values (not Move 3).

Method 4, illustrated in the graph in Figure 6, has initial pair AR1=1, AR2=32. Note the x axis data is bit reversed, unlike Method 1 in Figure 4. Figure 7 illustrates Method 3. The first address pair is AR1=(0,1) and AR2=(4,0). For even log2N, this varies from Method 4 by using a zig-zag pattern to step through the same sets, instead of advancing horizontally or vertically when possible.

For the segregation into three sets for odd log2N in Figure 8, set A is in the lower triangle, which is systematically covered by the first address of the Method 1 IPBR address pair generator. The self reversed set, C, forms a line with slope 2. The vertical axis data, referred to as zy, has been prefixed by the middle bit z. For the new zy vertical axis in Figure 8, set A is defined by (bit_rev(zy)>>1)<x. This is equivalent to bit_rev(y)<x.
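As a check on the Method 1 procedure described above, the following Python sketch (helper names assumed, S_in=0) runs the k and j loops over the Table IV moves and verifies that every non-self-reversed address appears in exactly one generated pair, for both an odd and an even log2N:

```python
def bit_rev(a, nbits):
    r = 0
    for _ in range(nbits):
        r = (r << 1) | (a & 1)
        a >>= 1
    return r

def bitrev_add(ar, inc, nbits):
    # ARx = ARx + IncB (carry propagates MSB -> LSB)
    mask = (1 << nbits) - 1
    return bit_rev((bit_rev(ar, nbits) + bit_rev(inc, nbits)) & mask, nbits)

def method1_pairs(log2n):
    """Method 1 address pair sequence (S_in = 0):
       Move 1: AR1 = AR1 + I1B, AR2 = AR2 + I2
       Move 2: AR1 = AR3,       AR2 = AR4
       Move 3: AR3 = AR3 + I3,  AR4 = AR4 + I4B
    with the k/j loop structure from the text.  Every yielded
    (AR1, AR2) satisfies AR2 = bit_rev(AR1)."""
    q, r = log2n // 2, log2n % 2
    i1 = 1 << (log2n - q - 1)
    i2 = 1 << q
    i3 = 1 << (log2n - q)
    i4 = 1 << (q - 1)
    ar3 = ar4 = 0
    for k in range(r + 1, (r + 1) * ((1 << q) - 1) + 1, r + 1):
        ar3 += i3                               # Move 3
        ar4 = bitrev_add(ar4, i4, log2n)
        ar1, ar2 = ar3, ar4                     # Move 2
        yield ar1, ar2
        for _ in range(1, k):
            ar1 = bitrev_add(ar1, i1, log2n)    # Move 1
            ar2 += i2
            yield ar1, ar2

for log2n in (5, 6):                            # odd and even log2N
    pairs = list(method1_pairs(log2n))
    n = 1 << log2n
    # every pair is bit-reversed, and every non-self-reversed address
    # appears exactly once across all pairs
    assert all(ar2 == bit_rev(ar1, log2n) for ar1, ar2 in pairs)
    seen = sorted(a for p in pairs for a in p)
    assert seen == [a for a in range(n) if bit_rev(a, log2n) != a]
print("Method 1 covers every non-self-reversed address exactly once")
```

For N=64 this produces 28 pairs (56 non-self-reversed addresses), with no testing or conditional branching inside the loops.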
Thus bit_rev(y)<x defines set A here. The "second way" described for segregating the three sets of addresses for odd log2N is used by Method 4, as illustrated by Figure 9. Next, the "third way" to segregate the addresses into three sets for odd log2N is illustrated in Figure 10 for Method 2 and Figure 11 for Method 5. Method 2 is a special case where Set A is defined as the union of two sets with reversed inequalities, depending on whether the (Q+1)st bit is zero or one. This definition of Set A facilitates a special technique exploited by Method 2 for continuing to use the same address advance increment scheme even when faced with Set A's zy-axis vertical boundary. The address sequence "wraps around" while continuing to use the same increment, with no special treatment required for handling Set A's boundary. This technique used by Method 2 cannot easily be extended to the even log2N case, so Method 2 only works for odd log2N. Method 3 can be extended to work for odd log2N. This is done by Method 5, which reduces to Method 3 for even log2N. Combining even and odd log2N capability in Method 5 is awkward, however. For some applications, branching to Methods 2 and 3 for odd and even log2N will be preferable to Method 5.

Processor cycles are further reduced in FFTs with an odd log2N input array with Method 2. Method 2 considers the (log2N-1)/2 LSBs and (log2N-1)/2 MSBs for odd log2N to define sets A, B, and C. Also, the ((log2N+1)/2)th middle bit of each binary array element address is considered. Method 2 defines set A as the union of the set of elements that have z=1 and LSBs < bit_rev(MSBs) with the set of elements that have z=0 and LSBs > bit_rev(MSBs). The inequality criteria for sets A and B are reversed according to whether the middle bit value is one or zero. While Method 2 reduces processor cycles over Method 1, Method 2 also has a 2^(log2N) alignment requirement not found in Method 1. To perform Method 2, three moves are implemented as defined in Table V.
The third move combines the operations of the first two moves.

TABLE V.
Move 1: AR1=AR1-I0B,   AR2=AR2-1
Move 2: AR1=AR1-1,     AR2=AR2-I0B
Move 3: AR1=AR1-1-I0B, AR2=AR2-I0B-1

Method 2 is implemented with the following steps. To implement the operations of Method 2, address registers AR1=S_in+1, AR2=S_in+I0 are initialized. Method 2 iterates from k=1 to 2^(log2N-2)-2^Q in steps of 1; performs Move 1; performs Move 2; and ends the k loop. Method 2 then performs Move 1; iterates from j=1 to (2^Q)-2 in steps of 1; performs Move 3; and then ends the j loop. The values of AR1, AR2 after initialization and all moves define the Method 2 address pair sequence.

Processor cycles may be further reduced over Method 1 for input arrays of an even log2N size by implementing Method 3. Method 3 considers the log2N/2 LSBs and MSBs to define the sets A, B, and C for input array elements. Method 3 defines input array element set A by those addresses that have address LSBs > bit_rev(address MSBs). Method 3 may or may not reduce processor cycles over Method 1, depending on the processor. Method 3 also has a 2^(log2N) alignment requirement not found in Method 1. To perform Method 3, four moves are implemented as defined in Table VI.

TABLE VI.
Move 1: AR1=AR1+1,   AR2=AR2+I0B
Move 2: AR1=AR1+I0B, AR2=AR2+1
Move 3: AR1=AR3,     AR2=AR4
Move 4: AR3=AR3+2,   AR4=AR4+I5B

Method 3 is implemented with the following steps: The sequence of address pairs generated by Method 3 is defined by the AR1, AR2 values after initialization and after all moves except Move 4.

Method 4 is similar to Method 1 in that it can be implemented for both odd and even log2N input arrays. Differences between the two include implementation for different processor capabilities and how the methods define input sets of array elements. Method 4 may operate on processors with only one address increment register, whereas Method 1 requires more than one such register.
Method 4 considers LSBs and MSBs of Q bits to define the sets A, B, and C for input array elements and defines input array element set A by those addresses that have address LSBs > bit_rev(address MSBs). However, Method 4 does not reduce the alignment requirement. To perform Method 4, three moves are implemented as defined in Table VII.

TABLE VII.
Move 1: AR1=AR1+I0B, AR2=AR2+1
Move 2: AR1=AR3,     AR2=AR4
Move 3: AR3=AR3+1,   AR4=AR4+I0B

To implement the operations of Method 4, the following steps are performed: The address pair sequence for Method 1 is defined by the AR1, AR2 values after Moves 1 and 2. Similarly, the address pair sequence for Method 4 is also defined by the AR1, AR2 values after Moves 1 and 2.

Method 5 extends Method 3 to work for odd log2N. Referring to Figure 1, processor cycles may be further reduced over Method 1 for input arrays of an odd log2N size by implementing Method 5. To perform Method 5, five moves are implemented as defined in Table VIII.

TABLE VIII.
Move 1: AR1=AR1+1,     AR2=AR2+I0B
Move 2: AR1=AR1+I0B,   AR2=AR2+1
Move 3: AR1=AR3,       AR2=AR4
Move 4: AR3=AR3+2,     AR4=AR4+I5B
Move 5: AR1=AR1+1+I0B, AR2=AR2+1+I0B

To implement the operations of Method 5, the following steps are performed: The sequence of address pairs generated by Method 5 is defined by the AR1, AR2 values after initialization and after all moves except Move 4.

All the IPBR Methods of the present invention can be modified by replacing part or all of the address pair sequence with a "topologically similar" sequence. Variations of the IPBR Methods include 1) reversing the order of the original subsequences, 2) x and y axis inversions of the original sequence, and 3) replacing an (A,B) address pair with a (B,A) address pair for arbitrary numbers of terms in sequences. By reversing the order of alternating subsequences in Methods 1, 3 and 4, Methods 1m, 3m and 4m remove the need for auxiliary address registers AR3 and AR4. Thus every sequential "move" advances from the prior address pair location without periodically resetting to stored AR3, AR4 values.
Relative to the un-modified methods, Methods 1m, 3m and 4m may reduce some cycles (depending on the processor) but will add to program memory. An advantage of Methods 1m, 3m and 4m is that they require fewer address registers to implement. Method 2m does not alter the cycle count, but is exemplary of an x and y axis inversion. Method 2m "inverts" the entire address pair sequence of Method 2.

The address generation scheme for Method 2 uses an address increment of AR0=2^(log2N-1). One can modify Method 2 first by changing the starting address pair from AR1=1 and AR2=AR0 to the same address pair after x and y axis inversion, AR1=2*(AR0-1) and AR2=AR0-1. Next, the sign of all address increments in the address generation scheme is changed. For the original Method 2, all increments (linear and bit reversed) are subtracted; for Method 2m, all increments are added. This results in a valid IPBR address generator for all odd log2N, and the N=32 address pair sequence is given by Figure 12.

Note the described modification of Method 2 generates a sequence of address pairs that is topologically similar to the original Method 2 shown previously in Figure 10. The data along both the x and y axis has been inverted. Placing an upside down graph of Figure 12 on top of Figure 10 results in a match. Method 2 is preferable to Method 2m only because of a simpler initialization of the address pair sequence. This invention is inclusive of topologically equivalent address generation schemes and all address generation schemes that vary in some simple or obvious manner from Methods 1, 2, 3, and 4. Method 1m keeps the same subsequences shown on horizontal and vertical lines in Figure 5 for Method 1, but connects these subsequences in a different way.

A similar modification could be performed on Method 4. For variety, however, Method 4m is formed by reconnecting the horizontal and vertical lines in a different manner.
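The Method 2m construction just described (invert the start pair to AR1=2*(AR0-1), AR2=AR0-1, and add every increment instead of subtracting it) can be checked numerically. The explicit step list for Method 2m is not reproduced in the text, so the sketch below assumes the loop bounds mirror Method 2's:

```python
def bit_rev(a, nbits):
    r = 0
    for _ in range(nbits):
        r = (r << 1) | (a & 1)
        a >>= 1
    return r

def bitrev_add(ar, inc, nbits):
    mask = (1 << nbits) - 1
    return bit_rev((bit_rev(ar, nbits) + bit_rev(inc, nbits)) & mask, nbits)

def method2m_pairs(log2n):
    """Sketch of Method 2m for odd log2n: the x/y-inverted Method 2.
    Start pair AR1 = 2*(AR0-1), AR2 = AR0-1 with AR0 = 2^(log2n-1),
    all increments added (Table X moves).  The loop bounds are assumed
    to mirror Method 2's loop bounds."""
    assert log2n % 2 == 1
    q = log2n // 2
    i0 = 1 << (log2n - 1)
    ar1, ar2 = 2 * (i0 - 1), i0 - 1
    yield ar1, ar2
    for _ in range((1 << (log2n - 2)) - (1 << q)):
        ar1 = bitrev_add(ar1, i0, log2n); ar2 += 1        # Move 1
        yield ar1, ar2
        ar1 += 1; ar2 = bitrev_add(ar2, i0, log2n)        # Move 2
        yield ar1, ar2
    ar1 = bitrev_add(ar1, i0, log2n); ar2 += 1            # Move 1
    yield ar1, ar2
    for _ in range((1 << q) - 2):
        ar1 = bitrev_add(ar1 + 1, i0, log2n)              # Move 3
        ar2 = bitrev_add(ar2, i0, log2n) + 1
        yield ar1, ar2

log2n = 5
pairs = list(method2m_pairs(log2n))
assert all(ar2 == bit_rev(ar1, log2n) for ar1, ar2 in pairs)
seen = sorted(a for p in pairs for a in p)
assert seen == [a for a in range(1 << log2n) if bit_rev(a, log2n) != a]
print("Method 2m pair count:", len(pairs))   # 12 for log2N = 5
```

Under these assumptions the generator yields the same twelve unordered pairs as Method 2 for N=32, consistent with the claim that Method 2m only inverts the sequence.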
Note that none of the alternatives given in Table IV are satisfied by the entire Method 4m address pair sequence given in Figure 13. However, all of the individual sub-sequences do satisfy Table IV.

Any method can be altered by interchanging the order of the first and second addresses of an address pair. Such exchanges may be favorable for reducing program code or cycles but should not be thought of as producing a different address pair generator that is not included in this invention. An example of two address pair generators that give an identical address pair sequence, and vary only in the order of the first and second address for an arbitrary number of address pairs, can be illustrated by the plots of Method 1m (Figure 13) and Method 4m (Figure 14). In Figure 13, changing the bit reversed axis from the y-axis to the x-axis results in a sequence of address pairs that is identical to that of Figure 14. The only difference is that in alternating subsequences, the choice of first and second address is exchanged. Such an exchange does not result in a new address pair sequence, and thus does not produce a new IPBR address pair generator outside the scope of the present invention.

To perform IPBR, modified Method 1m uses eight "moves" to generate a new AR1, AR2 address pair as defined in Table IX. For moves seven and eight, a new bit reversed pair of address increments is defined: I6=2^(Q-2) and I7=2^(log2N-Q+1).

TABLE IX.
Move 1: AR1+=I1B, AR2+=I2
Move 2: AR1+=I2,  AR2+=I1B
Move 3: AR1-=I1B, AR2-=I2
Move 4: AR1-=I2,  AR2-=I1B
Move 5: AR1+=I3,  AR2+=I4B
Move 6: AR1+=I4B, AR2+=I3
Move 7: AR1+=I6B, AR2+=I7
Move 8: AR1+=I7,  AR2+=I6B

Method 1m is implemented with the following steps: The values of AR1, AR2 after initialization and all moves define the Method 1m address pair sequence.

To perform Method 2m, three moves are implemented as defined in Table X. The third move combines the operations of the first two moves.
Relative to the unmodified Method 2, Method 2m is an example of x and y axis inversion.

TABLE X.
Move 1: AR1=AR1+I0B,   AR2=AR2+1
Move 2: AR1=AR1+1,     AR2=AR2+I0B
Move 3: AR1=AR1+1+I0B, AR2=AR2+I0B+1

Method 2m is implemented with the following steps: Similar to Method 3, Method 3m is implemented to reduce processor cycles for input arrays of even log2N size. Method 3m requires only two address registers to operate. To perform Method 3m, six moves are implemented as defined in Table XI. In Table XI, ARx+=Iy represents ARx=ARx+Iy.

TABLE XI.
Move 1: AR1+=1,   AR2+=I0B
Move 2: AR1+=I0B, AR2+=1
Move 3: AR1-=1,   AR2-=I0B
Move 4: AR1-=I0B, AR2-=1
Move 5: AR1+=2,   AR2+=I5B
Move 6: AR1-=I5B, AR2-=2

Method 3m is implemented with the following steps: Excluding initialization, all values of AR1, AR2 after moves define the Method 3m address pair sequence.

Method 4m illustrates a scheme different from Method 1m for reconnecting subsequences of address pairs. To perform Method 4m, five moves are implemented as defined in Table XII. For processors with only one address increment register, note that after two Move 1's the final resulting AR1, AR2 changes are equivalent to Move 5.

TABLE XII.
Move 1: AR1=AR1+1,     AR2=AR2+I0B
Move 2: AR1=AR1+I0B,   AR2=AR2+1
Move 3: AR1=AR1+1+I0B, AR2=AR2+I0B+1
Move 4: AR1=AR1-I0B,   AR1=AR1-1
Move 5: AR1=AR1+2,     AR2=AR2+I5B

Method 4m is implemented with the following steps: The values of AR1, AR2 after initialization and all moves define the Method 4m address pair sequence.

In Methods 1 and 4, for each iteration that an address pair is advanced, typically only one address is advanced with a bit reversed increment. To apply a bit reversed increment to an address register, ARx, in the absence of an alignment restriction, one approach is as follows:
1) Subtract the buffer's start address from ARx.
(After subtraction, the effective start address reference is zero, and thus the alignment restriction is satisfied);
2) Perform the bit reversed incrementation on ARx; and
3) Add the start address back to ARx.
For the above approach, ideally an address increment register can be reserved for the start address that is subtracted from and added to ARx. On the TI C54x, only one address increment register, AR0, is available for use by the IPBR Methods 1 through 4 presented herein. An alternative procedure is to create a "shadow" address register, ARy, for each address register used by the IPBR method. Keep ARx=ARy+start address, so the shadow register references a zero start address and satisfies the alignment restriction. For each iteration in which the address pair is advanced, the address that is bit reversed is incremented according to the following steps:
1) If ARy is not already up to date, force ARy=ARx-start address;
2) Perform the bit reversed incrementation on ARy; and
3) Add the start address and store in ARx, i.e., ARx=ARy+start address.
If the inner loop always bit reverse increments the same address of the address pair, then step one can be removed from the inner loop.

An alternative embodiment of the present invention generates addresses using out of place bit reversal (OOPBR) to reduce cycles for processors that do not support bit reversed address register incrementation and consequently require many cycles to generate a bit reversed address. The conventional OOPBR approach is to generate one address pair per data move. With the present invention, about half as many bit reversed address offsets are generated by using each bit reversed offset twice. First, one of this invention's IPBR methods is used to generate address pair offsets [AR1, bit_rev(AR1)] as if S_in=0. The OOPBR algorithm copies the data referenced by S_in+AR1 into S_out+bit_rev(AR1) and copies the data referenced by S_in+bit_rev(AR1) into S_out+AR1.
Finally, a second address generator is used to generate all self-reversed offsets, and for each self-reversed offset only one data transfer is made. This OOPBR method removes the start address offset from the address pair sequence generated, and consequently this OOPBR method need not impose any alignment constraints on the input or output buffer.

To implement the OOPBR Method, IPBR Method 1 is extended for OOPBR applications. Address pairs AR3 and AR4 are initialized to zero instead of S_in, because for OOPBR, relative address offsets are generated, not actual addresses. Beyond the moves for the chosen IPBR method, three additional moves are needed. These additional moves only affect one address register. To perform the OOPBR Method, six moves are implemented as defined in Table XIII.

TABLE XIII.
Move 1: AR1=AR1+I1B
Move 2: AR1=AR3;  AR3=AR3+I3
Move 3: AR1=AR1+1+I0B
Move 4: AR1=AR3;  AR3=AR3+I2
Move 5: AR2=AR2+I2
Move 6: AR2=AR4;  AR4=AR4+I4B

To implement the operations of the OOPBR Method, the following steps are performed:

For all of the above moves (that affect AR1, AR2), transfer data from address S_in+AR1 to S_out+AR2, and transfer data from S_in+AR2 to S_out+AR1. When it is costly in cycles to calculate the result of bit reversed address incrementation, this is helpful because two data transfers are made for each bit reversed address computation. Note that making the two indicated data transfers for the address pair sequence given above will not complete the OOPBR operation, because all the self-reversed addresses are omitted.

The operations of the OOPBR Method are continued with the following steps:

For all of the preceding moves for OOPBR that affect AR1, only transfer data from S_in+AR1 into S_out+AR1.

The present invention can be efficiently implemented even when data elements are represented by multiple contiguous words. For each of the methods disclosed, the initial address pair(s) and all increment registers are multiplied by 2^M when data elements are represented by 2^M contiguous words.
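A hedged Python sketch of the complete OOPBR copy, including the single-transfer pass over self-reversed offsets, is given below. The enumeration via a < b stands in for the IPBR offset generator and is an illustrative assumption, not the specification's generator:

```python
def rev(x, nbits):
    # Reverse the nbits-bit binary representation of x.
    return int(format(x, f'0{nbits}b')[::-1], 2)

def oopbr_copy(src, nbits):
    # Out-of-place bit reversal: each non-self-reversed offset pair
    # (a, rev(a)) drives two data transfers, so only about half as many
    # bit reversed offsets need to be generated; self-reversed offsets
    # (a == rev(a)) get a single transfer each.
    N = 1 << nbits
    out = [None] * N
    for a in range(N):
        b = rev(a, nbits)
        if a < b:            # address pair used for two transfers
            out[b] = src[a]
            out[a] = src[b]
        elif a == b:         # self-reversed offset: one transfer
            out[a] = src[a]
    return out
```

Because only relative offsets are used, no alignment constraint falls on the source or destination buffer, matching the property noted above.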
For a linear increment of one, however, scaling up by 2^M is normally not needed, as demonstrated below. Also, methods for dealing with the lowest M bits of the address that needs a bit reversed increment are described below. For some FFTs, each sequential data element may require two or four words of memory. For example, double precision complex FFT data can be in the format R_MSW(1), R_LSW(1), I_MSW(1), I_LSW(1), R_MSW(2), R_LSW(2), I_MSW(2), I_LSW(2), ..., where R_MSW = signed real most significant word; R_LSW = unsigned real least significant word; I_MSW = signed imaginary most significant word; and I_LSW = unsigned imaginary least significant word.

Assume for a particular IPBR swap and move, the goal is to advance AR1 linearly to the next element and advance AR2 bit reversed. Table XIV illustrates IPBR processing of single and four word elements.

TABLE XIV.
Single precision real, M=0 (1 word of contiguous memory per element in array):
  AR0 = 2^(log2N-1); address increment
  Start of Loop
    Swap (AR1, AR2) data,
    AR1=AR1+1,
    AR2=bitrev_add(AR2, AR0)
  End of Loop

Double precision complex, M=2 (4 words of contiguous memory per array element):
  AR0 = 2 + 4*2^(log2N-1)
  Start of Loop
    Swap (AR1, AR2) data, AR1=AR1+1, AR2=AR2+1
    Swap (AR1, AR2) data, AR1=AR1+1, AR2=AR2+1
    Swap (AR1, AR2) data, AR1=AR1+1, AR2=AR2+1
    Swap (AR1, AR2) data, AR1=AR1+1, AR2=bitrev_add(AR2, AR0)
  End of Loop

For the exemplary double precision complex FFT, Table XIV adds two to the bit-reversed increment, AR0, relative to the single precision real case. This procedure avoids using another instruction to subtract three from AR2. Adding two to the bit-reversed increment for four words of contiguous memory clears an offset of three, since in bit reversed addition 3+2B=0. An alternative approach is to treat alternating sequential swaps differently. The first data swap lets the two least significant bits advance to three by advancing from the R_MSW to the I_MSW.
The second swap starts by swapping I_MSW and advances backwards to swap the R_MSW data last.

To estimate the number of cycles required to implement the methods of the present invention on a TI C54x processor, the number of address pairs generated is multiplied by the cycles required to generate a new address pair and perform (or decline to perform) an exchange of the data referenced by the address pair. The cycle estimates presented in Table XV ignore the penalty for a limited number of outer loops when loops are nested. These results demonstrate that the new IPBR methods are competitive with OOPBR. In most cases, the IPBR methods reduce the number of cycles by more than 80% relative to the conventional method. Cycles per address pair are reduced from 14 or 12 cycles down to 4 or 3 cycles, and the number of address pairs is reduced by half. The modified methods vary from the corresponding unmodified methods only in that their preferred implementations use just two address registers.

TABLE XV.
(Columns: C54x cycles per address pair; address pairs generated; loop iterations; nested loops; address registers used; address increment registers used.)
OOPBR Conventional Method(1):  3;        2^(log2N-1);                 2^log2N;          No;   2;  1
IPBR Conventional Method(2):   14 or 12; 2^(log2N);                   2^log2N;          No;   2;  1
Method 1 (even log2N)(3):      4;        2^(log2N-1)-2^(log2N/2);     2^(log2N/2-1);    Yes;  4;  2
Method 1 (odd log2N):          3;        2^(log2N-1)-2^(log2N/2+1/2); 2^((log2N-1)/2);  Yes;  4;  1
Method 2 (odd log2N):          3;        2^(log2N-1)-2^(log2N/2+1/2); 2^log2N;          No;   2;  1
Method 3 (even log2N):         3;        2^(log2N-1)-2^(log2N/2);     2^log2N;          Yes;  4;  1
Method 4 & 5 (even log2N):     3;        2^(log2N-1)-2^(log2N/2+1/2); 2^log2N;          Yes;  4;  1
Method 4 & 5 (odd log2N):      3;        2^(log2N-1)-2^(log2N/2);     2^log2N;          Yes;  4;  1

Referring to the Table XV footnotes:

1) For relocatable data memory. A C54x OOPBR optimized loop processes two bit reversed address pairs per iteration for single word data elements, i.e., DLD *AR1+,Areg; STH Areg,*AR2+0B; STL Areg,*AR2+0B. For applications designating a fixed, non-relocatable output array location in data memory, a faster OOPBR C54x implementation exists that cannot be extended to the case when each data element is represented by multiple contiguous words.
This would invoke 2^(log2N) iterations of a single-instruction loop using the instruction MVDK *AR1+0B, #OUT_ADDR. This C54x instruction requires one cycle in single-instruction loops.

2) For each address pair generated, 14 cycles are required if the swap is executed, 12 if the swap is omitted.

3) A one-cycle penalty in the inner loop is due to the TI C54x having only one address increment register. Only 3 cycles are required on TI processors with more than one address increment register. To swap data referenced by address registers ARx and ARy, while advancing ARx by Ix and ARy by Iy bit reversed, use the three TI DSP instructions LD AR2, Areg; MVDD AR2+x%, *AR1; STL Areg, *AR1+yB. However, for the C54x only one address increment register is available, so x=y=0. For even log2N the required address increments are a factor of 2 apart, so an extra MAR AR2+0 instruction is required, as well as separate hard-coded loops to process the even and odd log2N cases. If the even log2N one-cycle penalty is unacceptable, a different method can be used for the even log2N case. In that event, Method 1 is only being used for odd log2N, and might be replaced by the more efficient Method 2.

The IPBR cycle counts are competitive with OOPBR in all cases. TI C54x cycle counts for the entire IPBR implementation are given in Table XVI for array sizes of 1024 and 2048. For the TI C54x IPBR implementations reported in this table, outer loop counters were hard coded for one value of log2N. Programming general formulas (executed once) for any log2N value would add a small number of cycles to these results. For the TI C54x, the modified methods did not reduce cycles, as the increase in nested loops and branching outweighed removing the (AR3, AR4) address pair manipulations. The Method 3 cycles given are only ten percent more than the minimum possible for OOPBR. Method 2 is more efficient than OOPBR.
Using the methods of the present invention, the results illustrate that a significant reduction in cycles relative to conventional IPBR is achieved.

TABLE XVI.
Method to place array in bit reversed order for single word data elements (M=0): C54x cycles for log2N=10, array size N=1024; C54x cycles for log2N=11, array size N=2048.
OOPBR Conventional Method for relocatable data memory (cycles of inner loop only given here, using 3*2^(log2N-1)):  1,536+overhead;  3,072+overhead
IPBR Conventional Method:                  11,242;  22,474
IPBR Method 1:                              2,367;   3,390
IPBR Method 1, no alignment constraint:     4,452;   7,458
IPBR Method 1m:                             2,520;   3,820
IPBR Method 2 & 2m (odd log2N only):        -;       3,052
IPBR Method 3 (even log2N only):            1,694;   -
IPBR Method 4:                              1,840;   3,674

Because many varying and different embodiments may be made within the scope of the inventive concept herein taught, and because many modifications may be made in the embodiments herein detailed in accordance with the descriptive requirements of the law, it is to be understood that the details herein are to be interpreted as illustrative and not in a limiting sense.
In particular illustrative embodiments, circuit devices and methods of controlling a voltage swing are disclosed. The method includes receiving a signal at an input of a digital circuit device including a capacitive node. The method also includes selectively activating a voltage level adjustment element to regulate an electrical discharge path from the capacitive node to an electrical ground to prevent complete discharge of the capacitive node. In a particular illustrative embodiment, the received signal may be a clock signal.
WHAT IS CLAIMED IS:

1. A method of controlling a voltage swing, the method comprising: receiving a clock signal at an input of a digital circuit device including a capacitive node; and selectively activating a voltage level adjustment element to throttle an electrical discharge path from the capacitive node to an electrical ground to prevent complete discharge of the capacitive node.

2. The method of claim 1, wherein the voltage level adjustment element increases a logic low voltage level at the capacitive node to a first voltage level that is greater than a ground voltage level such that the capacitive node discharges to the first voltage level instead of to the ground voltage level.

3. The method of claim 2, further comprising adjusting the logic low voltage level based on a received signal.

4. The method of claim 2, further comprising applying a control signal to a voltage level control circuit coupled to the voltage level adjustment element to incrementally adjust the voltage level.

5. The method of claim 2, further comprising: receiving a first control signal at a voltage level control circuit coupled to the voltage level adjustment element; and increasing the voltage level to a second voltage level that is greater than the voltage level in response to the first control signal.

6. The method of claim 5, further comprising: receiving at least one second control signal at the voltage level control circuit; and increasing the voltage level to a third voltage level that is greater than the second voltage level.

7. The method of claim 2, wherein the digital circuit device includes a first voltage supply and an electrical ground and wherein the voltage level adjustment element increases the voltage level without providing a second voltage supply.

8. The method of claim 1, wherein the capacitive node comprises a terminal of a capacitor responsive to a logic circuit coupled to the input.

9.
The method of claim 1, further comprising: selectively asserting a power mode control enable signal to a control input of the voltage level adjustment circuit to activate the voltage level adjustment circuit in a first operating mode; and selectively deasserting the power mode control enable signal to bypass the voltage level adjustment circuit in a second operating mode.

10. The method of claim 1, further comprising decreasing a logic high portion of the signal at the capacitive node to a high voltage level that is less than a voltage level of a high portion of the clock signal.

11. A circuit device comprising: an input to receive a digital logic value; a logic device responsive to the input; a capacitive node coupled to the logic device; and a voltage level adjustment element coupled to the capacitive node to increase a logic low voltage level to a voltage level above a logic low level of the input to reduce a voltage swing associated with the capacitive node.

12. The circuit device of claim 11, wherein the digital logic value comprises a clock signal and wherein the capacitive node is not completely discharged during a logic low portion of the clock signal.

13. The circuit device of claim 11, further comprising a programmable voltage level control circuit including one or more inputs to receive one or more control inputs, the programmable voltage level control circuit to control the voltage level adjustment element to incrementally increase the voltage level in response to receiving the one or more control inputs.

14. The circuit device of claim 11, wherein the voltage level adjustment element comprises a first transistor and a second transistor coupled in parallel between the capacitive node and an electrical ground, the first transistor including a first control terminal responsive to a power mode control enable input to selectively activate the voltage level adjustment element.

15.
The circuit device of claim 14, wherein the second transistor comprises a second control terminal coupled to the capacitive node to regulate a discharge path through the second transistor based on a voltage level at the capacitive node.

16. The circuit device of claim 14, wherein the second transistor comprises a second control terminal coupled to a programmable voltage level control circuit.

17. The circuit device of claim 16, wherein the programmable voltage level control circuit comprises: a p-channel transistor including a first terminal coupled to a voltage source, a second terminal coupled to the input, and a third terminal coupled to the second control terminal; and an n-channel transistor including a fourth terminal coupled to the third terminal, a fifth terminal coupled to the input, and a sixth terminal coupled to the capacitive node.

18. The circuit device of claim 17, wherein the programmable voltage level control circuit further comprises one or more pairs of n-channel transistors, each pair of n-channel transistors comprising: a first n-channel transistor including a seventh terminal coupled to the second control terminal, an eighth terminal coupled to the input, and a ninth terminal; and a second n-channel transistor including a tenth terminal coupled to the ninth terminal, an eleventh terminal coupled to a control input, and a twelfth terminal coupled to the capacitive node.

19. A circuit device comprising: an input to a circuit element; a capacitive node coupled to the circuit element and responsive to the input; and a voltage level adjustment element coupled to the capacitive node and adapted to provide an electrical discharge path to an electrical ground for the capacitive node, the voltage level adjustment element to throttle the electrical discharge path to prevent complete discharge of the capacitive node when a signal at the input is at a logic low voltage level.

20.
The circuit device of claim 19, wherein the circuit element comprises a logic gate.

21. The circuit device of claim 19, wherein the input is a digital signal that is responsive to a clock signal.

22. The circuit device of claim 19, further comprising a voltage level control circuit including at least one control input to receive at least one control enable input signal, the voltage level control circuit coupled to the voltage level adjustment element to incrementally increase a discharge voltage level for the capacitive node relative to a ground voltage level based on the at least one control input.

23. The circuit device of claim 22, wherein the voltage level control circuit includes one or more second control inputs to further adjust the voltage level.

24. The circuit device of claim 19, further comprising a power mode enable input coupled to the voltage level adjustment element to selectively activate the voltage level adjustment element.

25. A circuit device comprising: means for receiving a clock signal at an input of a digital circuit device including a capacitive node; and means for selectively activating a voltage level adjustment element to throttle an electrical discharge path from the capacitive node to an electrical ground to prevent complete discharge of the capacitive node.

26. The circuit device of claim 25, wherein the voltage level adjustment element reduces a voltage swing of a signal at the capacitive node, such that the capacitive node discharges to a non-ground voltage level instead of to a ground voltage level.

27. The circuit device of claim 26, further comprising: means for receiving a first control signal at a voltage level control circuit coupled to the voltage level adjustment element; and means for increasing the non-ground voltage level to a second voltage level that is greater than the non-ground voltage level.

28.
The circuit device of claim 25, further comprising: means for asserting a power mode control enable signal to a control input of the voltage level adjustment element to activate the voltage level adjustment circuit in a first operating mode; and means for deasserting the power mode control enable signal to bypass the voltage level adjustment circuit in a second operating mode.

29. The circuit device of claim 25, further comprising means for adjusting the voltage level of a logic low portion of the signal based on a received instruction.

30. The circuit device of claim 25, further comprising means for applying a control signal to a voltage level control circuit coupled to the voltage level adjustment element to incrementally adjust the non-ground voltage level.
CIRCUIT DEVICE AND METHOD OF CONTROLLING A VOLTAGE SWING

Claim of Priority under 35 U.S.C. § 119

The present Application for Patent claims priority to Provisional Application No. 60/896,090 entitled "Circuit Producing a Signal Having a Reduced Voltage Swing" filed March 21, 2007, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.

I. Field

The present disclosure is generally related to a circuit device and method of controlling a voltage swing.

II. Description of Related Art

Advances in technology have resulted in smaller and more powerful personal computing devices. For example, there currently exist a variety of portable personal computing devices, including wireless computing devices, such as portable wireless telephones, personal digital assistants (PDAs), and paging devices that are small, lightweight, and easily carried by users. More specifically, portable wireless telephones, such as cellular telephones and IP telephones, can communicate voice and data packets over wireless networks. Further, many such wireless telephones include other types of devices that are incorporated therein. For example, a wireless telephone can also include a digital still camera, a digital video camera, a digital recorder, and an audio file player. Also, such wireless telephones can process executable instructions, including software applications, such as a web browser application, that can be used to access the Internet. As such, these wireless telephones can include significant computing capabilities.

Generally, as processing power of integrated circuits increases, power consumption can also increase. For mobile electronics, such as wireless telephones, PDAs, and other portable electronic devices, power consumption considerations increase component and design costs and may impact speed and performance.
[0004] Conventionally, circuit designers have attempted to reduce power consumption by reducing voltage swing, in part, because significant power may be consumed by switching capacitances within a particular circuit device. However, such attempts to reduce power consumption may impact at least one of the circuit speed, the circuit area, and the wiring routing complexity. In some instances, multiple power supplies have been introduced to reduce voltage swing, increasing the cost and complexity of the integrated circuit. Hence, there is a need for an improved circuit device and method of controlling a voltage swing.

III. Summary

In a particular illustrative embodiment, a method of controlling a voltage swing is disclosed that includes receiving a clock signal at an input of a digital circuit device including a capacitive node. The method further includes selectively activating a voltage level adjustment element to regulate an electrical discharge path from the capacitive node to an electrical ground to prevent complete discharge of the capacitive node.

In another particular illustrative embodiment, a circuit device is disclosed that includes an input to receive a digital logic value, a logic device responsive to the input, and a capacitive node coupled to the logic device. The circuit device further includes a voltage level adjustment element coupled to the capacitive node and adapted to increase a logic low voltage level to a voltage level above a logic low level of the input.

In still another particular illustrative embodiment, a circuit device is disclosed that includes an input to a circuit element and a capacitive node that is coupled to the circuit element and that is responsive to the input. The circuit device further includes a voltage level adjustment element that is coupled to the capacitive node and is adapted to provide an electrical discharge path to an electrical ground for the capacitive node.
The voltage level adjustment element regulates the electrical discharge path to prevent complete discharge of the capacitive node when a signal at the input is at a logic low voltage level.

In yet another particular illustrative embodiment, a circuit device includes means for receiving a clock signal at an input of a digital circuit device including a capacitive node. The circuit device also includes means for selectively activating a voltage level adjustment element to regulate an electrical discharge path from the capacitive node to an electrical ground to prevent complete discharge of the capacitive node.

One particular advantage provided by embodiments of a voltage swing adjustment circuit is that overall power consumption may be reduced without impacting speed by reducing a voltage swing of a clock signal or of other signals, thereby reducing power consumption due to switched capacitances.

Another particular advantage provided by embodiments of the voltage swing adjustment circuit is that the circuit can be used to throttle a discharge path of a circuit to stop a voltage discharge at a certain level. In particular embodiments, the discharge level may be programmable.

Still another particular advantage is that the active power consumption of a device may be reduced by using the voltage swing adjustment circuit without introducing additional power supplies. In a particular illustrative embodiment, the voltage swing adjustment circuit may reduce power consumed by a device by as much as thirty-three percent (33%).

Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and the Claims.

IV. Brief Description of the Drawings

FIG. 1 is a block diagram of a particular illustrative embodiment of a system to control a voltage swing;

FIG.
2 is a circuit diagram of a second particular illustrative embodiment of a system to control a voltage swing;

FIG. 3 is a block diagram of a third particular illustrative embodiment of a system to control a voltage swing;

FIG. 4 is a circuit diagram of a fourth particular illustrative embodiment of a system to control a voltage swing;

FIGS. 5A and 5B are graphical representations of clock signals and adjusted clock signals having a reduced voltage swing implemented using the systems of FIGS. 1-4;

FIG. 6 is a block diagram of a fifth particular illustrative embodiment of a system to control a voltage swing;

FIG. 7 is a block diagram of a sixth particular illustrative embodiment of a system to control a voltage swing;

FIGS. 8A and 8B are graphical representations of clock signals and adjusted clock signals having a reduced voltage swing implemented using the systems of FIGS. 6 and 7;

FIG. 9 is a flow diagram of a particular illustrative embodiment of a method of controlling a voltage swing; and

FIG. 10 is a block diagram of a wireless communication device that includes a circuit device and a method of controlling a voltage swing, such as the circuit devices and methods shown in FIGS. 1-4, 6, 7 and 9.

V. Detailed Description

FIG. 1 is a block diagram of a particular illustrative embodiment of a system 100 to control a voltage swing. The system 100 includes a digital circuit device 102 that includes an input 104, which may be responsive to a signal, such as a clock signal. The digital circuit device 102 includes a logic circuit device 106 that is coupled to the input 104 and to a line 108. The digital circuit device 102 includes a capacitive node 110 that is coupled to the line 108 and to a voltage level adjustment circuit 112.
The voltage level adjustment circuit 112 is coupled to the line 108, to the capacitive node 110, and to an electrical ground 114.

In a particular illustrative embodiment, a clock input may be received at the input 104 and may be provided to the line 108 via the logic circuit device 106. The voltage level adjustment circuit 112 is adapted to regulate a discharge path from the capacitive node 110 via the line 108 and to the electrical ground 114 to prevent the capacitive node 110 from discharging to a zero voltage level. In a particular illustrative embodiment, the term "regulate" as used herein refers to controlling, throttling or otherwise regulating current flow via the discharge path. In a particular illustrative embodiment, a method of regulation may reduce a rate of discharge of a capacitor or capacitive node. In another particular illustrative embodiment, the term "regulate" may refer to altering a low voltage level to prevent discharge of the capacitive node 110 to a ground voltage level. In another particular illustrative embodiment, the term "regulate" may refer to clamping a voltage level of a signal to a voltage range that is less than a voltage level of the voltage source and greater than a ground voltage level (i.e., a non-ground voltage level). By limiting the discharge of the capacitive node 110 to a non-ground voltage level (i.e., a voltage level that is greater than zero volts), the capacitive node 110 uses less power to recharge to a logic high voltage level. In addition, a voltage level of the line 108 may vary within a reduced voltage range. The line 108 may be coupled to another circuit to provide a clock signal having a reduced voltage swing or another signal to the circuit device.
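The benefit of limiting the discharge can be sketched numerically. The snippet below is an illustration only: the capacitance, supply, and threshold values are assumed, not taken from the disclosure, and it simply applies the standard dynamic-energy relation E = C(Total) · VDD · V(swing) (Equation 1 below):

```python
def dissipated_energy(c_total, vdd, v_swing):
    # Dynamic energy of a switched net: E = C_total * V_DD * V_swing.
    return c_total * vdd * v_swing

# Assumed illustrative values: 1 pF switched capacitance, 1.2 V supply,
# and a discharge clamped one threshold voltage (VT = 0.4 V) above ground.
c_total, vdd, vt = 1e-12, 1.2, 0.4
full_swing = dissipated_energy(c_total, vdd, vdd)          # discharges to ground
reduced_swing = dissipated_energy(c_total, vdd, vdd - vt)  # discharges only to VT
savings = 1 - reduced_swing / full_swing                   # equals vt / vdd
```

With VT equal to one third of VDD, the fractional saving is one third, consistent with the disclosure's figure of as much as thirty-three percent.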
Within a larger circuit, the reduced voltage swing may result in a reduced overall power consumption, which may extend a life of a battery, may allow for reallocation of power resources to other processes, or any combination thereof.

In a particular illustrative embodiment, the dissipated energy consumed by a given net or chip can be estimated using the following equation:

E(dissp) = C(Total) · VDD · V(swing) (Equation 1)

The dissipated energy (E(dissp)) represents the dynamic energy consumed by the given net or chip, the total capacitance (C(Total)) represents a capacitance that is charged or discharged when switching between logic zero (0) and logic one (1), VDD represents a pin voltage that supplies power for the circuit, and V(swing) represents a difference between the logic one (high) and logic zero (low) values. In general, the energy dissipated (E(dissp)) by the given net or chip is proportional to the voltage swing (V(swing)). Accordingly, by utilizing the voltage level adjustment circuit 112 to throttle the discharge of the capacitive node 110 when the clock signal is at a logic low level, the voltage swing of the digital circuit device 102 is reduced. Thus, the energy dissipated by the digital circuit device 102 is also reduced.

FIG. 2 is a circuit diagram of a second particular illustrative embodiment of a system 200 to control a voltage swing. The system 200 includes a logic circuit element, such as a logic NAND gate 202, that has a first input 204 responsive to a signal source, such as a clock, to receive an input signal. The logic NAND gate 202 also includes a second input that is coupled to an electrical ground 206. The NAND gate 202 also has an output 207. The system also includes a p-channel transistor 208 and an n-channel transistor 210 arranged to form an inverter circuit.
The p-channel transistor 208 includes a first terminal coupled to a power supply terminal (VDD), a control terminal coupled to the output 207, and a second terminal coupled to a capacitive node 220. The n-channel transistor 210 includes a first terminal coupled to the capacitive node 220, a control terminal coupled to the output 207, and a second terminal coupled to a node 211. A voltage level adjustment circuit 212 is coupled between the node 211 and the electrical ground 206.

The voltage level adjustment circuit 212 includes a pair of n-channel transistors 216 and 218 arranged in parallel. The n-channel transistor 216 includes a first terminal coupled to the node 211, a control terminal coupled to a power mode control bypass input 214, and a second terminal coupled to the electrical ground 206. The n-channel transistor 218 includes a first terminal coupled to the node 211, a control terminal coupled to the capacitive node 220, and a third terminal coupled to the electrical ground 206. The system 200 may include a capacitor 222 that is coupled between the capacitive node 220 and the electrical ground 206. In an alternative embodiment, the capacitor 222 may represent line capacitances of wire traces and switching capacitances associated with various circuit devices, such as the transistor 224. The transistor 224 may include a first terminal coupled to a circuit element 226, a control terminal coupled to the capacitive node 220, and a third terminal coupled to the electrical ground 206. In a particular illustrative embodiment, the circuit element 226 may be a receiver that is adapted to receive a data input and to provide an output.

In a particular illustrative embodiment, a clock input signal is received at the input 204. The clock input signal is inverted by the NAND gate 202 and provided as an inverted clock signal at the output 207. When the clock input signal at the input 204 is at a logic low level, the value at the output 207 is at a logic high level.
The p-channel transistor 208 is turned off, and the n-channel transistor 210 is activated to pull down a voltage level at the node 220. When the clock input signal at the input 204 is at a logic high level, the value at the output 207 is at a logic low level. The n-channel transistor 210 is turned off and the p-channel transistor 208 is active. In this instance, the p-channel transistor 208 pulls up a voltage level at the node 220 to a logic high level.

In a particular illustrative embodiment, when the clock input signal at the input 204 is at a logic high level, the voltage level at the node 220 is also at a logic high level and the capacitor 222 is charged. When the clock input signal at the input 204 transitions to a logic low level, the voltage level at the node 220 also transitions. The capacitor 222 discharges via a discharge path 228, which includes the n-channel transistor 210, the voltage level adjustment circuit 212, and the electrical ground 206. In a particular illustrative embodiment, a power mode control signal may be applied to the power mode control enable input 214 to activate the transistor 216, providing a bypass path for current flow from the node 211 to the electrical ground 206. When the power mode control signal is not applied to activate the transistor 216, the transistor 218 may be activated and controlled based on a voltage level at the node 220. When the voltage level at the node 220 switches from the logic high voltage level to a logic low voltage level, the n-channel transistor 210 turns on (since a voltage level at the node 207 is at a logic high voltage level) and the capacitor 222 discharges via the discharge path 228.

In a particular illustrative embodiment, the discharging voltage from the capacitor 222 initially activates the transistor 218 to couple the node 211 to the electrical ground 206.
As the capacitor 222 discharges, the voltage level of the node 220 decreases and current flow through the transistor 218 is reduced because a voltage level at the control terminal of the transistor 218 is reduced, until the voltage level at the control terminal of the transistor 218 is approximately equal to a threshold voltage of the transistor 218. At this point, the transistor 218 turns off and the voltage level at the node 220 is held at a voltage level that is greater than a voltage level of the electrical ground 206. In this manner, the capacitor 222 is prevented from completely discharging to a ground voltage level. Thus, the voltage swing of the capacitive node 220 can be reduced by increasing a logic low or discharge voltage level.

In a particular illustrative embodiment, a clock signal is received at the input 204 and is provided to the capacitive node 220. The voltage level adjustment circuit 212 throttles a discharge path of the capacitive node 220 to provide a reduced capacitive discharge from the capacitor 222, providing a second clock signal (CLK 2) at the node 220. The second clock signal (CLK 2) at the node 220 is a reduced version of the clock signal at the input 204. In a particular illustrative embodiment, the term "reduced clock signal" refers to a second clock signal that has a smaller voltage swing than a clock signal at the input 204. The second or reduced clock signal (CLK 2) at the node 220 may be provided to the circuit element 226. By providing a reduced version or second clock signal (CLK 2) to the circuit element 226, power consumption by the circuit element 226 may be reduced.

In a particular illustrative embodiment, the swing of the clock input signal may range from a first voltage level (VDD) to a ground voltage level, for example. In contrast, the reduced clock signal (CLK 2) may range from the first voltage level (VDD) to a second voltage level that is greater than the ground voltage level.
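The self-limiting discharge described above may be illustrated with a small behavioral model. This is a sketch under illustrative assumptions only (the capacitance, threshold voltage, linear current model, and time step are invented for illustration), not a simulation of the disclosed circuit:

```python
# Behavioral sketch: a capacitor discharging through a transistor whose
# gate is tied to the capacitive node (in the spirit of transistor 218
# and capacitor 222). All numeric values are illustrative assumptions.
C = 1e-12    # illustrative node capacitance, farads
VT = 0.4     # illustrative transistor threshold voltage, volts
GM = 1e-4    # illustrative linear transconductance, amps per volt
DT = 1e-12   # simulation time step, seconds

def discharge(v0: float, steps: int = 200_000) -> float:
    """Discharge the node from v0; the pulldown current tapers to zero
    as the node voltage approaches VT, so the node never reaches ground."""
    v = v0
    for _ in range(steps):
        i = GM * max(0.0, v - VT)  # transistor cuts off below ~VT
        v -= i * DT / C
    return v

v_low = discharge(1.0)  # settles near VT rather than at 0 V
```

In this model the logic low level of the second clock settles near VT rather than ground, so the swing shrinks from VDD to roughly VDD minus VT, consistent with the behavior described for the node 220.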
In a particular embodiment, the second voltage level may be approximately a threshold voltage level (VT) above the ground voltage level, where the threshold voltage level is determined by the device characteristics of the transistor 218.

FIG. 3 is a block diagram of a third particular illustrative embodiment of a system 300 to control a voltage swing. The system 300 includes a digital circuit device 302 that includes an input 304, which may be responsive to a signal, such as a clock signal. The digital circuit device 302 includes a logic circuit device 312 that is coupled to the input 304 and to a line 314. The digital circuit device 302 includes a capacitive node 316 that is coupled to the line 314 and to a voltage level adjustment circuit 320. The voltage level adjustment circuit 320 is coupled to the line 314, to the capacitive node 316, and to an electrical ground 322. The digital circuit device 302 also includes a programmable voltage level control circuit 318 and one or more control inputs 306 to receive one or more control input signals. The programmable voltage level control circuit 318 is coupled to the voltage level adjustment circuit 320.

In a particular illustrative embodiment, a clock input may be received at the input 304 and may be provided to the line 314 via the logic circuit device 312. The voltage level adjustment circuit 320 is adapted to regulate a discharge path from the capacitive node 316 via the line 314 and to the electrical ground 322 to prevent the capacitive node 316 from discharging to a zero voltage level when the clock signal is at a logic low voltage level. In a particular illustrative embodiment, one or more control input signals may be applied to the one or more control inputs 306 to control the programmable voltage level control circuit 318 to adjust a voltage level of the voltage level adjustment circuit 320.
The programmable voltage level control circuit 318 may be adapted to regulate (i.e., throttle, restrict or otherwise control) current flow via the discharge path from the capacitive node 316 to the electrical ground 322. In a particular illustrative embodiment, a first control signal may be received via the one or more control inputs 306 to control the programmable voltage level control circuit 318 to increase a baseline voltage level of the capacitive discharge path to a first voltage level by controlling the voltage level adjustment circuit 320, such that the capacitive node 316 discharges to the first voltage level instead of to a ground voltage level. In another particular illustrative embodiment, a second control signal may be received via the one or more control inputs 306 to control the programmable voltage level control circuit 318 to adjust the voltage level adjustment circuit 320 to increase the baseline voltage level of the capacitive discharge path to a second voltage level, such that the capacitive node 316 discharges to the second voltage level instead of to a ground voltage level. In another particular illustrative embodiment, the programmable voltage level control circuit 318 may aggregate one or more control signals received via the one or more control inputs 306. The programmable voltage level control circuit 318 may control the voltage level adjustment circuit 320 to throttle the discharge path to allow the capacitive node 316 to discharge to a desired voltage level.

In a particular illustrative embodiment, by limiting the discharge of the capacitive node 316 to a non-ground voltage level (i.e., a voltage level that is greater than zero volts), the capacitive node 316 retains a portion of its charge and consequently uses less power to recharge to a logic high voltage level.
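The programmable selection of a discharge floor can be sketched as a lookup from control inputs to a clamp level. The tier voltages below are invented for illustration only; in practice the levels would depend on device sizing:

```python
# Illustrative sketch: control input codes selecting among discharge
# "floor" voltage levels, in the spirit of the programmable voltage
# level control circuit 318. The tier voltages are invented assumptions.
TIERS = {           # control code -> discharge floor, volts
    (0, 0): 0.45,
    (1, 0): 0.30,
    (0, 1): 0.20,
    (1, 1): 0.10,
}

def swing(vdd: float, code: tuple) -> float:
    """Voltage swing of the node when it discharges only to the
    selected floor instead of to ground."""
    return vdd - TIERS[code]

full_swing = 1.0 - 0.0              # discharging all the way to ground
reduced_swing = swing(1.0, (1, 1))  # discharging only to the 0.10 V floor
```

Each control code yields a different floor, so the node discharges to a selectable voltage level rather than to ground, as described for the first and second control signals above.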
Within a larger circuit, the reduced voltage swing may result in a reduced overall power consumption, which may extend a life of a battery, may allow for reallocation of power resources to other processes, or any combination thereof.

FIG. 4 is a circuit diagram of a fourth particular illustrative embodiment of a system 400 to control a voltage swing. The system 400 includes a logic circuit element, such as a logic NAND gate 402, that includes a first input 404 to receive an input signal, such as a clock signal. The logic circuit element 402 also includes a second input that is coupled to an electrical ground 406. Since the second input is held at a logic low voltage level (i.e., a ground voltage level), the output of the logic NAND gate 402 at a node 407 represents an inverted version of the input signal at the first input 404.

The system 400 also includes a p-channel transistor 408 and an n-channel transistor 410 arranged to form an inverter circuit. The p-channel transistor 408 includes a first terminal coupled to a voltage supply (VDD), a control terminal coupled to the node 407, and a second terminal coupled to a capacitive node 420. The n-channel transistor 410 includes a first terminal coupled to the capacitive node 420, a control terminal coupled to the node 407, and a second terminal coupled to a node 411. The system 400 further includes a voltage level adjustment circuit 412 that is coupled between the node 411 and the electrical ground 406. In a particular illustrative embodiment, the voltage level adjustment circuit 412 may be an embodiment of the voltage level adjustment circuit 320 illustrated in FIG. 3. The voltage level adjustment circuit 412 includes a transistor 416 and a transistor 418 arranged in parallel between the node 411 and the electrical ground 406.
The transistor 416 includes a first terminal coupled to the node 411, a control terminal coupled to a power mode bypass input 414, and a second terminal coupled to the electrical ground 406. When a power mode bypass signal is applied to the power mode bypass input 414, the voltage level adjustment circuit 412 provides a discharge path from the node 411 to the electrical ground 406. The transistor 418 includes a first terminal coupled to the node 411, a control terminal coupled to a node 450 that is responsive to a programmable voltage level control circuit 430, and a second terminal coupled to the electrical ground 406. In a particular illustrative embodiment, the programmable voltage level control circuit 430 may be an embodiment of the programmable voltage level control circuit 318 illustrated in FIG. 3.

The programmable voltage level control circuit 430 includes multiple transistor pairs. The programmable voltage level control circuit 430 includes a p-channel transistor 438 and n-channel transistors 440, 442, 444, 446, and 448. The p-channel transistor 438 and the n-channel transistor 440 represent a transistor pair. Additionally, the n-channel transistors 442 and 444 and the n-channel transistors 446 and 448 represent transistor pairs. The p-channel transistor 438 includes a first terminal coupled to the power supply (VDD), a control terminal coupled to the node 407 by the line 432, and a second terminal coupled to the node 450. The n-channel transistor 440 includes a first terminal coupled to the node 450, a control terminal coupled to the node 407 via the line 432, and a second terminal coupled to the capacitive node 420. The n-channel transistor 442 includes a third terminal coupled to the node 450, a control terminal coupled to the node 407 via the line 432, and a fifth terminal.
The n-channel transistor 444 includes a sixth terminal coupled to the fifth terminal, a control terminal coupled to a first control enable input 434 to receive a control enable (0) signal, and a seventh terminal coupled to the capacitive node 420. The n-channel transistor 446 includes an eighth terminal coupled to the node 450, a control terminal coupled to the node 407 via the line 432, and a ninth terminal. The n-channel transistor 448 includes a tenth terminal coupled to the ninth terminal, a control terminal coupled to a second control enable input 436 to receive a second control enable (1) signal, and an eleventh terminal coupled to the capacitive node 420. It should be understood that the programmable voltage level control circuit 430 may include additional transistors, such as the transistors 442, 444, 446 and 448, and additional control inputs, such as the control inputs 434 and 436, to provide additional control and additional voltage levels.

The system 400 further includes a capacitor 422 coupled between the capacitive node 420 and the electrical ground 406. In a particular illustrative embodiment, instead of being a discrete circuit component, the capacitor 422 may represent line capacitances and gate capacitances of the circuit device. The system 400 also includes a transistor 424 including a first terminal coupled to a circuit element 426, a control terminal coupled to the capacitive node 420, and a second terminal coupled to the electrical ground 406. The circuit element 426 may be a circuit adapted to receive a clock signal, such as a receiver, a transmitter, another circuit, or any combination thereof.

In a particular illustrative embodiment, the programmable voltage level control circuit 430 may receive a control enable signal via the control enable input 434, which activates the transistor 444 to couple the transistor 442 between the node 450 and the capacitive node 420.
If the voltage level at the node 407 switches from low to high, the voltage level of the capacitive node 420 switches from high to low. The capacitor 422 discharges via the discharge path 428. When the voltage level at the node 407 reaches a logic high voltage level, it turns on the transistors 440, 442 and 446. The transistor 448 is not enabled, so the transistor 446 does not pass current. The transistor 444 is turned on by the control enable signal at the control enable input 434, and the transistor 442 passes current via the transistor 444 to the capacitive node 420. The transistors 440, 442 and 444 cooperate to pull down a voltage level of the node 450, thereby turning off the transistor 418 to prevent complete discharge of the capacitor 422 via the discharge path 428. In a particular illustrative embodiment, the transistors 440, 442, 444, 446, and 448 are coupled to the capacitive node 420 to provide a current feedback loop that operates to regulate the current flow through the transistor 418 to prevent complete discharge of the capacitor 422.

In a particular illustrative embodiment, the node 450 is isolated from the input 404. When the input signal applied to the input 404 is a clock signal, the node 450 is kept at a voltage level, such as the voltage level of the voltage source (VDD), until the level of the clock signal (CLK 2) at the node 420 falls to a voltage level that is at least one voltage threshold below the voltage level of the voltage source (VDD). When this voltage level is reached, the programmable voltage level control circuit 430 enables a sharp pulldown transition at the capacitive node 420.

FIGS. 5A and 5B are graphical representations of clock signals and adjusted clock signals having a reduced voltage swing implemented using systems of FIGS. 1-4. FIG.
5A is a graphical representation 500 illustrating a clock signal 502 (shown as a dashed line) that has a voltage swing between a logic low voltage level (Vss) and a logic high voltage level (VDD). The graphical representation 500 also includes a reduced swing clock signal (i.e., a second clock, CLK 2) 504. The clock signal 502 may be a signal that is received, for example, at one of the inputs 104, 204, 304, or 404 illustrated in FIGS. 1-4, respectively. The reduced swing clock signal 504 represents a corresponding signal at the line 108 in FIG. 1, at the node 220 in FIG. 2, at the line 314 in FIG. 3, or at the node 420 in FIG. 4. The reduced swing clock signal 504 has a low portion 506 that corresponds to a low portion 508 of the clock signal 502, but the voltage level of the low portion 506 and the low portion 508 have a voltage differential (ΔVSS), which represents a difference between a logic low voltage level and a first voltage level, for example.

FIG. 5B is a graphical representation 520 illustrating a clock signal 502 (shown as a dashed line) that has a voltage swing between a logic low voltage level (Vss) and a logic high voltage level (VDD). The clock signal 502 may be a signal that is received, for example, at one of the inputs 104, 204, 304, or 404 illustrated in FIGS. 1-4, respectively. The graphical representation 520 also includes a first reduced swing clock signal 504, a second reduced clock signal 524, a third reduced clock signal 526, and a fourth reduced clock signal 528. The first, second, third, and fourth reduced swing clock signals 504, 524, 526 and 528 may represent various voltage levels or tiers (generally indicated at 522), which may be selected by applying control signals to control inputs 434 and 436 of the programmable voltage control circuit 430 illustrated in FIG. 4, for example.
The first, second, third and fourth reduced swing clock signals 504, 524, 526, and 528 represent corresponding signals that appear at the line 108 in FIG. 1, at the node 220 in FIG. 2, at the line 314 in FIG. 3, or at the node 420 in FIG. 4. For example, the first, second, third, and fourth reduced clock signals 504, 524, 526, and 528 may be generated by controlling the voltage level adjustment circuits 320 and 412 illustrated in FIGS. 3 and 4, respectively, using the programmable voltage level control circuit 318 illustrated in FIG. 3 or the programmable voltage level control circuit 430 in FIG. 4, respectively. In a particular illustrative embodiment, the third reduced clock signal 526 illustrates a second clock (CLK 2) at the node 420 in FIG. 4, when two control enable signals are received at the programmable voltage level control circuit 430 via the control enable inputs 434 and 436 illustrated in FIG. 4.

FIG. 6 is a block diagram of a fifth particular illustrative embodiment of a system 600 to control a voltage swing. The system 600 includes a logic circuit element, such as a logic NAND gate 602, that includes a first input 604 to receive an input signal, such as a clock signal. The logic circuit element 602 also includes a second input that is coupled to an electrical ground 606. Since the second input is held at a logic low voltage level (i.e., a ground voltage level), the output of the logic NAND gate 602 at a node 607 represents an inverted version of the input signal at the first input 604.

The system 600 includes a p-channel transistor 608 and an n-channel transistor 610 arranged to form an inverter circuit. The p-channel transistor 608 includes a first terminal coupled to a node 611, a control terminal coupled to the node 607, and a second terminal coupled to a capacitive node 620.
The n-channel transistor 610 includes a first terminal coupled to the capacitive node 620, a control terminal coupled to the node 607, and a second terminal coupled to the electrical ground 606. The system 600 also includes a voltage level adjustment circuit 612 that has a transistor 616 and a transistor 618 arranged in parallel between a voltage source (VDD) and the node 611. The transistor 616 includes a first terminal coupled to the voltage source (VDD), a control terminal coupled to a power mode bypass enable input 614, and a second terminal coupled to the node 611. When a power mode bypass enable signal is received at the power mode bypass enable input 614, the transistor 616 couples the node 611 to the voltage source (VDD). The transistor 618 includes a first terminal coupled to the voltage source (VDD), a control terminal coupled to a node 636, and a second terminal coupled to the node 611.

The system 600 also includes a transistor 634 having a first terminal coupled to the node 636, a control terminal coupled to the node 607, and a second terminal coupled to the capacitive node 620. The system 600 further includes a transistor 632 including a first terminal coupled to the node 636, a control terminal coupled to the node 607, and a second terminal coupled to the electrical ground 606. Additionally, the system 600 includes a capacitor 622 coupled between the capacitive node 620 and the electrical ground 606. The system 600 also includes a transistor 624 including a first terminal coupled to a circuit element 626, a control terminal coupled to the capacitive node 620, and a second terminal coupled to the electrical ground 606. The circuit element 626 may include a data input 628 and an output 630.
In a particular illustrative embodiment, the circuit element 626 may be a receiver, a transmitter, a processor, another circuit element, or any combination thereof.

In a particular illustrative embodiment, when a clock signal at the input 604 transitions from a logic low to a logic high voltage level, the voltage level at the node 607 transitions from a logic high to a logic low voltage level, activating the transistors 608 and 634 and turning off the transistor 632. The capacitive node 620 may be electrically coupled to the voltage supply (VDD) via a charge path illustrated by a line 638. A voltage level of the capacitive node 620 charges to a first voltage level that is less than the level of the voltage source (VDD), because the transistor 634 couples the rising voltage at the capacitive node 620 to the node 636. Thus, a voltage at the node 636 increases, restricting or regulating current flow through the transistor 618 to the capacitive node 620. When the clock signal at the input 604 switches from high to low, the voltage level at the node 607 transitions from low to high, turning off the transistors 608 and 634 and activating the transistor 632 to pull down a voltage level at the node 636. Since the transistor 608 is turned off, current does not flow to the capacitive node 620.

In a particular illustrative embodiment, the voltage level adjustment circuit 612 may be utilized to reduce a logic high portion of the signal at the capacitive node 620 to a first voltage level that is less than the voltage level of the voltage source (VDD). Thus, for a clock signal at the input 604, the second clock signal (CLK 2) at the capacitive node 620 may swing between a logic low voltage level (i.e., a ground voltage level) and the first voltage level. The reduced voltage swing clock signal (i.e., CLK 2) may be provided as a clock signal to other circuit devices, such as the circuit element 626.
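The logic-high-side limiting just described can be modeled the same way as the discharge case: a charge path whose current tapers as the node approaches a ceiling below VDD. Again, all numeric values are illustrative assumptions rather than figures from this disclosure:

```python
# Behavioral sketch of a throttled charge path (in the spirit of the
# transistor 618 feedback via the node 636). Values are illustrative.
VDD = 1.0     # supply voltage, volts
MARGIN = 0.4  # illustrative margin below VDD at which charging stops
GM = 1e-4     # illustrative transconductance, amps per volt
C = 1e-12     # illustrative node capacitance, farads
DT = 1e-12    # simulation time step, seconds

def charge(v0: float = 0.0, steps: int = 200_000) -> float:
    """Charge the node from v0; current tapers to zero as the node
    approaches VDD - MARGIN, so the node never reaches VDD."""
    v = v0
    for _ in range(steps):
        i = GM * max(0.0, (VDD - MARGIN) - v)
        v += i * DT / C
    return v

v_high = charge()  # settles near VDD - MARGIN, below the supply rail
```

Here the logic high level of the second clock settles below the supply rail, so the node swings between ground and a first voltage level less than VDD, as described for the capacitive node 620.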
By reducing the swing of the clock signal, overall power consumption of the circuit may be reduced.

FIG. 7 is a block diagram of a sixth particular illustrative embodiment of a system 700 to control a voltage swing. The system 700 includes a circuit element, such as a logic NAND gate 702 including a first input 704 to receive a signal, such as a clock signal. The logic NAND gate 702 also includes a second input coupled to an electrical ground 706. Since the second input to the logic NAND gate 702 is held at a voltage low level, the output of the logic NAND gate 702 at a capacitive node 707 represents an inverted version of the input signal at the first input 704.

The system 700 includes a p-channel transistor 708 and an n-channel transistor 710 arranged to form an inverter circuit. The p-channel transistor 708 includes a first terminal coupled to a node 713, a control terminal coupled to the capacitive node 707, and a second terminal coupled to a capacitive node 712. The n-channel transistor 710 includes a first terminal coupled to the capacitive node 712, a control terminal coupled to the capacitive node 707, and a second terminal coupled to a node 711. The system 700 includes a logic high voltage level adjustment circuit 722 coupled between the node 713 and a voltage source (VDD) and includes a logic low voltage level adjustment circuit 734 coupled between the node 711 and an electrical ground 706. The system 700 includes a capacitor 714 coupled between the capacitive node 712 and the electrical ground 706. The system 700 also includes a transistor 716 having a first terminal coupled to a circuit element 718, a control terminal coupled to the capacitive node 712, and a second terminal coupled to the electrical ground 706. In a particular illustrative embodiment, the circuit element 718 may be a receiver circuit, a transmitter circuit, another circuit element that receives a reduced voltage swing signal via the capacitive node 712, or any combination thereof.
The circuit element 718 may include a data input 719 and an output 720.

The logic high voltage level adjustment circuit 722 includes a first transistor 726 and a second transistor 728 coupled in parallel between the voltage source (VDD) and the node 713. The first transistor 726 includes a first terminal coupled to the voltage source (VDD), a control terminal coupled to a logic high power mode control bypass terminal 724 to receive a logic high power mode control bypass signal, which enables the system 700 to bypass the logic high voltage level adjustment circuit 722, and a second terminal coupled to the node 713. The transistor 728 includes a first terminal coupled to the voltage supply (VDD), a control terminal coupled to a logic high level control circuit 730, and a third terminal coupled to the node 713. The logic high level control circuit 730 may be coupled to the capacitive node 712 and may include one or more control inputs 732 to receive one or more control input signals to adjust a logic high voltage level for the system 700. In a particular illustrative embodiment, the logic high level control circuit 730 is adapted to reduce the logic high voltage level to a first logic high voltage level based on the logic high control input signals.

The logic low voltage level adjustment circuit 734 includes a first transistor 738 and a second transistor 740 arranged in parallel between the node 711 and the electrical ground 706. The first transistor 738 includes a first terminal coupled to the node 711, a control terminal coupled to a bypass input 736 to receive a logic low power mode control bypass signal, and a second terminal coupled to the electrical ground 706. The second transistor 740 includes a first terminal coupled to the node 711, a control terminal coupled to a logic low level control circuit 742, and a second terminal coupled to the electrical ground 706.
When a logic low power mode control bypass signal is applied to the bypass input 736, the logic low voltage level adjustment circuit 734 is bypassed to electrically couple the node 711 to the electrical ground 706. The logic low level control circuit 742 is coupled to the capacitive node 712 and includes one or more control inputs 744 to receive one or more logic low control signals, which control the logic low level control circuit 742 to adjust a logic low voltage level of the logic low voltage level adjustment circuit 734.

In a particular illustrative embodiment, the logic high voltage level adjustment circuit 722 and the logic low voltage level adjustment circuit 734 cooperate to clamp a voltage swing of a signal at the node 712 between a high voltage level that is less than the supply voltage (VDD) and a low voltage level that is greater than a ground voltage (i.e., electrical ground 706). Additionally, the logic high level control circuit 730 and the logic low level control circuit 742 may be implemented using transistors. The logic high level control circuit 730 and the logic low level control circuit 742 may be controlled by the one or more control input signals via the logic high control inputs 732 and the logic low control inputs 744 to reduce the high voltage level and to increase the low voltage level to tune the voltage swing at the node 712.

FIGS. 8A and 8B are graphical representations of clock signals and adjusted clock signals having a reduced voltage swing implemented using systems of FIGS. 6 and 7. FIG. 8A is a graphical representation 800 illustrating a clock signal 802 having a voltage swing from a low voltage level (Vss) to a high voltage level (VDD). In this instance, a logic high voltage level adjustment circuit, such as the voltage level adjustment circuit 612 illustrated in FIG. 6, may reduce a logic high portion of the clock signal 802 to a reduced clock signal 804 (i.e., a second clock signal, CLK 2).
The difference between the logic high portion of the clock signal 802 and the reduced clock signal 804 is a differential voltage (ΔVDD). By using the reduced clock signal 804 to provide a clock signal to various circuit components, the overall power consumption of a circuit device may be reduced.

FIG. 8B is a graphical representation 820 illustrating a clock signal 802 having a voltage swing from a low voltage level (Vss) to a high voltage level (VDD). In this instance, a logic high voltage level adjustment circuit and a logic low voltage level adjustment circuit, such as the voltage level adjustment circuits 722 and 734 illustrated in FIG. 7, may cooperate to produce a second clock signal having a reduced voltage swing, such as the reduced clock signal 824. In this instance, the reduced clock signal 824 varies from the input clock signal 802 at both the logic low and the logic high portions of the signal. The differential logic high voltage (ΔVDD) and the differential logic low voltage (ΔVSS) represent reductions in the clock voltage swing, which may result in reduced power consumption for the circuit.

FIG. 9 is a flow diagram of a particular illustrative embodiment of a method of controlling a voltage swing. At 902, a clock signal is received at an input to a digital circuit device that includes a capacitive node. Advancing to 904, a voltage level adjustment circuit is selectively activated to increase a logic low portion of the clock signal applied to the capacitive node to a first voltage level that is greater than a ground voltage level. Moving to 906, a first control signal is received at a voltage level control circuit coupled to the voltage level adjustment circuit. Proceeding to 908, the voltage level of the logic low portion of the clock signal is increased to a second voltage level that is greater than the first voltage level. The method terminates at 910.

In general, the voltage level adjustment circuit may be adjustable.
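The two-sided limiting of FIG. 7, illustrated in FIG. 8B, amounts behaviorally to clamping each clock sample between a raised floor and a lowered ceiling. A minimal sketch, with illustrative ΔVDD and ΔVSS values chosen for this example only:

```python
# Behavioral sketch of the two-sided clamp (the logic high and logic low
# voltage level adjustment circuits cooperating). Values are illustrative.
VDD, GND = 1.0, 0.0
DELTA_VDD = 0.2   # illustrative reduction of the logic high level
DELTA_VSS = 0.2   # illustrative increase of the logic low level

def clamp(v: float) -> float:
    """Map a full-swing clock sample to the reduced-swing node voltage."""
    return max(GND + DELTA_VSS, min(VDD - DELTA_VDD, v))

full_swing_clock = [0.0, 1.0, 0.0, 1.0]
reduced_clock = [clamp(v) for v in full_swing_clock]  # swings 0.2 V to 0.8 V
```

The clamped waveform corresponds to the reduced clock signal 824, which departs from the input clock at both rails.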
In a particular illustrative embodiment, the voltage level adjustment circuit may be coupled to a programmable voltage level control circuit, which may receive one or more control signals to regulate current flow through the voltage level adjustment circuit. By regulating the current flow, the voltage level adjustment circuit prevents a capacitive node from discharging to a ground voltage, prevents the capacitive node from charging to a voltage level of a voltage source (VDD), or both. Thus, a voltage swing of the signal at the capacitive node is clamped to reduce the voltage swing and thereby to reduce power consumption. Additionally, since the capacitor need not recharge to the level of the voltage source (VDD) nor discharge to the ground voltage level (Vss), the capacitor may switch faster.

In general, while the capacitive node illustrated in FIGS. 1-4, 6 and 7 was shown in conjunction with a discrete capacitor circuit component, it should be understood that the capacitor may represent line and gate capacitances associated with other circuit components.

FIG. 10 is a block diagram of a wireless communication device 1000 that includes a circuit device 1011 to control a voltage swing, which may be one of the circuit devices illustrated in FIGS. 1-4, 6 and 7 or which may implement the method illustrated and described with respect to FIG. 9. The portable communications device 1000 includes an on-chip system 1022 that includes a processor, such as a digital signal processor 1010. The digital signal processor 1010 includes at least one device having a voltage swing adjustment circuit 1011, as described with respect to FIGS. 1-4, 6, 7 and 9. In a particular illustrative embodiment, the voltage swing adjustment circuit 1011 may generate a reduced voltage swing signal to be used in high speed processors, such as the digital signal processor 1010, and system on chip devices, such as the on-chip system 1022.
The reduced voltage swing signal may reduce active power consumption through reduced voltage swing on signal buses and clock buses. In a particular illustrative embodiment, the voltage swing adjustment circuit 1011 may provide the reduced voltage swing signal without impacting processing speed, without introducing separate power supplies, and with little circuit area impact. In a particular illustrative embodiment, the voltage swing adjustment circuit 1011 may be programmable to selectively adjust the range of the voltage swing.

FIG. 10 also shows a display controller 1026 that is coupled to the digital signal processor 1010 and to a display 1028. Moreover, an input device 1030 is coupled to the digital signal processor 1010. Additionally, a memory 1032 is coupled to the digital signal processor 1010. A coder/decoder (CODEC) 1034 can also be coupled to the digital signal processor 1010. A speaker 1036 and a microphone 1038 can be coupled to the CODEC 1034.

FIG. 10 also indicates that a wireless controller 1040 can be coupled to the digital signal processor 1010 and to a wireless antenna 1042. In a particular embodiment, a power supply 1044 is coupled to the on-chip system 1022. Moreover, in a particular embodiment, as illustrated in FIG. 10, the display 1028, the input device 1030, the speaker 1036, the microphone 1038, the wireless antenna 1042, and the power supply 1044 are external to the on-chip system 1022. However, each is coupled to a component of the on-chip system 1022.

In a particular illustrative embodiment, the voltage swing adjustment circuit 1011 may be used to enhance overall performance of the portable communications device 1000.
In particular, the voltage swing adjustment circuit 1011 may reduce overall clock power consumption of the device 1000, thereby extending battery life, improving power efficiencies overall and enhancing the performance of the device 1000.

It should be understood that while the voltage swing adjustment circuit 1011 is shown only within the digital signal processor 1010, the voltage swing adjustment circuit 1011 may be provided in other components, including the display controller 1026, the wireless controller 1040, the CODEC 1034, or any other component that receives or uses a clock signal, such as a logical latch circuit, a logical flip-flop circuit, other clocked circuitry, or any combination thereof.

In general, embodiments of the voltage swing adjustment circuit 1011 provide significant advantages over prior art voltage swing reduction techniques. In a particular illustrative embodiment, the voltage swing adjustment circuit 1011 may provide as much as 33 percent power savings on a net of a circuit device without adversely impacting timing. Instead, because the voltage swing is reduced, the timing of the circuit may be enhanced, i.e., sped up. Additionally, the voltage swing can be reduced without introducing additional biases or extra power supplies. Embodiments disclosed herein include bypass logic to allow the device to bypass the power savings in particular instances. Moreover, the implementations illustrated and described herein may be scaled for higher voltages and can be mixed and matched based on robustness, timing and power tradeoffs to reduce a logic high voltage level, to increase a logic low voltage level, or both.
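Savings of this order are consistent with the usual dynamic power relation: the energy drawn from the supply per cycle to recharge a node from its low level back to VDD is C·(VDD − V_low)·VDD, so supply energy scales linearly with the swing. A back-of-envelope check with illustrative numbers (the capacitance and frequency below are assumptions, not figures from this disclosure):

```python
def clock_power(c: float, f: float, vdd: float, v_low: float) -> float:
    """Average power drawn from a VDD supply by a node toggling at
    frequency f between v_low and vdd: C * (vdd - v_low) * vdd * f."""
    return c * (vdd - v_low) * vdd * f

C, F, VDD = 1e-12, 1e9, 1.0                 # illustrative: 1 pF net, 1 GHz clock
p_full = clock_power(C, F, VDD, 0.0)        # full-swing clock
p_reduced = clock_power(C, F, VDD, VDD / 3) # floor raised by one third of VDD
savings = 1.0 - p_reduced / p_full          # one-third smaller swing -> ~33% less
```

Under this simple model, reducing the swing by one third reduces the supply power drawn by the net by about one third, in line with the 33 percent figure quoted above.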
Another advantage provided by embodiments of the voltage swing adjustment circuit 1011 is that the circuit reduces the voltage swing without compromising the signal integrity.

Those of skill would further appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, PROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal. 
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims.
In-circuit-emulation of an integrated circuit including a digital data processor capable of executing program instructions. A debug event detector detects a predetermined debug event. Upon detection of a debug event, the in-circuit-emulator suspends program execution except for real time interrupts. An emulation monitor program permitting visibility into the state of the integrated circuit is run as such a real time interrupt. The integrated circuit includes a serial scan path for control of the state of the integrated circuit, such as a JTAG interface. The in-circuit-emulator selectively assigns emulation resources of the integrated circuit to one of the serial scan path or the monitor program. A monitor privilege input controls this assignment by its digital state. The emulation resource may be a read write data register and the assignment includes accessing the data register.
1. A method of in circuit emulation of an integrated circuit including a digital data processor capable of executing program instructions, the integrated circuit including a serial scan path for control of the state of the integrated circuit, comprising the steps of:
detecting a predetermined debug event;
upon detection of said predetermined debug event suspending program execution except for at least one type of interrupt;
executing an emulation monitor program via said at least one type of interrupt; and
selectively assigning control of at least one emulation resource of the integrated circuit to one of said serial scan path or said monitor program.
2. The method of claim 1, wherein the integrated circuit includes a monitor privilege input, said method wherein:
said step of selectively assigning emulation resources of the integrated circuit assigns said emulation resources to said serial scan path upon a first digital state of said monitor privilege input and assigns said emulation resources to said emulation monitor program upon a second digital state of said monitor privilege input.
3. The method of claim 1, wherein the emulation resources include at least one read write data register, said method further comprising:
said step of selectively assigning emulation resources of the integrated circuit includes accessing said at least one read write data register.
This application claims priority under 35 USC [section]119(e)(1) of Provisional Application No. 60/120,683, filed Feb. 19, 1999.

This application is related to co-assigned applications, all of which are incorporated herein by reference:
U.S. patent application Ser. No. 09/154,385, entitled "METHOD OF INITIALIZING A CPU CORE FOR EMULATION", filed Sep. 16, 1998, now U.S. Pat. No. 6,167,365 granted Dec. 26, 2002;
U.S. patent application Ser. No. 09/483,367, entitled "EMULATION SUSPEND MODE WITH DIFFERING RESPONSE TO DIFFERING CLASSES OF INTERRUPTS", claiming priority from U.S. Provisional Application No. 60/120,809 filed Feb. 19, 1999;
U.S. patent application Ser. No. 09/481,852, entitled "EMULATION SUSPENSION MODE WITH STOP MODE EXTENSION", claiming priority from U.S. Provisional Application No. 60/120,809 filed Feb. 19, 1999;
U.S. patent application Ser. No. 09/483,568, entitled "EMULATION SUSPEND MODE HANDLING MULTIPLE STOPS AND STARTS", claiming priority from U.S. Provisional Application No. 60/120,809 filed Feb. 19, 1999;
U.S. patent application Ser. No. 09/483,697, entitled "EMULATION SUSPEND MODE WITH FRAME CONTROLLED RESOURCE ACCESS", claiming priority from U.S. Provisional Application No. 60/120,809 filed Feb. 19, 1999;
U.S. patent application Ser. No. 09/482,902, entitled "EMULATION SUSPEND MODE WITH INSTRUCTION JAMMING", claiming priority from U.S. Provisional Application No. 60/120,809 filed Feb. 19, 1999;
U.S. patent application Ser. No. 09/483,237, entitled "EMULATION SYSTEM WITH SEARCH AND IDENTIFICATION OF OPTIONAL EMULATION PERIPHERALS", claiming priority from U.S. Provisional Application No. 60/120,960 filed Feb. 19, 1999;
U.S. patent application Ser. No. 09/483,783, entitled "EMULATION SYSTEM WITH ADDRESS COMPARISON UNIT AND DATA COMPARISON UNIT OWNERSHIP ARBITRATION", claiming priority from U.S. Provisional Application No. 60/120,791 filed Feb. 19, 1999;
U.S. patent application Ser. No. 09/481,853, entitled "EMULATION SYSTEM WITH PERIPHERALS RECORDING EMULATION FRAME WHEN STOP GENERATED", claiming priority from U.S. Provisional Application No. 60/120,810 filed Feb. 19, 1999; and
U.S. patent application Ser. No. 09/483,321, entitled "EMULATION SYSTEM EMPLOYING SERIAL TEST PORT AND ALTERNATIVE DATA TRANSFER PROTOCOL", claiming priority from U.S. Provisional Application No. 60/120,667 filed Feb. 19, 1999.

TECHNICAL FIELD OF THE INVENTION

The technical field of this invention is complex integrated circuits including embedded digital processor cores, and more particularly in circuit emulation of integrated circuits with embedded digital processor cores.

BACKGROUND OF THE INVENTION

Programmable digital processors such as microprocessors and digital signal processors have become very complex machines. Testing these programmable digital processors has also become a complex task. It is now common for semiconductor manufacturers to build single integrated circuit programmable digital processors with millions of transistors. The current trend is to devote many of these transistors to on-chip cache memories. Even so, the number of logic circuits and their complex relationships makes testing such integrated circuits an increasingly difficult task.

A trend in electronics makes this testing problem more difficult. Single integrated circuit programmable digital processors make up more and more of the electronics of many end products. A single integrated circuit used in this way typically includes a programmable digital processor, read only memory storing the base program, read/write memory for operation and a set of peripherals selected for the particular product. This trend is known as system level integration. In the ultimate system level integration, all the electronics are embodied in a single integrated circuit. This level of integration is now achieved in electronic calculators. 
Many electronic calculators consist of a single integrated circuit, a keyboard, a display, the battery or solar panel power source and a plastic case. Such integration provides less "visibility" into the operation of the programmable digital signal processor. Because the address and data busses of the digital processor are no longer brought out to the device pins, it is more difficult to determine the behavior of the embedded processor from external connections.

Another trend in electronics makes this testing problem more difficult. Many new product applications require differing types of processing. Often control processes and user interface processes are better handled with a different programmable digital processor than digital signal processes. An example is wireless telephones. Many coding/decoding and filtering tasks are best handled by a digital signal processor (DSP). Other tasks such as dialing, controlling user inputs and outputs are best handled by microprocessors such as a RISC (Reduced Instruction Set Computer) processor. There is a trend for a system integrated circuit to include both a RISC processor and a DSP. These two processors will typically operate independently and employ differing instruction set architectures. Thus there may be more than one programmable digital processor on a single integrated circuit, each having limited visibility via the device pins.

Another problem is product emulation when employing these programmable digital processors. Product development and debugging is best handled with an emulation circuit closely corresponding to the actual integrated circuit to be employed in the final product. In circuit emulation (ICE) developed in response to this need. An integrated circuit with ICE includes auxiliary circuits not needed in the operating product, included solely to enhance emulation visibility. 
In the typical system level integration circuit, these emulation circuits use only a very small fraction of the number of transistors employed in operating circuits. Thus it is feasible to include ICE circuits in all integrated circuits manufactured. Since every integrated circuit can be used for emulation, inventory and manufacturing need not differ between a normal product and an emulation enhanced product.

As a result of these trends there is a need in the art for integrated circuits which are easier to test and easier to emulate.

SUMMARY OF THE INVENTION

This invention involves in-circuit-emulation of an integrated circuit. The integrated circuit includes a digital data processor capable of executing program instructions. A debug event detector detects a predetermined debug event. Upon detection of a debug event, the in-circuit-emulator suspends program execution except for real time interrupts. An emulation monitor program permitting visibility into the state of the integrated circuit is run as such a real time interrupt.

The integrated circuit includes a serial scan path for control of the state of the integrated circuit, such as a JTAG interface. The in-circuit-emulator selectively assigns emulation resources of the integrated circuit to one of the serial scan path or the monitor program. A monitor privilege input controls this assignment by its digital state. The emulation resource may be a read write data register and the assignment includes accessing the data register.

These and other aspects of this invention are illustrated in the drawings, in which:
FIG. 1 illustrates the environment of the debugging system of this invention which is known in the art;
FIG. 2 illustrates the known 14-pin JTAG header used to interface the target system to the access adapter;
FIG. 3 illustrates an emulation level view of the target system;
FIG. 4 illustrates an electrical connection view of the coupling between the access adapter and the target system; and
FIG. 5 illustrates the possible operation states in the debugging environment of the preferred embodiment of this invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

FIG. 1 illustrates the environment of the debugging system of this invention. This environment connects high level debugging software executing on a debug host computer 1 to a low level debug interface supported by the target system 3. In this invention the target system 3 may include more than one programmable digital processor and possibly more than one such programmable digital processor on a single integrated circuit. In this application the term programmable digital processor is meant to encompass devices commonly known as microprocessors, microcontrollers and digital signal processors. The target system 3 provides a standard interface to the access adapter 2.

Debug host computer 1 consists of a computer, for example a PC, running a CPU core specific software debugger as one of its tasks. The debug host computer 1 allows the user to issue high level commands such as setting breakpoints, single stepping the programmable digital processor in target system 3 and displaying the contents of a memory range.

Access adapter 2 is a combination of hardware and software that connects the debug host computer 1 to target system 3. Access adapter 2 utilizes one or more hardware interfaces and/or protocols to convert messages created by user interface commands of debug host computer 1 into debug commands operable on target system 3. Access adapter 2 can be either loosely coupled or tightly coupled to the debug host computer 1. In the case of a PC host computer, access adapter 2 can be an XDS 510 scan controller attached directly to the PC bus. This implements a tightly coupled configuration requiring the PC to perform even the lowest level actions necessary to manage debug activity. 
In loosely coupled configurations, the debug host computer 1 CPU communicates with another processor over a high bandwidth interface such as a SCSI, LAN or other interface. An example of this is a XDS 510WS controller connected to the target system debug interface and to the PC through a SCSI port. In this case access adapter 2 is a debug subsystem manager and handles the detailed manipulation of the target debug capability, and debug host computer 1 sends high level commands to the debug subsystem. Access adapter 2 returns data and error conditions to debug host computer 1. Current PC operating systems may not be able to service the low level debug requirements continuously. Thus it may be advantageous to partition the problem into the display and user interface and control sections.

The target system 3 contains one or more programmable digital processor cores. The programmable digital processor core(s) contain hardware designed explicitly to ease debugging. This special hardware of target system 3 is the lowest element of the system debug environment 10. The programmable digital processor core debug facilities allow the user to control the program execution and examine or change system memory and core CPU resources in real-time.

The interface of access adapter 2 to target system 3 is preferably an extension to the IEEE 1149.1 (JTAG) test standard. The JTAG standard includes 5 primary signals known as nTRST, TCK, TMS, TDI, and TDO. The JTAG standard typically employs three additional signals: Test Clock Out (TCKO), the target supply (Vdd) and ground (GND). The preferred embodiment of this invention also employs the two extension signals nET1 and nET0. 
Table 1 lists these signals, states whether each signal is an input, an output or both, and gives the descriptive name of the signal.

TABLE 1
  Pin       Input/Output   Description
  nTRST     I              Test Logic Reset Not
  TCK       I              Test Clock
  TMS       I              Test Mode Select
  TDI       I              Test Data Input
  TDO       O              Test Data Output
  TCKO      O              Test Clock Out
  PD(Vdd)   I              Target Power Supply
  GND       I/O            Ground
  nET1      I/O            Emulation and Test 1 Not
  nET0      I/O            Emulation and Test 0 Not

The signal nTRST is called Test Logic Reset Not. A low applied to this pin causes all test and debug logic in the target device to be reset along with the IEEE 1149.1 interface.

The signal TCK is called Test Clock. This signal is used to drive the IEEE 1149.1 state machine and logic. The same TCK supplied to the target device is supplied to this pin.

The signal TMS is called Test Mode Select. This signal directs the next state of the IEEE 1149.1 test access port state machine.

The signal TDI is called Test Data Input. This signal is the scan data input to the target device.

The signal TDO is called Test Data Output. This signal is the scan data output of the target device.

FIG. 2 illustrates a 14-pin JTAG header used to interface target system 3 to access adapter 2. The JTAG header requires three additional pins and further includes two extension pins. The signal TCKO is called Test Clock Out. This signal is a test clock supplied by the scan controller to the target system. This test clock can be used as the system TCK source, or it can be ignored with the TCK source being generated by the target system. In many target systems, TCKO is simply connected to TCK and used as the test clock. The signal PD(Vdd) is called the Target Power Supply. This is used as a power detect input to access adapter 2. 
The JTAG header also includes ground connections.

The preferred embodiment of this invention includes an extension to the JTAG interface. Two pins nET0 and nET1 serve as a two pin trigger channel function. This function supplements the serial access capability of the standard interface with continuous monitoring of device activity. The two added pins create debug and test capabilities that cannot be created with the standard interface. The nET0 signal is called Emulation and Test 0 Not. This signal helps create a trigger to channel zero. Similarly, the nET1 signal is called Emulation and Test 1 Not. This signal helps create a trigger to channel one. These channels will be further explained below.

FIG. 3 illustrates an emulation level view of target system 3. Target system 3 may include plural devices 11, 13 and 15. FIG. 3 illustrates details of example device 13, which includes plural megamodules 21, 23 and 25. FIG. 3 illustrates details of example megamodule 23. Example megamodule 23 includes debug and test control unit 30 and plural device domains. These device domains include central processing unit (CPU) core 31, analysis unit 33, memory 35 and debug/test direct memory access (DT_DMA) unit 37.

Debug and test control unit 30 contains the IEEE interface. It provides a bridge between the Extended IEEE Interface and the debug and test capability distributed through the domains. Debug and test control unit 30 can independently control the domains 31, 33, 35 and 37. In other words, one or more domains can be scanned or controlled while other domains continue to operate in their normal functional way.

FIG. 4 illustrates an electrical connection view of the coupling between access adapter 2 and target system 3. FIG. 4 shows the connections of the various signals of the JTAG header 5 illustrated in FIG. 2. All these signals are connected to scan controller 41. The signals nTRST, TCK and TMS are connected to two example megamodules 31 and 33. FIG. 4 illustrates the optional connection of TCKO to the target system clock SYSCLK. The scan input TDI connects to a scan input of megamodule 31. The scan output of megamodule 31 supplies the scan input of megamodule 33. The scan output of megamodule 33 supplies the scan output TDO. The two extension signals nET0 and nET1 control megamodules 31 and 33 via merge unit 32. These extension signals are monitored by test equipment 43.

The debugging environment illustrated in FIGS. 1 to 4 permits the user to control application execution by any programmable digital processor of target system 3. Typical control processes include: keyboard directives such as run, halt and step; software breakpoints using op-code replacement; internal analysis breakpoints specified by the program counter or watchpoints specified by data accesses; and externally generated debug events.

Actions such as decoding a software breakpoint instruction (DSTOP), the occurrence of an analysis breakpoint or watchpoint (ASTOP), or the occurrence of a debug host computer event (HSTOP) are referred to as debug events. Debug events cause execution to suspend. Debug events tied to the execution of specific instructions are called breakpoints. Debug events generated by memory references are called watchpoints. External debug events can also suspend execution. Debug events cause entry into the Debug State.

FIG. 5 illustrates the possible operation states in the debugging environment of the preferred embodiment of this invention. These operation states are execute (EXE) 101, debug suspend (DSUSP) 102 and interrupt during debug suspend (IDS) 103.

Execute state 101 corresponds to the ordinary operation of target device 3. In the execute state 101 instructions are executed by the programmable digital processor in normal fashion. There are no outstanding debug suspend conditions. 
A low logic level applied to the nTRST terminal or a software directive requesting functional run forces the operational state to execute state 101. In execute state 101 the pipeline fetches and executes instructions and processes interrupts in a normal way.

The operational state transits from execute state 101 to debug suspend state 102 upon a debug event. The debugging environment of the preferred embodiment of this invention allows the suspension of program execution at points defined by breakpoints, watchpoints, and debug software directives, provided the application is in an allowable debug suspend window. In general, debug events are allowed at an instruction boundary, when reset is inactive and no interrupts are active. Program execution suspends at an instruction boundary and the operational state changes to debug suspend state 102. When any debug condition is not met, the operational state remains in execute state 101, and no debug event processing occurs in the delay slots of delayed branch instructions. Debug events occurring outside the debug suspend window create a debug pending condition. This condition suspends program execution when the application enables debug interrupts by opening the debug suspend window.

In the debug suspend state 102 background instruction execution is inactive. This state permits debug/emulation visibility into the state of target device 3 while background execution is suspended. In debug suspend state 102, the program counter (PC) and status bits are generally preserved at their values prior to the debug event. The PC points to the instruction to be executed next. When execution resumes, the instruction referenced by the PC and those following are fetched for execution. This is facilitated by flushing the front end of the pipeline upon entry into debug suspend state 102 from execute state 101.

The operational state may return to execute state 101 by a debug run directive. 
This may be either an unconditional run directive or a single step run directive. A single step run directive enters execute state 101 long enough to advance the instruction pipeline one stage and then returns to debug suspend state 102.

It is important to note that debug suspend state 102 consumes no CPU bandwidth because no monitor code executes as a result of suspending execution. The debug architecture provides for complete register and memory accessibility without the aid of a monitor program. The user may change the operating mode at any time without restrictions.

Certain interrupts transition the operational state from debug suspend state 102 to interrupt during suspend (IDS) state 103. These interrupts are defined by a separate interrupt mask independent of the normal interrupt mask. Those interrupts defined as high priority interrupts (HPI) or foreground interrupts cause the operational state to enter the interrupt during suspend state 103 from the debug suspend state 102. The debug suspend state 102 enables high priority interrupts independent of the state of the global interrupt enable bit or of software run directives. This enables debugging of background tasks while the target device 3 continues to service a real time application via high priority interrupts.

The interrupt pipeline jam for such a high priority interrupt moves the operational state to interrupt during suspend state 103. This jam causes an extra word to be pushed on the stack containing the debug status describing the reason the debug suspend state 102 entry occurred. Interrupt during suspend state 103 differs from the execute state 101 in that the interrupt processing creates a thread, linking the interrupt execution to the debug suspend state 102 as described above. A digital frame counter (DFC) is incremented upon each high priority interrupt taken. The high priority interrupt sets an interrupt during debug state bit (IDS), which is part of the CPU status. 
The IDS bit sets after the context save stores the previous value on the stack with the status word. When returning from an interrupt the IDS bit indicates whether to re-enter debug suspend state 102. If the IDS bit is set, the interrupt occurred during a debug suspend state 102 and the operational state should return to the debug suspend state 102. If the IDS bit is not set, the interrupt occurred during the execute state 101 and the operational state should return to execute state 101. Upon returning from the interrupt, the PC and status return to their state before the interrupt unless the interrupt service routine has purposely modified values on the stack. This is required because it is possible for multiple interrupts to occur and be serviced while the device is in debug suspend state 102.

The digital frame counter is decremented upon each return from interrupt. This count permits the debug environment to track the status of the suspended foreground task. For example, a taken high priority interrupt may change the machine state and thus the current machine state would not reflect the suspended background task. However, if the digital frame counter were zero, then the debug environment is assured no interrupts have temporarily changed the machine state.

The interrupt during suspend state 103 is exited at the end of the interrupt service routine. A normal end of an interrupt involves a return from interrupt instruction (RTI). Upon execution of a return from interrupt instruction, the machine status is popped from the stack. As noted above, the interrupt during debug state bit indicates whether the interrupt occurred during execute state 101 or debug suspend state 102. The operational state returns to the former state as indicated by the interrupt during debug state bit. The prior value of the program counter is reloaded to recover the prior machine status. Execution of a return from interrupt instruction also decrements the digital frame counter. 
Because it is possible to receive a higher priority interrupt while servicing a prior interrupt, more than one interrupt level may be pending. The digital frame counter indicates the current interrupt level. This is useful to debug processing as the machine status may be changed by the multiple interrupts. The debug software can read the digital frame counter and supply a debug level identity to identify the currently targeted interrupt level. Execution and register operations target a specific debug level. Memory operations can target a specific debug level or bypass the level comparison. In particular, the status of the background task suspended on initial entry into debug suspend state 102 can only be reliably determined if the digital frame counter is zero. The maximum number of levels of the digital frame counter corresponds to the size of the interrupt hierarchy. This data preserves a path back to the debug suspend state 102 when the application concludes the interrupt service routine with a return from interrupt instruction.

The interrupt during suspend state 103 transits to the execute state 101 upon execution of an abort interrupt (ABORTI) instruction. The abort interrupt instruction would ordinarily be used only on detection of an unrecoverable error in the interrupt service routine. The path back to the debug suspend state is broken upon execution of the abort interrupt instruction. The status of the interrupt during debug state bit and the digital frame counter are ignored in this case. In particular, the interrupt during debug state bit is cleared and the digital frame counter is set to zero. This mechanism enables recovery to the background task when a high priority interrupt service routine has an unrecoverable error.

Interrupts can be serviced while the debug software views or modifies the CPU state. The debug state shown to the programmer reflects the machine state when there is no interrupt service routine active. 
The debug software works with on-chip debug support to ensure the high priority interrupts are transparent to a debug session. Likewise this hardware and software combination works to make debug activity transparent to high priority interrupt service routines. Note that program execution can actually be suspended in multiple locations if it is desired to break within a time critical interrupt while still allowing others to be serviced.

Table 2 lists all the debug related registers included in each megamodule. Miscellaneous control bits supporting the JTAG interface are not included in this list. Most but not all of the debug unit registers are placed in the memory map so they are accessible by both debug software and the application. There are three levels of register access: registers always shared by the application and debug facilities; registers accessed through the extended JTAG port only; and registers accessed through the extended JTAG port or a specially privileged monitor program but not shared.

The application and debug software share registers controlling external trigger event inputs, breakpoints and watchpoints, data logging, parallel signature analysis, and count functions. The application and debug software cannot simultaneously own these resources but establish ownership and release ownership through memory mapped control registers continuously visible to both the application and debug software. The debug software has the ability to seize any resource if necessary, or negotiate with the application through software sequences.

Other registers are specific to JTAG scan support and can never be accessed by the application. This class of registers is clocked with TCK and includes the JXREG, GPSR, EXSR, and IR_LTCH registers. Another register, the MF_REGS_1 register, is clocked with FCK but is not accessible to the application. This register controls the device operational state as illustrated in FIG. 5, special reset modes, test modes, clock source selection and the like.

A third class of registers is accessible through JTAG and accessible to the application if special privileges are granted to a monitor function via a megamodule terminal MON_PRIV. When this terminal is grounded the application cannot access this register class. When this terminal is a logic 1, the application code can access a debug control register normally controlled by JTAG scans. This register contains nET0 and nET1 pin control, execution control and the debug frame reference register.

During normal operation, when MON_PRIV is a 1, the application owns the MF_REGS_0 resources. They cannot be accessed via JTAG scan as this amounts to dual allocation of resources. The monitor program must manage execution control and other debug resources. The monitor program can communicate with the debug software through a serial port or other mechanism that is not the JTAG interface. This allows the extended JTAG port to be operated in the Hidden IEEE 1149.1 format. This allows the application to assign system functions to some or all of the extended JTAG port pins, with the extended JTAG port available for production test. The drawbacks of this approach are simply diminished capability on a number of fronts.

This approach requires a monitor program and additional memory resources. The data logging capabilities through the JTAG interface are lost. This approach brings along with it the traditional class of problems associated with asynchronous communication that may be laced with resets and other system upsets. 
In spite of these disadvantages, the advantages of using a system communication mechanism with a smaller number of debug related pins can outweigh the disadvantages in some systems.

TABLE 2

  Width  Memory Mapped  Register Name  Description
  8      No             IR_LTCH        Latched Instruction Register
  6      No             EXSR           Extended Shift Register
  32     No             JXREG          JTAG Transfer Register
  32     No             GPSR           General Purpose Shift Reg.
  32     No**           FXREG          Functional Transfer Register
  32     No             MF_REGS_1      Misc. Function Register 1
  32     Yes            MF_REGS_0      Misc. Function Register 0
  16     Yes            DBG_STATUS     Debug status
  16     Yes            ECNTL          External Event Control
  16     Yes            ACNTL          Address Unit Control
  32     Yes            AMSK           Adrs. Mask Register
  32     Yes            AREF           Adrs. Reference Register
  16     Yes            DCNTL          Data Unit Control
  32     Yes            DMSK           Data Mask Register
  32     Yes            DREF           Data Reference Register
  16     Yes            HPIR           High Priority Interrupt Reg.

  **Monitor privileged writes to MF_REGS_0 use the FXREG as a temporary register.

Table 3 shows the memory map register order for the sixteen debug registers placed in the memory map. Debug registers are accessed as 32 bit values for debug, while the application may access them as 32 bit register pairs, or as sixteen bit registers when the underlying CPU architecture supports only 16 bit data. Registers fourteen and fifteen are accessible in the memory map when the MON_PRIV megamodule terminal is TRUE. These two registers are write only.
A bit in a scan accessible register also enables this mode.

TABLE 3

  Reg.*  Register Name  Read  Write  Description
  19     CMSGH          No    Yes    Cmd. Msg. Reg. High
  18     CMSGL          No    Yes    Cmd. Msg. Reg. Low
  17     DMSGH          No    Yes    Data Msg. Reg. High
  16     DMSGL          No    Yes    Data Msg. Reg. Low
  15     MF_REGS_0H     No    Mon.   Misc. Func. Reg. 0 High
  14     MF_REGS_0L     No    Mon.   Misc. Func. Reg. 0 Low
  13     Reserved       -     -      Reserved
  12     DBG_STAT       Yes   Yes    Debug Status
  11     ECNTL          Yes   Yes    External Unit Control
  10     DCNTL          Yes   Yes    Data Unit Control
  09     DREFH          Yes   Yes    Data Ref. Reg. High
  08     DREFL          Yes   Yes    Data Ref. Reg. Low
  07     DMSKH          Yes   Yes    Data Mask Reg. High
  06     DMSKL          Yes   Yes    Data Mask Reg. Low
  05     AREFH          Yes   Yes    Adrs. Ref. Reg. High
  04     AREFL          Yes   Yes    Adrs. Ref. Reg. Low
  03     AMSKH          Yes   Yes    Adrs. Mask Reg. High
  02     AMSKL          Yes   Yes    Adrs. Mask Reg. Low
  01     ACNTL          Yes   Yes    Address Unit Control
  00     HPIR           Yes   Yes    High Priority Int. Reg.

The EXSR, GPSR and JXREG registers are clocked with the test clock (TCK) and are only accessible via the extended JTAG port. The EXSR and GPSR provide the instruction and data register scan paths. The JXREG provides a holding register for the GPSR contents.
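For illustration, the memory map order of Table 3 can be mirrored in a C struct of 16-bit fields. This is a sketch only: the register names come from the table, but the contiguous padding-free layout and the base address are assumptions not stated in the text.

```c
#include <stdint.h>
#include <stddef.h>

#define DBG_REG_BASE 0x7C00u   /* hypothetical base address, device specific */

/* One possible C view of the twenty 16-bit memory map slots of Table 3,
   ordered from register 00 (HPIR) upward. */
typedef struct {
    volatile uint16_t hpir;        /* 00: High Priority Int. Reg.         */
    volatile uint16_t acntl;       /* 01: Address Unit Control            */
    volatile uint16_t amskl;       /* 02: Adrs. Mask Reg. Low             */
    volatile uint16_t amskh;       /* 03: Adrs. Mask Reg. High            */
    volatile uint16_t arefl;       /* 04: Adrs. Ref. Reg. Low             */
    volatile uint16_t arefh;       /* 05: Adrs. Ref. Reg. High            */
    volatile uint16_t dmskl;       /* 06: Data Mask Reg. Low              */
    volatile uint16_t dmskh;       /* 07: Data Mask Reg. High             */
    volatile uint16_t drefl;       /* 08: Data Ref. Reg. Low              */
    volatile uint16_t drefh;       /* 09: Data Ref. Reg. High             */
    volatile uint16_t dcntl;       /* 10: Data Unit Control               */
    volatile uint16_t ecntl;       /* 11: External Unit Control           */
    volatile uint16_t dbg_stat;    /* 12: Debug Status                    */
    volatile uint16_t reserved;    /* 13: Reserved                        */
    volatile uint16_t mf_regs_0l;  /* 14: Misc. Func. Reg. 0 Low (Mon.)   */
    volatile uint16_t mf_regs_0h;  /* 15: Misc. Func. Reg. 0 High (Mon.)  */
    volatile uint16_t dmsgl;       /* 16: Data Msg. Reg. Low, write only  */
    volatile uint16_t dmsgh;       /* 17: Data Msg. Reg. High, write only */
    volatile uint16_t cmsgl;       /* 18: Cmd. Msg. Reg. Low, write only  */
    volatile uint16_t cmsgh;       /* 19: Cmd. Msg. Reg. High, write only */
} dbg_regs_t;
```

A monitor program could then overlay the block at the device-specific base address, e.g. `volatile dbg_regs_t *dbg = (volatile dbg_regs_t *)DBG_REG_BASE;`, and access `dbg->dbg_stat` as a 16-bit value or the AREF pair as a 32-bit value.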
Data is copied from the GPSR to the JXREG any time the GPSR data needs to be transferred to another register. Since these registers are basic components of the scan path and are not tightly coupled to functional logic, they are inaccessible to the application. JTAG update states and fast download actions cause these transfers. When operating in embedded command mode, the GPSR is also moved to the JXREG for disposition.
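The ownership handshake described earlier, in which the application and debug software claim and release shared debug resources through memory mapped control registers and debug software may seize a resource outright, can be sketched as follows. The encodings and the single-byte ownership field are hypothetical, since the text does not specify the control register layout.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical ownership encodings; the actual bit layout of the
   memory mapped control registers is not given in the text. */
enum owner { OWNER_NONE = 0, OWNER_APP = 1, OWNER_DEBUG = 2 };

/* The application may only claim a resource that is currently free. */
static bool app_claim(volatile uint8_t *own_reg) {
    if (*own_reg != OWNER_NONE)
        return false;          /* owned elsewhere: negotiate instead */
    *own_reg = OWNER_APP;
    return true;
}

/* Debug software has the ability to seize any resource if necessary. */
static void debug_seize(volatile uint8_t *own_reg) {
    *own_reg = OWNER_DEBUG;
}

/* Either party releases a resource by marking it free again. */
static void release_resource(volatile uint8_t *own_reg) {
    *own_reg = OWNER_NONE;
}
```

Because the ownership field is continuously visible to both parties, the application can observe a debug seizure and fall back to negotiation through software sequences, as the text describes.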
PROBLEM TO BE SOLVED: To provide a board structure that realizes a more compact package by reducing the number of layers, or the thickness of the layers, of a board carrying high-speed signals.

SOLUTION: In an IC device 100, a board includes a first metal layer 104 having a ground plane, and a second metal layer 106. A first signal trace 124 in the second metal layer 106 is electrically connected to a first signal pad 132 in the first metal layer 104 by a first signal via 126. The second metal layer 106 may include a second signal trace, which is electrically connected to a second signal pad in the first metal layer by a second signal via. The board also includes a ground trace 120 between the first and second signal traces in the second metal layer. The ground trace 120 is electrically connected to a ground plane 130 by a ground via 122. The vias connected to these traces include self-aligned or zero-misalignment vias.

SELECTED DRAWING: Figure 1A
1. A package substrate comprising: a substrate having a first metal layer and a second metal layer; a ground plane in the first metal layer; a first signal trace in the second metal layer, the first signal trace electrically coupled by a first signal via to a first signal pad in the first metal layer, the first signal via having a width substantially similar to the width of the first signal trace; a second signal trace in the second metal layer, the second signal trace electrically coupled by a second signal via to a second signal pad in the first metal layer, the second signal via having a width substantially similar to the width of the second signal trace; and a ground trace between the first signal trace and the second signal trace in the second metal layer, wherein the ground trace is electrically coupled to the ground plane by a ground via, the ground via having a width substantially similar to the width of the ground trace.

2. The package substrate of claim 1, wherein the ground trace is a first ground trace electrically coupled to the ground plane by a first ground via, the package substrate further comprising a second ground trace in the second metal layer, the second ground trace electrically coupled to the ground plane by a second ground via, the second ground via having a width substantially similar to the width of the second ground trace, the first signal trace being between the first ground trace and the second ground trace.

3. The package substrate of claim 2, wherein the first ground trace is electrically connected to the second ground trace by the ground plane.

4. The package substrate of claim 3, wherein the ground plane has patterned metal lines electrically coupled to the first ground via and the second ground via.

5. The package substrate of claim 3, wherein the ground plane comprises a ground plane in the first metal layer extending over a region of the first metal layer overlying the first signal trace.

6. The package substrate of claim 5, wherein the package substrate includes two adjacent signal traces in the second metal layer, the two signal traces defining a differential pair of signal traces, and wherein the ground plane has a gap in a region of the first metal layer above the two adjacent signal traces.

7. The package substrate of claim 1, wherein the ground plane is a first ground plane, the package substrate further comprises a third metal layer, the third metal layer has a second ground plane, the second metal layer is between the first metal layer and the third metal layer, and the second ground plane is electrically connected to the ground trace by the first ground plane of the first metal layer.

8. The package substrate of claim 7, wherein the second ground plane is electrically coupled to the first ground plane by vias through the second metal layer.

9. The package substrate of any one of claims 1-8, wherein the ground via comprises one of a self-aligned via or a zero-misalignment via.

10. The package substrate of any one of claims 1-8, wherein the first signal via and the second signal via comprise one of self-aligned vias or zero-misalignment vias.

11. The package substrate of any one of claims 1-8, comprising a plurality of signal traces in the second metal layer and a plurality of ground traces in the second metal layer, wherein the number of signal traces equals the number of ground traces.

12. The package substrate of any one of claims 1-8, wherein the ground plane has a thickness between 10-15 μm.

13. The package substrate of any one of claims 1-8, wherein the ground plane has a thickness of less than 6 μm.

14. The package substrate of any one of claims 1-8, wherein the ground plane comprises copper.

15. The package substrate of any one of claims 1-8, further comprising: a signal solder bump electrically coupled to the first signal pad; a ground pad of the first metal layer electrically coupled to the ground plane; and a ground solder bump electrically coupled to the ground pad.

16. The package substrate of claim 15, wherein the first signal pad is a first level interconnect (FLI).

17. The package substrate of claim 16, wherein the FLI has a copper thickness between 1.4 μm and 1.6 μm.

18. The package substrate of claim 1, wherein the first signal trace and the second signal trace are high speed input/output traces.

19. The package substrate of any one of claims 1-8, wherein the package substrate comprises a die edge, and wherein the ground plane has surface metal on the first metal layer extending to the die edge.

20. A method comprising: forming a substrate ground plane in a third metal layer of a substrate; forming a plurality of traces having a predetermined trace width in a second metal layer of the substrate; forming signal vias on a first subset of the plurality of traces, wherein forming the signal vias comprises forming the signal vias with a width substantially similar to the predetermined trace width, and wherein the first subset of traces comprises a plurality of alternating traces; forming ground vias on a second subset of the plurality of traces, the second subset being different from the first subset of traces, wherein forming the ground vias comprises forming the ground vias with a width substantially similar to the predetermined trace width, and wherein the second subset of traces comprises a plurality of alternating traces; and forming a surface ground plane in a first metal layer, the surface ground plane of the first metal layer being electrically connected to at least one ground trace by the ground vias.

21. The method of claim 20, further comprising forming a signal pad in the first metal layer, the signal pad electrically connected to at least one signal trace by the signal vias.

22. The method of claim 20 or claim 21, further comprising forming a substrate ground via in the second metal layer, the substrate ground via being electrically connected to the substrate ground plane and the surface ground plane.

23. The method of claim 20 or claim 21, wherein forming the surface ground plane comprises an additive process to form a patterned metal layer on the first metal layer of the package substrate.

24. A computing device comprising: a processor mounted on a substrate; a communication logic unit within the processor; and a memory within the processor, wherein the substrate comprises: a first metal layer and a second metal layer; a ground plane in the first metal layer; a first signal trace in the second metal layer, the first signal trace electrically coupled by a first signal via to a first signal pad in the first metal layer, the first signal via having a width substantially similar to the width of the first signal trace; a second signal trace in the second metal layer, the second signal trace electrically coupled by a second signal via to a second signal pad in the first metal layer, the second signal via having a width substantially similar to the width of the second signal trace; and a ground trace between the first signal trace and the second signal trace in the second metal layer, the ground trace electrically coupled to the ground plane by a ground via, the ground via having a width substantially similar to the width of the ground trace.

25. The computing device of claim 24, wherein the ground trace is a first ground trace, the ground via is a first ground via, and the substrate further comprises: a second ground trace in the second metal layer, the first signal trace being between the first ground trace and the second ground trace, the second ground trace electrically coupled to the ground plane by a second ground via, the second ground via having a width substantially similar to the width of the second ground trace; and a third ground trace in the second metal layer, the second signal trace being between the first ground trace and the third ground trace, the third ground trace electrically coupled to the ground plane by a third ground via, the third ground via having a width substantially similar to the width of the third ground trace.
For semiconductor products, packaging dimensions can contribute to overall device size. Packaging dimensions for mobile devices can facilitate overall form factor reduction. Packaging dimensions can also limit product performance due to constraints in board layout and density.

FIG. 1A is a schematic diagram representing a cross-sectional view of a package substrate including a via architecture according to some embodiments of the present disclosure. FIG. 1B is a schematic diagram representing a cross-sectional view of another exemplary package substrate including a via architecture in accordance with some embodiments of the present disclosure. FIG. 2 is a schematic diagram representing a top view of a package substrate including a via architecture and a ground plane according to some embodiments of the present disclosure. FIG. 3 is a schematic diagram representing a perspective cutaway view of an exemplary package substrate in accordance with some embodiments of the present disclosure. FIGS. 4, 5, and 6 are schematic diagrams representing perspective cutaway views of other exemplary package substrates in accordance with some embodiments of the present disclosure. FIG. 7 is a process flow diagram for forming a package substrate including self-aligned or zero-misalignment vias and top metal layers according to some embodiments of the present disclosure. A further figure is a schematic diagram of a computing device according to some embodiments of the disclosure. These drawings may not be drawn to scale.

Via configurations with surface ground plane designs to increase routing trace density are described herein.
In the following description, various aspects of exemplary implementations will be described using terminology commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the present disclosure may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of these example implementations. However, it will be apparent to those skilled in the art that the present disclosure may be practiced without these specific details. In other instances, well-known features have been omitted or simplified so as not to obscure these example implementations.

Various operations will be described as multiple discrete operations in sequence, in a manner that is most useful for understanding the present disclosure. However, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order presented.

As used herein, the terms "above," "below," "between," and "on" refer to the position of one material layer or component relative to another layer or component. For example, a layer positioned above or below another layer may be in direct contact with the other layer or may have one or more intervening layers. Additionally, a layer disposed between two layers may be in direct contact with those two layers or may have one or more intervening layers. In contrast, a first layer "on" a second layer is in direct contact with that second layer. Similarly, unless expressly stated otherwise, a structure located between two structures may be in direct contact with the adjacent structures, or may have one or more intervening structures or layers.

Package z-height is a differentiating factor for modern semiconductor products, especially in the mobile domain. Two of the limiting factors for z-height reduction are power supply and input/output (I/O) routing.

The present disclosure describes an architecture for the upper package substrate layers that uses self-aligned vias (SAVs) or zero-misalignment vias (ZMVs) and patterned top metal layers to reduce z-height through a reduced number of layers and/or reduced layer thickness. The via configurations can also increase I/O routing density, which can further reduce thickness. The use of lithographically defined SAVs or ZMVs together with dense ground planes allows single-ended or differential pair high speed I/O to be routed on a single metal layer instead of across multiple metal layers. This via configuration facilitates a 1:1 ground to signal trace ratio in the routing layer and allows a wide range of impedances to be closely matched. For example, increasing the distance to the top ground plane can change the impedance, whereas changing the distance to the bottom ground plane does not change the impedance much. The via configurations described herein also reduce crosstalk between adjacent signal traces.

FIG. 1A is a schematic diagram representing a cross-sectional view of a package substrate 100 including a via architecture according to embodiments of the present disclosure. Package substrate 100 includes substrate 102. The package substrate may include multiple metallization interconnect layers for the integrated circuit, formed as alternating metal and dielectric layers. Among the multiple metal layers, some may form ground planes or power planes, and others may be used for signal traces.

Substrate 102 includes metallization interconnect layers for integrated circuits.
According to aspects of the present disclosure, the number of metal layers can be reduced (e.g., by metal layer pairs such as top and bottom metal layers). In FIG. 1A, substrate 102 includes three metal layers, M1 104, M2 106, and M3 108, each separated by a dielectric layer. In at least some embodiments, substrate 102 includes interconnects, e.g., vias, configured to connect metallization layers M1 104, M2 106, and M3 108.

The M3 metal layer 108 is typically formed first. Here, the M3 metal layer generally includes the M3 ground plane 110. The M3 ground plane 110 may be interconnected to layers above by vias 112. The M3 metal layer 108 also includes power routing lines 118 and corresponding vias. M3 ground plane 110 may also be coupled to M2 metal layer 106 by ground vias 114. In the M2 metal layer, ground pads 116 may electrically couple M2 ground traces (e.g., ground trace 120) to M3 ground plane 110.

The M2 metal layer 106 generally includes high speed I/O signal traces (e.g., signal trace 124) and ground traces (e.g., ground trace 120). Signal trace 124 is electrically coupled to M1 signal pad 132 by SAV or ZMV 126. Similarly, ground trace 120 may be electrically coupled to M1 metal layer ground plane 130 by SAV or ZMV 122. The M2 metal layer also includes other vias and interconnects, such as M2 ground landing pads 128 and M2 power landing pads 129.

The upper metal layer or M1 metal layer 104 may include first level interconnect (FLI) pads, such as signal pads 132 and power interconnect pads 134. The M1 metal layer 104 may also include surface metal that can serve as the M1 ground plane 130. Solder bumps 144a-144c may be used to interconnect various circuit elements to other chips. M1 metal layer 104 may also include solder resist 142.

The M1 metal layer 104 is coupled to the M2 metal layer 106 by SAVs or ZMVs coupled to traces of the M2 metal layer 106. SAV or ZMV 126 connects signal trace 124 to signal bump 144b.
Ground trace 120 is coupled to M1 ground plane 130 by SAV or ZMV 122. Some ground traces of M2 metal layer 106 may be coupled to ground pads 116 of M2 metal layer 106. These ground traces are coupled to the M1 metal layer ground plane 130 by the M3 ground plane 110. The M1 metal layer ground plane 130 is connected to the ground bump 144a on the die as well as to the M3 ground plane 110 in the substrate. This helps adjust impedance, ties all ground lines to the same potential, reduces crosstalk, and enables high speed input/output (HSIO) SAV/ZMV I/Os to reach optimum performance.

FIG. 1B is a schematic diagram representing a cross-sectional view of another exemplary package substrate 150 including a via architecture according to embodiments of the present disclosure. The via architecture of package substrate 150 is similar to that shown in FIG. 1A. The surface metal of the M1 ground plane 130 in the embodiment illustrated by FIG. 1A can be a standard thickness metal layer (typically 10-15 μm thick). However, if such metal thickness is not required (e.g., all I/O, whether fast or slow, can be routed on M2), the top metal can function only as first level interconnect (FLI) pads for the die. For example, pad 162 can function as a signal pad while pad 164 can function as a power pad. The M1 ground plane 160 can replace the thicker M1 metal layer ground plane 130 shown in FIG. 1A. The FLI pads can be made as thin as possible given the FLI requirements. A copper thickness of only 1.5 μm is sufficient for an effective ground plane to facilitate signal transmission. To have a stable FLI, this copper (Cu) layer can be followed by a barrier layer of nickel (Ni), followed by thin layers of palladium (Pd) and gold (Au). The total thickness can be 5 μm or less, which can further reduce the thickness of the package.

A thin metal layer for the top M1 ground plane 160 has a thickness between 2-6 μm and may be formed of copper.
Other metals typical for surface finishes may also be used depending on the application.

SAVs and ZMVs do not require large pads for landing. As such, trace density can be increased, and the traces can be formed in a single metal layer (e.g., M2 106). Since these traces are on a single metal layer, ground traces can be formed between each signal trace (except for differential pairs). For example, ground trace 120 exists between signal trace 156 and signal trace 152. Signal trace 152 exists between ground traces 120 and 154. To provide a ground connection, these ground traces may be connected to a top/surface ground layer (M1 metal layer ground plane 130) by an underlying ground plane (e.g., M3 ground plane 110).

In general, the via architecture described herein reduces the z-height of the package substrate and reduces near-end and far-end crosstalk. Greater I/O density can be achieved by using SAVs or ZMVs with the goal of routing all critical HSIO lines on a single layer. This can be accomplished without changing design rules, for example without requiring new or state-of-the-art patterning equipment.

One of the few limitations of increasing the number of I/O lines on a single metal layer is that crosstalk can increase as the signaling lines get closer together. The increased line density of the via architecture described herein allows ground lines to be placed on either side of every signal line, and on both sides of differential pair lines, which meets impedance targets and improves far-end and near-end crosstalk.

These ground lines should have the same potential to meet impedance requirements and improve signal transmission. Because there is no alignment margin for downward-facing vias (those that are not SAVs or ZMVs), the thin metal layer/surface finish of the M1 ground plane 160 used for FLI attachment is used to connect all ground lines. Ground connectivity is achieved by vias leading down to the package substrate GND layer (e.g., through M3 ground plane 110) wherever alignment margins allow this.
This is illustrated by ground pad 116 and corresponding ground via 114 in FIGS. 1A-1B.

Electrical signals, such as power and/or input/output (I/O) signals, may be routed to and/or from the device through one or more metal (interconnect) layers. One or more interconnect layers M1 104, M2 106, and M3 108 may form a metallization stack (also called an "interlayer dielectric stack") of the package substrate.

Routing traces (e.g., signal traces 124 and ground traces 120) may be placed within the M2 metal layer 106 to route electrical signals according to a wide variety of designs. In some embodiments, these routing traces may include traces filled with an electrically conductive material such as metal.

These interconnect layers may include a dielectric material disposed between multiple interconnect structures. For example, M2 metal layer 106 may include dielectric material 158 between these traces and other M2 metal layer structures. In some embodiments, the dielectric material 158 disposed between interconnect structures in different ones of interconnect layers M1 104, M2 106, and M3 108 may have different compositions. In other embodiments, the composition of dielectric material 158 between different interconnect layers M1 104, M2 106, and M3 108 may be the same.

Integrated circuit (IC) device 100 may include a solder resist material 142 (e.g., polyimide or similar material) and one or more conductive contacts 131, 132 and 134 formed in M1 metal layer 104. In FIG. 1A, conductive contacts 131, 132 and 134 are illustrated as taking the form of bond pads. Conductive contacts 132 may be electrically coupled to SAV/ZMV vias 126 and configured to route electrical signals using traces 124 of M2 metal layer 106.
Similarly, conductive contact 131 may be electrically coupled with SAV/ZMV via 122 and configured as a ground line routed by ground trace 120.

Solder bumps 144a may be formed on ground conductive contacts 131 to mechanically and/or electrically couple a package containing IC package 100 with another component (e.g., a circuit board). IC package 100 may include additional or alternative structures for routing electrical signals from metal layers 104-108. For example, conductive contacts may include other similar structures (e.g., posts) that route electrical signals to external components.

FIG. 2 is a schematic diagram representing a top view 200 of package substrate 100 including via architecture and ground planes according to an embodiment of the present disclosure. FIG. 2 illustrates a package substrate 202 including bump fields with solder bump landing pads (e.g., signal pads 204a-b, and ground pads 216). An M1 ground plane 206 is illustrated. Ground traces and signal traces are also illustrated; it is understood that these traces are on the M2 metal layer and are shown for illustration purposes. SAVs and ZMVs are also illustrated and, similarly, are understood to be in the M2 metal layer.

For example, FIG. 2 illustrates signal trace 208a routed to solder bump signal pad 204a and connected by SAV/ZMV 210a, and signal trace 208b routed to solder bump 204b and connected by SAV/ZMV 210b. These signal traces are on the M2 metal layer and are presented for illustrative purposes.

FIG. 2 also illustrates ground traces between each signal trace. For example, signal trace 208a is adjacent to ground traces coupled to ground SAV/ZMV 214a and SAV/ZMV 214b. Signal trace 208a is shown bending between adjacent ground traces to reach signal pad 204a. These ground traces may extend as far as possible before terminating.

FIG. 2 also illustrates ground trace routing. A ground pad 216 may be electrically connected to the M1 ground plane 206.
Ground pad 216 may be connected to an M2 ground trace by ground SAV/ZMV 214a (ground trace not shown). M1 ground plane 206 may be patterned to connect to ground pad 216. FIG. 2 shows a patterned M1 ground line 212a coupling ground pad 216 to M1 ground plane 206, illustrating how the M1 ground plane can be patterned to accommodate the increased density of signal lines. Another example is shown as ground SAV/ZMV 214c, which is coupled by a patterned M1 metal line 212c to the M1 ground plane 206 at location 218. Signal line 208b is on a lower layer (e.g., the M2 layer) than patterned M1 metal line 212c, highlighting the ability to couple M2 layer ground traces to a common ground using a SAV/ZMV configuration.

FIGS. 3-6 illustrate various embodiments of M1 metal layer ground plane configurations. Each embodiment facilitates reducing near-end and far-end crosstalk. It is understood that FIGS. 3-6 illustrate example configurations and are not limiting; other ground plane configurations can also be used to achieve similar results. FIGS. 3-6 further illustrate how each signal trace, with the exception of the differential pair traces (shown in FIG. 6), is adjacent to a ground trace.

FIG. 3 is a schematic diagram representing a perspective cutaway view of an exemplary package substrate 300 in accordance with an embodiment of the present disclosure. Package substrate 300 includes small patches of metal on surface 302 that are connected to ground traces by SAVs/ZMVs. The surface metallization configuration of FIG. 3 uses minimal surface finish to connect the ground traces of the routing layer (M2 metal layer) to the main ground structure and ground bumps 304.

For example, patterned ground line 320a may electrically couple ground trace 306a with ground trace 306b. Similarly, patterned ground line 320b may electrically couple ground trace 306c with ground traces 306d and 306e.
Surface ground plane patches 322a and 322b may be coupled to the M3 ground plane (not shown) by vias.

FIG. 4 is a schematic diagram representing a perspective cutaway view of another exemplary package substrate 400 in accordance with an embodiment of the present disclosure. A surface ground plane 404 resides on the package surface 402 and extends from the location of the die level ground bump field 408 to the edge of the die 410. This surface ground plane 404 also has slots 414 (i.e., openings) above the differential pair signal lines 412. Surface ground plane 404 also includes pads 406 for connecting surface ground plane 404 to an M3 metal ground plane (not shown).

FIG. 5 is a schematic diagram representing a perspective cutaway view of another exemplary package substrate 500 in accordance with an embodiment of the present disclosure. The package substrate 500 is similar to the package substrate 400. Package substrate 500 includes a larger surface ground plane 502 that does not include slots for differential pair traces. Surface ground plane 502 extends from bump field 508 along the entire length of the traces, to the point where a single trace is connected to a via down to the second layer interconnect field. The surface ground plane 502 also includes pads 504 for connecting the surface ground plane 502 to an M3 metal ground plane (not shown).

FIG. 6 is a schematic diagram representing a perspective cutaway view of another exemplary package substrate 600 in accordance with an embodiment of the present disclosure. Package substrate 600 may be considered a combination of the surface ground plane configurations illustrated in FIGS. 4 and 5. A surface ground plane 602 extends from the bump field 608 to the end of the routing. Surface ground plane 602 includes slots 610 above differential pair signal traces 612. The surface ground plane 602 also includes pads 604 for connecting the surface ground plane 602 to an M3 metal ground plane (not shown).

FIG.
7 is a process flow diagram 700 for forming a package substrate including self-aligned vias or ZMVs and a top metal layer according to embodiments of the present disclosure. A core metal material may be provided (702). This core metal material can be patterned to form an M3 metal layer structure, such as an M3 ground plane (704). The core metal material can be further processed to form the M2 metal layer structure. For example, M2 metal layer routing traces may be patterned and formed (706). M2 metal layer SAVs and/or ZMVs may be patterned and formed (708). Formation of SAVs and ZMVs may be performed by known techniques, such as those used for patterning and forming routing traces. Forming SAVs or ZMVs can result in vias having widths substantially similar to the widths of the traces to which they are connected. The length of the SAV or ZMV can be varied to suit connectivity and trace routing. The via z-height may be controlled based on the desired overall M2 metal layer z-height and/or the overall package z-height.

As an example, a zero misalignment via (ZMV) formation process can use a dual-tone photoresist containing two layers of photomasks. The photomask is rigid and substantially flat and can be formed using known techniques that are more accurate than standard via-pad alignment techniques. Therefore, via-pad misalignment can be reduced. This allows the pad dimensions to be reduced to the same or similar dimensions as the via dimensions. In some exemplary cases, the use of ZMVs can facilitate I/O connection densities greater than 20 I/O/mm/layer, including densities of 50-80 I/O/mm/layer, densities as high as 100-250 I/O/mm/layer, and higher.

Similarly, masks can be used to form self-aligned vias (SAVs). Self-aligned vias can be formed using known techniques. For example, a SAV can be made by forming an Mx+1 layer over an Mx layer trace (and an insulating layer).
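The connection-density figures above follow directly from the achievable trace pitch once vias no longer require oversized pads. As a rough arithmetic sketch (the line/space values are assumptions for illustration, not taken from the disclosure):

```python
# Rough I/O density estimate for a single routing layer in which signal
# and ground traces alternate, so only every other trace carries a
# signal I/O. Trace width and spacing values are illustrative.

def io_density_per_mm(trace_width_um, spacing_um, ground_fraction=0.5):
    """Signal I/O per mm per layer for a given trace width and spacing."""
    pitch_um = trace_width_um + spacing_um
    traces_per_mm = 1000.0 / pitch_um
    return traces_per_mm * (1.0 - ground_fraction)

# A 5 um line / 5 um space process with alternating ground traces:
print(io_density_per_mm(5, 5))    # 50.0 signal I/O per mm per layer
# A finer 2 um line / 2 um space process:
print(io_density_per_mm(2, 2))    # 125.0 signal I/O per mm per layer
```

Densities on the order of those cited in the text thus correspond to single-digit-micron line/space routing, which is what the small via-pad footprint of SAV/ZMV structures makes practical.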
The Mx+1 layer can be patterned using a hard mask or via mask to form trenches exposing the Mx metal layer traces. SAV metal (e.g., copper) can be deposited on the traces within the trenches using known metal deposition techniques. The resulting vias (i.e., SAVs) can have the same or similar widths as the underlying traces. The length and height of the SAV can be controlled based on implementation preferences.

An M1 metal layer (e.g., M1 ground plane) may be patterned and formed (710). The patterning and formation of the top metal layer M1 can be realized using substrate semi-additive fabrication (including seed layer deposition, lithography, plating, resist removal, and seed layer etching) or using subtractive or additive processing techniques. One advantage of additive processing is that the process flow is simplified by combining deposition and patterning into one step, instead of requiring the multiple steps used in conventional semi-additive processing. Therefore, the M1 metal layer ground plane with patches and slots can be made in a single step.

Some examples of additive processing include:

1. Cold spray: A powder of the conductive material to be deposited is accelerated to high speed through a nozzle and forms a mechanical bond upon impact with the substrate surface. Patterning can be achieved by controlling the size and movement of the nozzle and/or by spraying the powder through a microstructured shadow mask. This approach can produce highly conductive films, due in part to the absence of organic binders or solvents. It also allows the substrate to be kept at room temperature during spraying, thus reducing oxidation.

2. Inkjet printing: A conductive ink is printed directly onto a substrate (e.g., using an aerosol jet printer) and then cured or sintered to remove solvent. This approach can produce very thin films and structures of small dimensions (e.g., line widths of ~12 μm have been demonstrated using aerosol jet printers).

3. Stencil printing of conductive paste.

4. Laser-assisted selective electroless plating: Areas to be patterned with a conductive layer are first functionalized using a self-assembled monolayer and laser exposure; electroless plating then occurs only at the functionalized areas.

The package substrate may then undergo solder resist patterning, surface finishing, and solder bump formation (712).

The use of a zero misalignment via-pad structure or a self-aligned via-pad structure, as described herein, increases the achievable density, such as the number of input/output connections/mm/layer, while substantially reducing the size of vias and pads. Aspects of these embodiments have advantages such as reduced manufacturing costs, reduced z-height, and improved electrical performance for off-package I/O connections. Embodiments that provide self-aligned or zero-misalignment via-pad structures as described herein advantageously benefit 2.5D packaging, e.g., joint packaging of at least two central processing units (CPUs), memory, and a graphics processing unit (GPU), as well as die splitting, quasi-monolithic integration, and other 2.5D packaging techniques. These embodiments may facilitate reduced manufacturing costs, reduced package z-height, improved electrical performance, and increased scalability.

FIG. 8 is a schematic diagram illustrating a computing device in accordance with embodiments of the present disclosure. Computing device 800 may include a processor as well as memory and communication circuits. The processor and other circuitry may be supported by a package substrate that includes a substrate. The substrate can contain routing traces on a single metal layer (e.g., the M2 metal layer) by using self-aligned vias or ZMVs as well as surface ground planes (e.g., an M1 metal layer ground plane). These routing traces may alternate signal traces and ground traces. This increases trace density while also providing a ground shield against crosstalk between signal traces.

Computing device 800, illustrated in FIG.
8, according to an embodiment of the disclosure, may include a number of components. In one embodiment, these components are attached to one or more motherboards. In alternative embodiments, some or all of these components are fabricated on a single system-on-chip (SoC) die. The components in computing device 800 include, but are not limited to, an integrated circuit chip 802 and at least one communication logic unit 808. In some implementations, the communication logic unit 808 is fabricated within the integrated circuit chip 802, while in other implementations the communication logic unit 808 is fabricated on a separate integrated circuit chip that may be bonded to a substrate or motherboard that is electronically coupled to the integrated circuit chip 802. The integrated circuit chip 802 may include a CPU 804 and on-die memory 806, often used as cache memory, which may be provided by technologies such as embedded DRAM (eDRAM) or spin transfer torque memory (STTM or STT-MRAM).

Computing device 800 may include other components that may or may not be physically and electrically coupled to the motherboard or fabricated within the SoC die. These other components include, but are not limited to, volatile memory 810 (e.g., DRAM), non-volatile memory 812 (e.g., ROM or flash memory), a GPU, a digital signal processor 816, a cryptographic processor 842 (hardware),
chipset 820, antenna 822, display or touchscreen display 824, touchscreen controller 826, battery 828 or other power source, power amplifier (not shown), voltage regulator (not shown), global positioning system (GPS) device 830, compass, motion coprocessor or sensors 832 (which may include an accelerometer, gyroscope, and compass), speakers 834, camera 836, user input devices 838 (such as a keyboard, mouse, stylus, and touch pad), and mass storage devices 840 (such as hard disk drives, compact discs (CDs), digital versatile discs (DVDs), and so forth).

Communication logic unit 808 enables wireless communication for data transfer to and from computing device 800. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc. that can communicate data using modulated electromagnetic radiation through a non-solid medium. Although in some embodiments the associated devices may not include wires, the term does not imply that these devices are completely wire-free. Communication logic unit 808 may implement any of a number of wireless standards or protocols, including, but not limited to, WiFi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as 3G, 4G, 5G, and any other wireless protocols designated beyond these. Computing device 800 may include multiple communication logic units 808.
For example, a first communication logic unit 808 may be dedicated to shorter range wireless communication such as WiFi and Bluetooth, and a second communication logic unit 808 may be dedicated to longer range wireless communication such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

In various embodiments, computing device 800 may be a laptop computer, netbook computer, notebook computer, ultrabook computer, smart phone, tablet, personal digital assistant (PDA), ultramobile PC, cell phone, desktop computer, server, printer, scanner, monitor, set-top box, entertainment control unit, digital camera, portable music player, or digital video recorder. In further implementations, computing device 800 may be any other electronic device that processes data.

It is understood that the subject matter of this description is not necessarily limited to the specific applications illustrated in FIGS. 1-8. This subject matter may be applied to other microelectronic device and assembly applications, as well as any suitable heat removal application, as will be appreciated by those skilled in the art.

The above description of example implementations illustrated in this disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed. Although specific implementations of, and examples for, the disclosure are described herein for purposes of illustration, various equivalent modifications within the scope of the disclosure are possible, as those skilled in the art will recognize. These changes may be made to the disclosure in light of the above detailed description.
The terms used in the following claims should not be construed as limiting the disclosure to the particular implementations disclosed in the specification and claims. The following paragraphs provide examples of various of the embodiments disclosed herein.

Example 1 is a package substrate including: a substrate including a first metal layer and a second metal layer; a ground plane present in the first metal layer; a first signal trace present in the second metal layer and electrically coupled by a first signal via to a first signal pad present in the first metal layer, the first signal via having a width substantially similar to the width of the first signal trace; a second signal trace present in the second metal layer and electrically coupled by a second signal via to a second signal pad present in the first metal layer, the second signal via having a width substantially similar to the width of the second signal trace; and a ground trace present between the first signal trace and the second signal trace in the second metal layer and electrically coupled to the ground plane by a ground via, the ground via having a width substantially similar to the width of the ground trace.

Example 2 may include the subject matter of Example 1, wherein the ground trace is a first ground trace electrically coupled to the ground plane by a first ground via, and the package substrate further includes a second ground trace present in the second metal layer and electrically coupled to the ground plane by a second ground via, the second ground via having a width substantially similar to the width of the second ground trace, wherein the first signal trace is between the first ground trace and the second ground trace.

Example 3 may include the subject matter of Example 2, wherein the first ground trace is electrically connected to the second ground trace by the ground plane.

Example 4 may include the subject matter of Example 3, wherein the ground plane includes patterned metal lines electrically coupled to the first ground via and the second ground via.

Example 5 may include the subject matter of Example 3, wherein the ground plane includes a ground plane in the first metal layer that extends over a region of the first metal layer overlying the first signal trace.

Example 6 may include the subject matter of Example 5, wherein the package substrate includes two adjacent signal traces in the second metal layer, the two signal traces defining a differential pair of signal traces, and the ground plane includes a gap in the region of the first metal layer above the differential pair of signal traces.

Example 7 may include the subject matter of any of Examples 1-6, wherein the ground plane is a first ground plane, the package substrate further includes a third metal layer, the third metal layer includes a second ground plane, the second metal layer is between the first metal layer and the third metal layer, and the second ground plane is electrically connected to the ground trace by the first ground plane of the first metal layer.

Example 8 may include the subject matter of Example 7, wherein the second ground plane is electrically coupled to the first ground plane by a via through the second metal layer.

Example 9 may include the subject matter of any of Examples 1-8, wherein the ground via includes one of a self-aligned via or a zero-misalignment via.

Example 10 may include the subject matter of any of Examples 1-9, wherein the first signal via and the second signal via include one of self-aligned vias or zero-misalignment vias.

Example 11 may include the subject matter of any of Examples 1-10, wherein the package substrate includes a plurality of signal traces in the second metal layer and a plurality of ground traces in the second metal layer, and the number of signal traces is equal to the number of ground traces.

Example 12 may include the subject matter of any of Examples 1-11, wherein the ground plane includes a thickness of between 10-15 μm.

Example 13 may include the subject matter of any of Examples 1-12, wherein the ground plane includes a thickness of less than 6 μm.

Example 14 may include the subject matter of any of Examples 1-13, wherein the ground plane includes copper.

Example 15 may include the subject matter of any of Examples 1-14, further including a signal solder bump electrically coupled to the first signal pad, a ground pad of the first metal layer electrically coupled to the ground plane, and a ground solder bump electrically coupled to the ground pad.

Example 16 may include the subject matter of Example 15, wherein the first signal pad is a first level interconnect (FLI).

Example 17 may include the subject matter of Example 16, the FLI comprising copper with a thickness between 1.4 μm and 1.6 μm.

Example 18 may include the subject matter of any of Examples 1-17, wherein the first signal trace and the second signal trace are high speed I/O traces.

Example 19 may include the subject matter of any of Examples 1-18, wherein the package substrate includes a die edge and the ground plane includes surface metal on the first metal layer extending to the die edge.

Example 20 is a method including: forming a substrate ground plane in a third metal layer of a substrate; forming a plurality of traces having a predetermined trace width in a second metal layer of the substrate; forming signal vias on a first subset of the plurality of traces, wherein forming the signal vias includes forming the signal vias with a width substantially similar to the predetermined trace width, and the first subset of traces includes a plurality of alternating traces; forming ground vias on a second subset of the plurality of traces, the second subset being different from the first subset of traces, wherein forming the ground vias includes forming the ground vias with a width substantially similar to the predetermined trace width, and the second subset of traces includes a plurality of alternating traces; and forming a surface ground plane in a first metal layer, the surface ground plane of the first metal layer being electrically connected to at least one ground trace by the ground vias.

Example 21 may include the subject matter of Example 20, and may also include forming a signal pad in the first metal layer, the signal pad electrically connected to at least one signal trace by a signal via.

Example 22 may include the subject matter of any of Examples 20-21, and may further include forming a substrate ground via in the second metal layer, the substrate ground via electrically connecting the substrate ground plane and the surface ground plane.

Example 23 may include the subject matter of any of Examples 20-22, wherein forming the surface ground plane includes additive processing to form a patterned metal layer on the first metal layer of the package substrate.

Example 24 may include the subject matter of Example 23, wherein the additive processing includes one or more of cold spray, inkjet printing, stencil printing of conductive paste, or laser-assisted selective electroless plating.

Example 25 is a computing device that includes a processor attached to a substrate, a communication logic unit within the processor, and memory within the processor. The substrate includes: a first metal layer and a second metal layer; a ground plane present in the first metal layer; a first signal trace present in the second metal layer and electrically coupled by a first signal via to a first signal pad present in the first metal layer, the first signal via having a width substantially similar to the width of the first signal trace; a second signal trace present in the second metal layer and electrically coupled by a second signal via to a second signal pad present in the first metal layer, the second signal via having a width substantially similar to the width of the second signal trace; and a ground trace present between the first signal trace and the second signal trace in the second metal layer and electrically coupled to the ground plane by a ground via, the ground via having a width substantially similar to the width of the ground trace.

Example 26 may include the subject matter of Example 25, wherein the ground trace is a first ground trace and the ground via is a first ground via. The substrate may further include: a second ground trace present in the second metal layer, the first signal trace being between the first ground trace and the second ground trace, the second ground trace electrically coupled to the ground plane by a second ground via having a width substantially similar to the width of the second ground trace; and a third ground trace present in the second metal layer, the second signal trace being between the first ground trace and the third ground trace, the third ground trace electrically coupled to the ground plane by a third ground via having a width substantially similar to the width of the third ground trace.
An improved electrical interconnect for an integrated circuit and methods for providing the same are disclosed. The electrical interconnect includes an air bridge extending through a gaseous medium so as to reduce the capacitance of the interconnect. The air bridge is supported at first and second ends such that it is suspended above the substrate. The air bridge comprises a highly conductive material, such as silver, so as to provide the air bridge with a reduced resistivity. To inhibit the gaseous medium from contaminating the air bridge, the air bridge further comprises an adherent coating interposed between the air bridge and the gaseous medium. A method of forming the electrical interconnect is also disclosed, wherein, prior to forming the adherent coating, the conductive material is processed so as to form fewer grain boundaries, which enhances the electrical properties of the air bridge.
What is claimed is:

1. An integrated circuit device comprising: a semiconductor substrate; at least two circuit components formed on the semiconductor substrate and spaced distally apart; and at least one bridge structure laterally extending between the at least two circuit components in a suspended manner above the semiconductor substrate so as to electrically interconnect the at least two circuit components, wherein the at least one bridge structure is disposed adjacent a gaseous medium so as to reduce the capacitance of the at least one bridge structure, and wherein the at least one bridge structure comprises a reduced grain boundary component that is processed so as to improve the electrical properties of the at least one bridge structure.

2. The device of claim 1, wherein the at least one bridge structure comprises a conductive material.

3. The device of claim 2, wherein the conductive material includes copper.

4. The device of claim 2, wherein the conductive material includes silver.

5. The device of claim 2, wherein the conductive material includes gold.

6. The device of claim 2, wherein the conductive material comprises a resistivity less than that of aluminum.

7. The device of claim 2, wherein the conductive material comprises a ratio of modulus of elasticity over mass density (E/ρ) that is greater than that of gold.

8. The device of claim 1, wherein the at least one bridge structure is coated with an insulating material so as to improve the environmental degradation resistance of the at least one bridge structure.

9. The device of claim 8, wherein the insulating material includes at least one of titanium, zirconium, hafnium, chromium, and vanadium.

10. The device of claim 1, wherein the gaseous medium comprises air.

11. The device of claim 1, wherein the gaseous medium comprises a non-conductive fluid.

12. The device of claim 11, wherein the non-conductive fluid includes a non-conductive gas.

13. The device of claim 12, wherein the non-conductive gas includes carbon dioxide.

14. The device of claim 1, wherein the gaseous medium comprises an insulating material.

15. The device of claim 14, wherein the insulating material is at least one of a polymer, a foamed polymer, a polyimide, a foamed polyimide, an inorganic material, and a porous inorganic material.
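Claim 7's criterion, a modulus-to-density ratio E/ρ greater than that of gold, matters for a suspended bridge because a stiffer, lighter conductor sags less under its own weight. A quick comparison using approximate room-temperature handbook values (exact figures vary by source and temper; this is an illustration, not part of the disclosure) suggests both copper and silver satisfy it:

```python
# Specific stiffness E/rho for candidate air-bridge conductors.
# Approximate room-temperature handbook values (GPa and g/cm^3);
# exact numbers vary by source, alloy, and temper -- illustrative only.

ELASTIC_MODULUS_GPA = {"gold": 79.0, "silver": 83.0,
                       "copper": 117.0, "aluminum": 69.0}
DENSITY_G_CM3 = {"gold": 19.3, "silver": 10.49,
                 "copper": 8.96, "aluminum": 2.70}

def specific_stiffness(metal):
    """E/rho in GPa per (g/cm^3)."""
    return ELASTIC_MODULUS_GPA[metal] / DENSITY_G_CM3[metal]

for metal in ("gold", "silver", "copper", "aluminum"):
    print(f"{metal:8s} E/rho = {specific_stiffness(metal):5.1f}")

# Both copper and silver exceed gold on this figure of merit, which is
# consistent with claim 7's criterion for the bridge conductor.
assert specific_stiffness("copper") > specific_stiffness("gold")
assert specific_stiffness("silver") > specific_stiffness("gold")
```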
RELATED APPLICATIONSThis application is a divisional application of U.S. patent application Ser. No. 10/291,909 filed Nov. 8, 2002 entitled "COATING OF COPPER AND SILVER AIR BRIDGE STRUCTURES TO IMPROVE ELECTROMIGRATION RESISTANCE AND OTHER APPLICATONS", which is hereby incorporated by reference in its entirety.BACKGROUND OF THE INVENTION1. Field of the InventionThe present invention relates to integrated circuits and, in particular, relates to miniaturized electrical interconnects having reduced resistance and capacitance.2. Description of the Related ArtTo provide improved performance, manufacturers of integrated circuit devices continually strive to increase circuit density. Such devices are typically formed on a semiconductor substrate, such as a silicon wafer, and comprise a large number of miniaturized circuit elements. These elements, which include transistors, diodes, capacitors, and resistors, are usually disposed within or adjacent the substrate and define a plurality of circuit nodes. To combine the circuit elements into a useful electronic circuit, integrated circuit devices typically include a plurality of conducting paths that link the circuit nodes in a preferred manner. Typically, the conducting paths are provided by electrical interconnects comprising metallic wires including, for example, wires made of aluminum or aluminum alloy that are embedded in an insulating layer such as a layer of insulating SiO2.However, as circuit density is increased, problems associated with conventional electrical interconnects are becoming more apparent. In particular, a higher density device having an increased number of circuit elements will likely require an even greater increase in the number of electrical interconnects. 
Consequently, the electrical interconnects will need to have a reduced thickness, and adjacent interconnects will need to be spaced more closely together.
Unfortunately, such dimensional reductions tend to increase the resistance of individual interconnects and increase the capacitance between adjacent interconnects, thereby possibly increasing signal propagation delays and signal cross-talk. In particular, electrically charged adjacent conductors act as the plates of a capacitor. As the distance between adjacent conductors decreases, the resulting capacitance increases. This increase in capacitance slows the propagation of signals, as the capacitance must be overcome before a signal can propagate along the conductor. Hence, while it is desirable to increase device density on integrated circuits, considerations such as these pose problems for maintaining or improving circuit performance.
To improve the conductivity of interconnects, it has been suggested that copper metallurgy be substituted for the aluminum metallurgy that is now typically used. Advantageously, copper metallurgy interconnects are viewed as having increased conductivity and thus less resistance. The lower resistance of interconnects of this metallurgy could allow the use of smaller interconnect dimensions, thereby facilitating an increase in device density on the integrated circuit. However, several potential problems have been encountered in the development of this proposed metallurgy, one of the main ones being the fast diffusion of copper through both silicon and SiO2.
Fast diffusion of copper into silicon or silicon oxide results in diffusion of the conductive interconnect into the surrounding materials, which can degrade device performance or can result in short circuits between adjacent interconnects.
To decrease capacitive loading, it has been suggested that the interconnects could be embedded in a solid insulating medium other than SiO2, such as a polymer comprising fluorinated polyimide. However, as in the case of SiO2, an incompatibility problem with copper metallurgy has been found. In the case of polyimide, and many other polymers, it has been found that the polymer reacts with copper during curing, forming a conductive copper oxide that is dispersed within the polymer. This raises the effective dielectric constant of the polymer and in many cases increases its conductivity. Hence, there have been numerous suggested approaches to solving the problems of capacitive coupling and increased resistance that result from the need to form smaller interconnects spaced closer together. A primary difficulty is the relative incompatibility of lower-resistance materials with the surrounding insulating material.
Silver is one of the best conductors, in that it has the lowest specific resistivity of any metal or alloy. Furthermore, a vacuum is the ultimate dielectric, with air being nearly as good. However, the use of a vacuum introduces additional problems and complexities to the device: the first being the low heat conductivity of the vacuum, and the second being the cost of the package required to maintain the vacuum. Air, which has somewhat better thermal conductivity, has its own problems in that both copper and silver react with air to form oxides or other compounds. Alternatively, gold is known to be quite environmentally stable.
However, its specific resistivity is higher than that of copper and silver.
To address the problem of increased capacitance, interconnects comprising an air bridge have been developed, as described in U.S. Pat. No. 5,891,797. The air bridge is a length of conducting material that extends from a first supported end to a second supported end through an air space such that the bridge is substantially surrounded by air. Consequently, because air has a dielectric constant that is substantially less than that of SiO2, the capacitance between adjacent interconnects is reduced.
However, because the air bridge tends to sag under its own weight, the length of the air bridge is a possible limiting factor. In particular, because the air bridge is supported only at its first and second ends, gravitational forces acting on the air bridge when the bridge is horizontally disposed cause the air bridge to sag such that the unsupported middle of the air bridge is deflected downward with respect to the first and second ends. Because the degree of sagging increases as the length of the bridge is increased, the length of the air bridge cannot exceed that which would cause the air bridge to break or come into contact with another conductor of the device.
According to classical mechanics for simple air bridge structures, the center of the bridge is deflected downward with respect to the supported and constrained ends by an amount [delta] given by

[delta] = [rho]gL^4/(32Eh^2)

wherein [rho] is the mass per unit volume of the air bridge, g is the acceleration of gravity, L is the length of the air bridge, h is the height of the air bridge, and E is the modulus of elasticity of the air bridge. Consequently, aside from the geometric factors L and h, the deflection [delta] is proportional to the ratio ([rho]/E). Thus, an air bridge formed of a material having a reduced ratio ([rho]/E) will experience less sagging.
If the ends of the bridge are not considered to be constrained, then

[delta] = 5[rho]gL^4/(32Eh^2)

This is the worst-case assumption.

Material     Resistivity (n[Omega]m)   Elastic Modulus (GPa)   Mass Density (Mg/m3)   [rho]/E
Copper       16.7                      128                     8.93                   0.0698
Silver       14.7                      71                      10.5                   0.148
Gold         23.5                      78                      19.3                   0.247
Aluminum     27.5                      70                      2.7                    0.039

The table above lists the physical properties of possible air bridge materials. Both copper and silver have resistivities that are substantially less than that of aluminum and, thus, would provide air bridges with reduced resistance. Because copper has a ratio ([rho]/E) which is less than that of silver, a low-resistance bridge comprising copper would experience less sagging and, thus, would be more suitable for applications that require bridges having extended lengths. Alternatively, because silver has a resistivity less than that of copper, a bridge comprising silver would be more suitable for applications that require reduced resistance. However, as was pointed out previously, both copper and silver are susceptible to environmental degradation in an air environment.
Gold also has a resistivity less than that of aluminum. Furthermore, gold is not susceptible to environmental degradation in an air environment. However, because the resistivity of gold and the ratio ([rho]/E) of gold are substantially larger than those of silver or copper, a bridge formed of gold would have a relatively large resistance and would experience a relatively high degree of sagging.
Various processing techniques may also contribute to the effects of device reliability and environmental degradation. For one, annealing is a process involving heating and cooling of a mechanically work-hardened region, which is designed to affect the microstructure of crystalline materials.
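Before turning to processing considerations, the deflection formula and the table above can be checked numerically. The following is an illustrative sketch (not part of the patent): it evaluates the worst-case sag, delta = 5*rho*g*L^4/(32*E*h^2), for each tabulated material, using a bridge geometry (L = 0.25 mm, h = 0.25 micron) chosen to match the example dimensions given later in the description.

```python
# Illustrative sketch: apply the worst-case (unconstrained-ends) sag formula
# to the tabulated material properties. Geometry values are the example
# dimensions quoted later in the description, not fixed by the formula itself.

G = 9.81       # acceleration of gravity, m/s^2
L = 0.25e-3    # bridge length, m (0.25 mm)
h = 0.25e-6    # bridge height, m (0.25 micron)

# (resistivity in nOhm*m, elastic modulus in GPa, mass density in Mg/m^3),
# taken directly from the table above.
materials = {
    "copper":   (16.7, 128, 8.93),
    "silver":   (14.7, 71, 10.5),
    "gold":     (23.5, 78, 19.3),
    "aluminum": (27.5, 70, 2.7),
}

def worst_case_sag(e_gpa, rho_mg_m3):
    """delta = 5*rho*g*L^4 / (32*E*h^2), with unit conversions to SI."""
    rho = rho_mg_m3 * 1e3   # Mg/m^3 -> kg/m^3
    e = rho_e = e_gpa * 1e9 # GPa -> Pa
    return 5 * rho * G * L**4 / (32 * e * h**2)

for name, (res, e, rho) in materials.items():
    # rho/e in (Mg/m^3)/GPa reproduces the table's rho/E column directly.
    print(f"{name:8s}  rho/E = {rho / e:.4f}  worst-case sag = "
          f"{worst_case_sag(e, rho) * 1e6:.4f} um")
```

The printed ratios reproduce the rho/E column, and copper's smaller rho/E shows up as roughly half the sag of silver at equal geometry, matching the comparison drawn in the text.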
Annealing typically softens work-hardened microstructures by relieving residual stress caused by mechanical processes, such as polishing and/or grinding. Additionally, for sub-micron structures, chemistry is a substantially important variable for establishing high electrical conductivity in conductive interconnects. Mechanical working may significantly decrease electrical conductivity and retard grain development.
As is known in the art, abnormal grain development may be associated with a duplex grain structure caused by the dissolution of oxides during a high-temperature annealing process. The propensity for grain coarsening and duplex grains may be attributed to excessive solution temperatures and oxygen concentrations. Unfortunately, coarse grains formed during high-temperature anneals may remain present after cooling. In addition, the rate of cooling from high-temperature anneals may also detrimentally influence the mechanical properties of materials comprising high levels of impurities. Furthermore, rapid cooling may result in substantially high, non-equilibrium levels of impurities in solid solution. Alternatively, slow cooling may allow interaction between impurities and oxygen, which may lead to subsequent precipitation from solid solution. Typical high-temperature annealing techniques may thus be harmful and may detrimentally affect the chemical, electrical, and mechanical properties of crystalline materials, wherein localized inhomogeneities may change with deformation and thermal history, metal purity, and oxygen content.
The reduction in conductor size introduces additional problems: as the surface-to-volume ratio increases, as it must with reduced conductor size, the specific electromigration resistance decreases. This is a direct result of the fact that the surface diffusion rate is higher than the grain boundary diffusion rate, which in turn is higher than the "bulk" rate.
As the relative surface area increases, the surface diffusion rate, which may be up to two orders of magnitude greater than the bulk rate, becomes more and more significant.
From the foregoing, therefore, it will be appreciated that there is a need for an improved air bridge structure for an integrated circuit that not only provides a relatively small resistance but also is extendable over relatively large distances. It should also be appreciated that there exists a need to improve processing methods associated with air bridge structures for the purpose of increased reliability.
SUMMARY OF THE INVENTION
The aforementioned needs are satisfied by one aspect of the present invention, which discloses a method and device for forming an electrical interconnect comprising an air bridge structure for electrically connecting at least two circuit elements in an integrated circuit. In order to reduce the electromigration rate of copper or any other sub-micron conductor, it will be necessary to find ways to reduce the effect of surface diffusion on the electromigration rate. This can be accomplished by coating the surface of the conductor with a thin, highly adhesive coating which has a low solubility in the base conductor. Alternatively, if the coating has a significant solubility, it must have such a low diffusion rate into the conductor at the processing and use conditions that it does not penetrate the conductor during the time of service. For a copper conductor, a thin zirconium film can be deposited by selective plating or CVD onto the conductor after the last high-temperature step. The coating material must form an adherent layer upon the surface such that diffusion along the boundary between the coating and the base material is significantly less than the surface diffusion rate of the base material.
The aforementioned needs may be satisfied by a method of forming an air bridge structure between first and second circuit components on a substrate.
In one embodiment, the method may comprise forming a support structure on the substrate, forming vias in the support structure above the first and second circuit components, and depositing a conductive layer so as to form vertically extending legs in the vias and a laterally extending member between the upper portions of the vertically extending legs in a manner so as to electrically interconnect the first and second circuit components, wherein forming the laterally extending member results in an increased resistivity through the air bridge structure. The method may further comprise removing the support structure so as to suspend the laterally extending member above the substrate between the first and second circuit components via the vertically extending legs, wherein removing the support structure results in an increased resistivity through the air bridge structure, and processing the air bridge structure by re-crystallizing the laterally extending member and the vertically extending legs, which results in a decreased resistivity through the air bridge structure.
In one aspect, depositing the conductive layer may include depositing a material with a line resistivity less than that of aluminum. Also, depositing the conductive layer may include depositing a material with a ratio of modulus of elasticity over mass density (E/[rho]) that is greater than that of gold. In addition, re-crystallizing the laterally extending member may comprise coalescing the grain boundaries in a manner so as to form fewer grain boundaries. Coalescing the grain boundaries may occur at room temperature. Coalescing the grain boundaries may include performing a heat treatment. Coalescing the grain boundaries may improve the electrical properties of the conductive layer. Improving the electrical properties of the conductive layer includes enhancing the electromigration resistance of the conductive layer.
Improving the electrical properties of the conductive layer includes enhancing the diffusion resistance of the conductive layer.
In another aspect, the method may further comprise annealing the air bridge structure. Also, forming the laterally extending member may include planarizing the conductive layer using a CMP process. Forming the conductive layer may comprise depositing at least one of copper, silver, and gold. In addition, the method may further comprise forming an adherent coating on the air bridge structure. Forming the adherent coating on the air bridge structure may include depositing at least one of titanium, zirconium, and hafnium on the air bridge structure.
The aforementioned needs may be satisfied by a method of forming an air bridge structure between first and second circuit components on a substrate. In another embodiment, the method may comprise forming a support structure on the substrate, forming vias in the support structure above the first and second circuit components, and depositing a conductive layer so as to form vertically extending legs in the vias and a laterally extending member between the upper portions of the vertically extending legs in a manner so as to electrically interconnect the first and second circuit components, wherein forming the laterally extending member exposes grain boundaries adjacent the surface of the laterally extending member, resulting in an increased resistivity through the air bridge structure.
The method may further comprise removing the support structure so as to suspend the laterally extending member above the substrate between the first and second circuit components via the vertically extending legs, wherein removing the support structure results in forming grain boundaries in the laterally extending member and the vertically extending legs, which increases resistivity through the air bridge structure, and processing the air bridge structure by coalescing the grain boundaries so as to form fewer grain boundaries, which results in a decreased resistivity through the air bridge structure.
The aforementioned needs may be satisfied by a method of forming an electrical interconnect for an integrated circuit having a substrate with at least two semiconductor components. In still another embodiment, the method may comprise forming a bridge structure having a crystalline microstructure by laterally extending a first material between the at least two semiconductor components in a manner so as to suspend the first material in a gaseous medium above the substrate of the integrated circuit, wherein forming the first material produces grain boundaries in the crystalline microstructure, and recrystallizing the first material in a manner so as to form fewer grain boundaries in the crystalline microstructure, wherein forming fewer grain boundaries improves the electrical properties of the first material.
The method may further comprise insulating the first material with a second material so as to substantially reduce environmental degradation of the first material, and applying a heat treatment in a manner so as to strengthen the adhesive bond between the first and second materials, wherein the heat treatment further improves the electrical properties of the first material.
The aforementioned needs may also be satisfied by an integrated circuit device comprising a semiconductor substrate, at least two circuit components formed on the semiconductor substrate and spaced distally apart, and at least one bridge structure laterally extending between the at least two circuit components in a suspended manner above the semiconductor substrate so as to electrically interconnect the at least two circuit components, wherein the at least one bridge structure is disposed adjacent a gaseous medium so as to reduce the capacitance of the at least one bridge structure, and wherein the at least one bridge structure comprises a reduced grain boundary component that is processed so as to improve the electrical properties of the at least one bridge structure.
In one aspect, the at least one bridge structure may comprise a conductive material. The conductive material may include copper. The conductive material may include silver. The conductive material may include gold. The conductive material may comprise a resistivity less than that of aluminum. The conductive material may comprise a ratio of modulus of elasticity over mass density (E/[rho]) that is greater than that of gold. The at least one bridge structure may be coated with an insulating material so as to improve the environmental degradation resistance of the at least one bridge structure. The insulating material may include at least one of titanium, zirconium, hafnium, chromium, and vanadium. The gaseous medium may comprise air. The gaseous medium may comprise a non-conductive fluid.
The non-conductive fluid may include a non-conductive gas. The non-conductive gas may include carbon dioxide. The gaseous medium may comprise an insulating material. The insulating material may comprise at least one of a polymer, a foamed polymer, a polyimide, a foamed polyimide, an inorganic material, and a porous inorganic material.
From the foregoing, it should be apparent that the electrical interconnect of the present invention and methods of providing the same provide many advantages over interconnects known in the art. In particular, because the bridge section of the interconnect is disposed adjacent an air space instead of a solid insulating material, the bridge may comprise a reduced capacitance. In addition, because the material of the bridge structure is more conductive than that which is used in typical interconnects, the interconnect of the present invention may be formed with an increased length and a reduced cross-sectional area. Moreover, processing the bridge structure so as to coalesce grain boundaries prior to applying the adherent coating may enhance the electrical properties of the bridge structure such that the bridge structure comprises a lower resistivity. These and other objects and advantages of the present invention will become more apparent from the following description taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a schematic diagram of an integrated circuit device according to one aspect of the present invention, the device comprising at least one electrical interconnect having an air bridge structure;
FIG. 2 illustrates a cross-sectional schematic diagram of the air bridge of FIG. 1 as seen along a y-axis;
FIG. 3 illustrates a cross-sectional schematic diagram of the air bridge of FIG. 1 as seen along an x-axis;
FIG. 4 illustrates a flow chart of one embodiment of a method of forming the electrical interconnect of FIG. 1;
FIG. 5 illustrates a cross-sectional schematic diagram of one embodiment of the electrical interconnect of FIG. 1 in a partially fabricated state according to the method of FIG. 4;
FIG. 6 illustrates a flow chart of one embodiment of a method of forming adjacent electrical interconnects having overlapping air bridge sections;
FIG. 7 illustrates one embodiment of a cross-sectional schematic diagram of the electrical interconnects of FIG. 1 in a partially fabricated state according to the method of FIG. 6;
FIG. 8 illustrates one embodiment of an integrated circuit having an air bridge electrical interconnect interposed between two circuit elements;
FIGS. 9A-9H illustrate one embodiment of forming the air bridge electrical interconnect of FIG. 8 with enhanced electrical properties including improved electromigration properties; and
FIG. 10 illustrates one embodiment of a process flow for forming the air bridge electrical interconnect of FIGS. 8 and 9A-9H with enhanced electrical properties.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The illustrated embodiments of the present invention comprise a miniaturized electrical interconnect having improved operating characteristics and methods for providing the same. The electrical interconnect includes a bridge section surrounded by air, referred to hereinbelow as an "air bridge", so as to reduce the capacitance of the interconnect. Air bridges are also described in U.S. Pat. No. 5,891,797, which is incorporated herein by reference in its entirety.
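The capacitance benefit of surrounding the bridge with air instead of SiO2 can be illustrated with a rough parallel-plate estimate. The following sketch is not part of the patent, and the wire dimensions in it are hypothetical round numbers chosen only for illustration; it simply compares the coupling capacitance of two adjacent conductors for the two dielectrics.

```python
# Illustrative sketch: parallel-plate estimate of the coupling capacitance
# between two adjacent interconnects, comparing an SiO2 dielectric with air.
# Wire dimensions below are hypothetical, not taken from the patent.

EPS0 = 8.854e-12   # vacuum permittivity, F/m
ER_SIO2 = 3.9      # relative permittivity of SiO2
ER_AIR = 1.0006    # relative permittivity of air (very close to vacuum)

def coupling_capacitance(er, length_m, height_m, spacing_m):
    """C = er * eps0 * A / d, treating the facing sidewalls of two
    parallel wires as the capacitor plates."""
    area = length_m * height_m
    return er * EPS0 * area / spacing_m

# Hypothetical geometry: 1 mm long wires, 0.25 um tall, spaced 0.25 um apart.
c_sio2 = coupling_capacitance(ER_SIO2, 1e-3, 0.25e-6, 0.25e-6)
c_air = coupling_capacitance(ER_AIR, 1e-3, 0.25e-6, 0.25e-6)

print(f"C with SiO2: {c_sio2 * 1e15:.1f} fF")
print(f"C with air:  {c_air * 1e15:.1f} fF")
print(f"reduction factor: {c_sio2 / c_air:.2f}")
```

Because the geometry divides out of the ratio, the roughly 3.9-fold reduction in coupling capacitance depends only on the dielectric constants, which is the basis of the air bridge's capacitance advantage described above.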
As will be described in greater detail below, air bridges formed in accordance with the various aspects of the present invention are provided with reduced resistance and a reduced tendency to sag, thereby enabling them to have a reduced cross-sectional area and to extend across larger distances.
Improved electrical interconnects formed according to the methods of the illustrated embodiments are particularly useful in the manufacture of ultra-high density integrated circuit devices, such as a Dynamic Random Access Memory (DRAM), a microprocessor, or a Digital Signal Processor (DSP). It should be understood, however, that the methods described hereinbelow could be used in any application or structure in which it is desirable to include improved miniaturized electrical interconnects. Furthermore, the methods of the present invention are particularly well-suited for providing improved electrical interconnects on or above a semiconductor substrate, such as a silicon wafer, or substrate assembly, referred to herein generally as "substrate," used in forming any of a number of conventional integrated circuits.
It should be understood that the methods of the present invention are not limited to integrated circuits formed on silicon wafers; rather, other types of wafers (e.g., gallium arsenide, etc.) can be used as well. Thus, the skilled artisan will find application for the processes and materials discussed below for any of a number of devices requiring improved electrical interconnects.
Reference will now be made to the drawings wherein like numerals refer to like parts throughout. FIG. 1 schematically illustrates an integrated circuit device 30 according to one aspect of the present invention. The integrated circuit 30 comprises a plurality of known circuit components, such as transistors, resistors, capacitors and the like, formed in a well known manner. The circuit components are formed within, on, or above a substrate 32.
In one embodiment, the substrate 32 has a planar shape and is aligned in an x-y plane as shown in FIG. 1. The circuit components define a plurality of circuit nodes which are interconnected by way of a plurality of improved electrical interconnects as will be described in greater detail below.
As schematically illustrated in FIG. 1, the integrated circuit comprises a first electrical interconnect 34 extending from a first node 36 to a second node 38 of the integrated circuit 30. The first interconnect 34 has an elongate shape which is shown extending in a linear manner along the y-axis. However, it will be appreciated that, in another embodiment, the first interconnect 34 could have a different shape and extend in a non-linear manner along virtually any direction with respect to the substrate. In one embodiment, the integrated circuit 30 further comprises a substantially similar second electrical interconnect 40 extending along the x-axis from a third node 42 to a fourth node 44 such that the second interconnect 40 overlaps the first interconnect 34 as shown in FIG. 1. While the nodes 36, 38, 42 and 44 are described in this embodiment as being positioned within the substrate 32, it will be appreciated by a person of ordinary skill that the nodes can actually be formed in an insulating layer positioned over the substrate 32. Hence, the use of the bridge structures described herein should not be viewed as being limited to use with nodes formed in the substrate, as they can be used between nodes formed in or above the substrate 32.
FIGS. 2 and 3 further illustrate the integrated circuit 30 of FIG. 1, wherein FIG. 2 is a schematic diagram corresponding to a view along the x-axis and FIG. 3 is a schematic diagram corresponding to a view along the y-axis.
The first interconnect 34 comprises a first end section 46 extending from the first node 36, a second end section 48 extending from the second node 38, and a bridge section 50 extending between the first and second end sections 46 and 48. The first end section 46 supports a first end 47 of the bridge section 50 and the second end section 48 supports a second end 49 of the bridge section 50. Consequently, gravitational forces acting on the bridge section 50 cause an unsupported midpoint 51 of the bridge section 50 to sag such that the midpoint 51 is downwardly displaced with respect to the ends 47 and 49. As will be described in greater detail below, in one embodiment, the bridge section 50 comprises a material having a reduced ratio of mass density over modulus of elasticity ([rho]/E) so as to reduce the degree of sagging.
The end sections 46 and 48 and the bridge section 50 comprise a conducting material that provides a conducting path extending between the first and second nodes 36 and 38. Furthermore, the bridge section 50 is disposed in a plane that is outwardly displaced from the plane of the substrate. Moreover, the bridge 50 extends through a space 52 having a gaseous medium disposed therein such that the bridge section 50 is substantially surrounded by the gaseous medium. In the preferred embodiment, the gaseous medium comprises air or another low-dielectric gaseous mixture. Consequently, because air has a relatively small dielectric constant, the first interconnect 34 is provided with a relatively small capacitance with respect to nearby conducting elements of the device.
In one embodiment, the first and second end sections 46 and 48 laterally extend from the respective first and second nodes 36 and 38 and the bridge section 50 longitudinally extends therebetween. However, a person skilled in the art will realize that the methods described herein could also be used to form interconnects having an alternative geometry.
For example, the end sections could extend from the nodes 36 and 38 with longitudinal components and the bridge section could extend with a lateral component. Furthermore, rather than extending along a plane disposed away from the substrate, in another embodiment, the bridge section could extend through a trench formed within the substrate such that the bridge section substantially overlaps the plane of the substrate. Moreover, in yet another embodiment, the electrical interconnect could consist solely of the bridge section such that the bridge section extends directly from the first node to the second node through the trench formed in the substrate or in an insulating layer formed on the substrate.
In one embodiment, the second interconnect 40 comprises a second air bridge 54 which is substantially similar to the air bridge 50 of the first interconnect 34. As shown in FIGS. 2 and 3, the second air bridge extends between laterally disposed first and second end sections 56 and 58 through the air space 52 such that the second bridge 54 is disposed above the first air bridge 50. Thus, because air separates the first and second air bridges 50 and 54, the capacitance between the first and second interconnects 34 and 40 is reduced. Consequently, independent signals propagating along the first and second interconnects 34 and 40 are less likely to interfere with each other, and the speed of signal propagation will be less affected by capacitance.
As shown in FIGS. 2 and 3, the air bridge 50 comprises a core 60 extending along its length that provides the air bridge 50 with desirable electrical and mechanical properties. In particular, to promote conduction along its length, the core 60 preferably comprises a highly conductive material. Furthermore, to reduce sagging, the core 60 preferably comprises a material having a relatively small ratio of mass density over modulus of elasticity ([rho]/E).
As mentioned above in the background section, materials having relatively low resistivity and relatively low [rho]/E include copper and silver. In one embodiment, the core 60 comprises copper. In another embodiment, the core 60 comprises silver.
As shown in FIGS. 2 and 3, the air bridge 50 further comprises a tightly adherent coating 62 that is deposited on the core 60 and substantially surrounds the core 60, such that the coating 62 is interposed between the core 60 and the air of the air space 52. In one embodiment, the purpose of the adherent coating 62 is to provide the air bridge 50 with improved electromigration resistance along with desirable environmental properties. In another embodiment, the adherent coating 62 comprises a protective coating that serves as a protective barrier which prevents surface diffusion as well as inhibiting contaminants, such as oxygen, from reaching the core 60. Furthermore, the coating 62 preferably comprises a material having a low solubility with respect to the core 60 that does not readily diffuse into the core and significantly degrade the conductivity of the core 60. Thus, because the core 60 is substantially shielded from the air space 52, the core 60 is able to include environmentally sensitive materials, such as copper or silver, that provide the bridge 50 with reduced resistance and reduced sagging.
In one embodiment, the coating 62 comprises a conducting material that reduces surface diffusion, inhibits air molecules from diffusing therethrough, and enhances conduction along the bridge 50. For example, the coating can include the reactive elements titanium, zirconium or hafnium. If one of the reactive elements is used, zirconium may be preferred due to its low solubility in both copper and silver.
In another embodiment, the coating 62 comprises an insulating material that inhibits air molecules from reaching the core 60.
For example, the coating 62 could comprise an inorganic material such as Si3N4.
Reference will now be made to FIGS. 4-5, which illustrate a method 100 of forming an individual electrical interconnect according to one embodiment of the present invention. As will be described in greater detail below, the method 100 essentially comprises forming the core 60 of the interconnect 34 and then disposing the coating 62 on the core 60.
As shown in FIG. 4, in one embodiment, the method 100 comprises, in a state 102, forming a temporary support structure or mandril. The purpose of the mandril is to provide a supporting surface that supports the bridge section of the electrical interconnect during formation of the bridge section. The mandril can be formed from any of a wide variety of materials that provide the electrical interconnect with temporary support and that can subsequently be removed to expose a lower surface of the air bridge section.
For example, as shown in FIG. 5, the temporary support structure may comprise a layer 104 of photoresist which is deposited over the substrate 32 using conventional deposition techniques. The photoresist layer 104 is deposited with a substantially uniform thickness such that the substrate 32 is substantially covered by the layer 104 and the first and second nodes 36 and 38 are disposed under the layer 104. The thickness of the layer 104 is selected so as to provide a desired separation distance between a lower surface 106 of the bridge 50 and an upper surface 108 of the substrate 32.
As shown in FIG. 4, the method 100 further comprises, in a state 120, modifying the mandril so as to expose the first and second nodes 36 and 38. In particular, using conventional etching techniques, first and second vias 110 and 112 are formed in the mandril that vertically extend from an upper surface 114 of the mandril 104 to the respective first and second nodes 36 and 38 of the circuit 30 as shown in FIG. 5.
As shown in FIGS.
4 and 5, the method 100 further comprises, in a state 130, depositing a conducting layer 132 over the mandril 104 such that the conducting layer 132 horizontally extends across the upper surface 114 of the mandril between the vias 110 and 112, so as to subsequently form the core 60 of the bridge 50, and vertically extends through the vias 110 and 112 to contact the first and second nodes 36 and 38 so as to provide the end sections 46 and 48 of the interconnect 34. Because the conducting layer 132 will eventually become the core 60 of the first electrical interconnect 34, the conducting layer 132 preferably comprises a highly conductive material having a relatively small ratio of [rho]/E, such as silver or copper, thereby providing the electrical interconnect 34 with a relatively small resistance and a reduced tendency to sag, as will be described in greater detail hereinbelow.

As shown in FIGS. 4 and 5, the method 100 further comprises, in a state 140, modifying the conductive layer so as to define the shape of the core 60 of the electrical interconnect 34. For example, the core 60 can be shaped by employing conventional pattern and etch processes that leave behind the first and second end sections 46 and 48 vertically extending from the respective nodes 36 and 38 and also leave behind the bridge section 50 horizontally extending between the end sections 46 and 48 as shown in FIG. 5.

However, it will be appreciated that the core 60 of the interconnect 34 could be formed in an alternative manner without departing from the spirit of the present invention. For example, in an alternative embodiment, the core 60 could be formed by defining a trench in the mandril, depositing conductive material in the trench, and removing excess conductive material using a conventional chemical mechanical planarization process.

As shown in FIG. 4, the method 100 further comprises, in a state 150, removing the mandril so as to expose the lower surface 106 of the core.
In one embodiment, the photoresist layer 104 is removed by exposing the photoresist layer 104 to a known etchant that selectively removes the photoresist layer 104 and does not remove the core 60 of the interconnect 34.

As shown in FIG. 4, the method 100 further comprises, in a state 154, disposing the adherent coating 62 on the exposed surfaces of the core 60 of the electrical interconnect 34. In one embodiment, disposing the adherent coating 62 comprises depositing a layer of conductive material selected from the group comprising the noble metals gold, platinum, palladium, and iridium, and the reactive elements titanium, zirconium and hafnium. Furthermore, the conductive material of the coating 62 can be deposited using a known electroless plating process, or a known chemical vapor deposition (CVD) process, such as Plasma Enhanced Chemical Vapor Deposition (PECVD).

In one embodiment, disposing the coating 62 comprises depositing an insulating material. For example, the insulating material can comprise an inorganic material, such as Si3N4, which can be deposited using PECVD. If diffusion of the coating 62 into the core 60 is a concern, such diffusion can be reduced by not exposing the bridge 50 to elevated temperatures. Preferably, the conductive material is deposited so that the material only deposits on the core 60.

In one embodiment, the bridge section 50 of the first interconnect 34 has a rectangular cross-sectional shape with a width approximately equal to 0.25 microns and a height approximately equal to 0.25 microns. Consequently, the bridge section 50 comprising the copper core 60 is able to span a distance of 0.25 mm with a worst-case sagging deflection approximately equal to 0.0065 microns. Furthermore, at this length, the bridge section 50 provides a resistance of only 67 ohms. Alternatively, if the core 60 is formed of silver, the bridge section 50 has a resistance approximately equal to 59 ohms and a sagging deflection approximately equal to 0.014 microns.
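The figures quoted for the 0.25 micron by 0.25 micron bridge can be approximately reproduced from textbook constants. The sketch below is an illustration under stated assumptions: the material constants are nominal room-temperature values, the sag model (a simply supported square-section beam deflecting under its own weight) is inferred from the quoted numbers rather than stated in the text, and the helper name `bridge_figures` is ours.

```python
# Sketch reproducing the bridge figures quoted above. Material constants
# are ASSUMED textbook room-temperature values; the simply supported
# beam sag model is an ASSUMPTION, not stated in the text.

G = 9.81  # gravitational acceleration, m/s^2

# material -> (electrical resistivity ohm*m, Young's modulus Pa, density kg/m^3)
MATERIALS = {
    "copper":   (1.68e-8, 128e9, 8960.0),
    "silver":   (1.59e-8,  83e9, 10490.0),
    "aluminum": (2.65e-8,  70e9, 2700.0),
}

def bridge_figures(material, span=0.25e-3, side=0.25e-6):
    """Resistance (ohms) and midspan sag (m) of a square-section bridge."""
    rho_e, young, rho_m = MATERIALS[material]
    area = side * side                    # cross-sectional area, m^2
    resistance = rho_e * span / area      # R = rho * L / A
    second_moment = side**4 / 12.0        # I for a square section
    weight = rho_m * G * area             # weight per unit length, N/m
    # simply supported beam under uniform load: delta = 5*w*L^4 / (384*E*I)
    sag = 5.0 * weight * span**4 / (384.0 * young * second_moment)
    return resistance, sag
```

With these assumptions, `bridge_figures("copper")` gives roughly 67 ohms and a sag of roughly 0.0067 microns, in line with the copper figures above; the silver and aluminum outputs fall close to the quoted 59/110 ohm and 0.014/0.0035 micron values.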
In comparison, a similarly shaped aluminum bridge section would provide a substantially larger resistance, approximately equal to 110 ohms, and experience a sagging deflection approximately equal to 0.0035 microns. As can be seen from the above example, the limiting factor for long aluminum bridge structures is the line resistivity, not the tendency to sag. If 50 ohms were the limit, then copper and silver would both be marginally acceptable at this dimension while aluminum would be unacceptable.

Preferably, the coating 62 has a thickness that substantially inhibits contaminants, such as oxygen, residing in the air space 52 from reaching the core, as well as adhering tightly to the core such that surface diffusion is significantly reduced. For example, in one embodiment, the coating 62 including one of the conducting materials listed above has a thickness approximately between 20 Å and 40 Å. In another embodiment, the coating 62 including one of the insulating materials listed above has a thickness approximately between 10 Å and 100 Å.

Reference will now be made to FIGS. 6-7, which illustrate a method 200 of forming a plurality of adjacent electrical interconnects having overlapping air bridge sections in accordance with yet another embodiment of the present invention. As shown in FIG. 6, the method 200 comprises forming the mandril 104 in a state 202, and forming the core 60 of the first interconnect 34 above the mandril in a state 204 in the manner described above in connection with FIG. 4.

As shown in FIGS. 6 and 7, the method further comprises, in a state 206, extending the mandril 104 with a second photoresist layer 208 that covers the core 60 of the bridge section 50 of the first interconnect. The purpose of the second layer 208 is to support and elevate a core 41 of the second interconnect 40 above the first core 60. The second layer 208 includes an upper surface 210 which is displaced above an upper surface 212 of the first core 60.
The thickness of the second layer 208 is selected so as to provide a desired distance between the upper surface 212 of the first core 60 and the upper surface 210 of the second photoresist layer 208.

As shown in FIG. 6, the method further comprises forming the second interconnect 40, in a state 220, so that the second interconnect 40 extends between the third and fourth circuit nodes 42 and 44 of the integrated circuit (FIGS. 2 and 3). The second interconnect 40 is preferably formed using the methods described above in connection with FIG. 4, i.e., forming vias in the mandril layers 104, 208, depositing a layer of conducting material over the mandril, and patterning the conducting material. Furthermore, the overlapping core 41 of the second interconnect 40 preferably extends along a direction that is perpendicular to that of the first interconnect 34 so as to reduce capacitive coupling between the first and second interconnects.

As shown in FIG. 6, the method further comprises, in a state 222, removing the mandril. In one embodiment, removing the mandril comprises removing the mandril layers 104 and 208 after completing the cores 60 and 41 of the respective electrical interconnects 34 and 40. In particular, after forming the first and second cores 60 and 41, the first and second photoresist layers 104 and 208 are removed in a single etching process. However, it will be appreciated that, in another embodiment, the first mandril layer 104 could be removed in a first etching process subsequent to forming the first core 60, and the second mandril layer 208 could be removed in a separate second etching process subsequent to forming the second core 41.

As shown in FIG. 6, the method 200 further comprises depositing the adherent coating 62 in a state 224. In one embodiment, the adherent coating 62 is simultaneously deposited on the cores 60 and 41 of the first and second interconnects 34 and 40 in the manner described above in connection with FIG. 4.
The advantage of simultaneously depositing the coating on both cores 60 and 41 is that fewer processing steps are needed. However, it will be appreciated that each of the cores 60 and 41 could be coated during separate deposition stages without departing from the spirit of the present invention.

It will be appreciated that the electrical interconnect, and the methods for providing the same, of the present invention provide many advantages. In particular, because the interconnect includes the air bridge which is surrounded by air, the interconnect is provided with a reduced capacitance. Consequently, the interconnect is less susceptible to the problems of signal delay and signal cross-talk. Furthermore, because the core of the air bridge is formed of highly conductive material, the air bridge is able to have a reduced resistance, thereby further reducing signal delays. Moreover, because the core of the air bridge is formed of a material having a relatively low ratio of [rho]/E, the air bridge is less susceptible to the problems of sagging. Thus, the air bridge is less likely to fracture and/or contact adjacent structures of the integrated circuit, thereby allowing adjacent interconnects to be spaced more closely together and span larger distances. Additionally, because the core of the air bridge is surrounded by the adherent coating, the surface diffusion rate of the core material will be substantially reduced, thus increasing the electromigration resistance of the structure. The oxygen from the air surrounding the air bridge is also inhibited from reacting with the core, which would otherwise contaminate the core and could possibly increase the resistance of the core and decrease the mechanical strength of the core.

FIG. 8 illustrates one embodiment of an integrated circuit 300.
The integrated circuit 300 comprises a substrate 302 having an upper surface 304, a first circuit element 310 having a first mounting region 312, and a second circuit element 320 having a second mounting region 322. The circuit elements 310, 320 may comprise generally known transistors or other types of circuit elements, such as resistors and capacitors, having a plurality of mounting regions without departing from the scope of the present invention. The first and second mounting regions 312, 322 function as electrical contact points for the first and second circuit elements 310, 320, respectively. Additionally, the first and second mounting regions 312, 322 may be formed in a known manner with a conductive material, such as polysilicon, aluminum, copper, or silver, using known metallization and/or deposition techniques, such as CVD and damascene processes.

In one aspect, the illustrated substrate 302 may comprise a conventional silicon wafer, but may generally encompass structures comprising semiconductor material, including, but not limited to, bulk semiconductor materials such as a semiconductor wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). In addition, the term "substrate" may also encompass any supporting structures, including, but not limited to, the semiconductive substrates described above. Furthermore, when reference is made to the substrate within the following description, previous process steps may have been utilized to form regions, structures, or junctions in or on its base semiconductor structure or foundation.

FIG. 8 further illustrates one embodiment of the formation of an electrical interconnect 330.
In this particular embodiment, the electrical interconnect 330 comprises an air bridge structure that is formed to laterally extend between the first and second circuit elements 310, 320 in a suspended manner above the substrate 302 so as to electrically interconnect the at least two circuit elements 310, 320. A first distal end of the air bridge structure 330 is attached to the first mounting region 312 of the first circuit element 310, and a second distal end of the air bridge structure 330 is attached to the second mounting region 322 of the second circuit element 320 so as to electrically interconnect the first and second circuit elements 310, 320. In one aspect, the plane of the air bridge structure 330 may be substantially parallel to the plane of the substrate surface 304. It should be appreciated that the plane of the air bridge structure 330 may vary in orientation, and the height at which the air bridge structure 330 is suspended may vary in magnitude, without departing from the scope of the present invention.

As illustrated in FIG. 8, the electrical interconnect or air bridge structure 330, in one embodiment, may comprise a laterally extending member 331, a first vertically extending leg 334, and a second vertically extending leg 336. The laterally extending member 331 may comprise a substantially planar structure that is substantially parallel to the substrate 302. The first vertically extending leg 334 may be interposed between the first distal end of the laterally extending member 331 and the first mounting region 312 of the first circuit element 310. The second vertically extending leg 336 may be interposed between the second distal end of the laterally extending member 331 and the second mounting region 322 of the second circuit element 320.
In one aspect, the first and second vertically extending legs 334, 336 distally extend from the substrate 302 in a substantially perpendicular manner and form an electrical contact with the laterally extending member 331. It should be appreciated that the orientation and height at which the laterally extending member is suspended above the substrate 302 may vary and depend on the length of the vertically extending legs 334, 336 without departing from the scope of the present invention.

It should be appreciated that the electrical interconnect or air bridge structure 330 may comprise the scope and functionality of the bridge section 50 as described with reference to FIGS. 1-3, and may be formed in a manner as previously described without departing from the scope of the present invention. In addition, the electrical interconnect or air bridge structure 330 may comprise a conductive material, such as copper or silver, that may be adapted to substantially improve the electromigration resistance as well as the electrical properties of the air bridge structure 330.

As further illustrated in FIG. 8, the electrical interconnect or air bridge structure 330 may comprise a plurality of surfaces with exposed grain boundaries 332. Grain boundaries may appear as irregular crystalline lattice boundaries where grain interfaces coalesce. The exposed grain boundaries 332 may be the result of using a CMP process to planarize the upper surface of the air bridge structure 330. Otherwise, exposed grain boundaries 332 near the surfaces may be the result of deposition irregularities during metallization. CMP processes may cause mechanical deformation of surfaces due to the applied mechanical polishing effects of rotating components used during the CMP process.
Work-hardened materials tend to adversely affect the electrical properties of crystalline structures. As layers of material are polished away in a substantially uniform manner, the surfaces become work-hardened and grain boundaries 332 are exposed near the surfaces. Beneficially, a CMP process creates substantially uniform surfaces, but CMP processes tend to expose substantially large quantities of grain boundaries 332 near the surfaces. Unfortunately, an increase in the number of exposed grain boundaries 332 tends to increase surface diffusion and grain boundary diffusion of impurities into the structure, which may detrimentally affect the integrity and purity of the composition. When impurities diffuse into the structure, a reduced integrity of composition results, and undesirable electrical properties may also result, which may lead to an increased resistivity or reduced conductivity of the structure. In addition, the number of exposed grain boundaries 332 may also lead to increased surface electromigration and grain boundary electromigration, which may detrimentally affect the reliability of the device.

In one aspect, electromigration may be induced by an electric current in the bulk material and may refer to the directed motion of atoms at solid surfaces, grain boundaries, and grain interfaces. The Applicant considers electromigration a key factor in determining the reliability of integrated circuits. As integrated circuit miniaturization continues and component densities increase, failures may occur when the interconnect line dimensions are relatively similar in size to, or smaller than, the grain proportions of the material. In one embodiment, grain boundaries no longer provide connected diffusion paths along the conductive path. Instead, failure occurs due to intragranular voids, which may nucleate at the edges of the conductive path or interconnect path, migrate in the current direction, and collapse.
The proposed failures may also comprise diffusive displacements at the terminals of the interconnect line that may inhibit electrical contact. Both of these failure modes may be affected by the microstructure of the interconnect line and may be delayed or overcome by metallurgical changes that alter the crystalline microstructure. When electrons are conducted through a metal, they interact with imperfections in the lattice structure of the atoms and scatter. Scattering occurs whenever an atom is out of place for any reason. Thermal energy produces scattering by causing atoms to vibrate, and this may be considered the source of resistance of conductive materials. The higher the temperature, the more out of place the atom is, the greater the scattering, and the greater the resistivity. Under these conditions, electromigration may lead to the electrical failure of interconnects in relatively short times, which may reduce the lifetime of the integrated circuit to an unacceptable level.

Advantageously, electromigration and diffusion may be deterred by coalescing grain boundaries or re-crystallizing the crystalline microstructure of the electrical interconnect or air bridge structure 330 in a manner that will be described in greater detail herein below. As is illustrated in FIG. 8, by allowing the structural composition of the air bridge structure 330 to re-crystallize 340, the quantity of exposed grain boundaries 332 at the surfaces may be reduced. In one aspect, the process of re-crystallization comprises a change in the grain structure of a material during which the deformed grains, strain hardened by working, become new unstrained grains. Re-crystallization promotes grain development, wherein individual grains coalesce to form larger and fewer grains. As the material re-crystallizes, minute crystals may appear in the grains of the microstructure.
These minute crystals may comprise the same composition and lattice structure as the original undeformed grains, which may comprise substantially uniform dimensions. The minute crystals may nucleate at the most drastically deformed portions of the grain, such as the grain boundaries. The cluster of atoms from which the re-crystallized grains are formed may comprise a nucleus. Re-crystallization takes place by a combination of nucleation of the strain-free grains and the development of these nuclei.

In one aspect, the temperature at which material re-crystallizes and/or coalesces is dependent on the characteristics of the material itself. The material used to form the air bridge structures may comprise, at least in part, various metals, such as copper, silver, gold, platinum, palladium, and iridium, and various reactive elements, such as titanium, zirconium, and hafnium, or some combination thereof. It should be appreciated that, for example, if one of the reactive elements is used, zirconium may be preferred due to its low solubility in both copper and silver. It should be appreciated by those skilled in the art that the rate and temperature of re-crystallization depend not only on the material but also on the extent of prior cold work. Below is one embodiment of a table of approximate re-crystallization temperatures for the above-mentioned materials.

Material      Approximate Re-crystallization Temperature
copper        200-400[deg.] C.
silver        200-400[deg.] C.
gold          200[deg.] C.
platinum      400[deg.] C.

As a result of reducing the number of exposed grain boundaries near the surface of the electrical interconnect or air bridge structure 330, improved electrical properties may be achieved from an increase in the electromigration resistance and an increase in the diffusion resistance of the re-crystallized microstructure.
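The tabulated temperatures can be captured as a simple lookup. The sketch below is a hypothetical helper, not part of the disclosed method: the ranges are those listed in the table, and, as noted in the text, the actual re-crystallization temperature also depends on the extent of prior cold work.

```python
# Approximate re-crystallization temperatures from the table above,
# stored as (low, high) ranges in degrees C. The helper is a
# HYPOTHETICAL sketch; real temperatures also depend on prior cold work.

RECRYSTALLIZATION_C = {
    "copper":   (200, 400),
    "silver":   (200, 400),
    "gold":     (200, 200),
    "platinum": (400, 400),
}

def anneal_reaches_recrystallization(material, anneal_temp_c):
    """True if the anneal temperature reaches the low end of the tabulated range."""
    low, _high = RECRYSTALLIZATION_C[material]
    return anneal_temp_c >= low
```

For example, under this lookup a 250[deg.] C. anneal would reach the copper range but fall short of the platinum value.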
To further enhance the electromigration resistance and the diffusion resistance, the air bridge structure 330 may be coated with an insulating material in a manner as previously described. Advantageously, improved electrical properties of the air bridge structure 330 may increase the reliability and structural integrity of the air bridge structure 330. Beneficially, by allowing re-crystallization to occur, a reduction in the quantity of exposed grain boundaries is achieved.

FIGS. 9A-9H illustrate one embodiment of forming the electrical interconnect 330 of FIG. 8 with enhanced electrical properties. As illustrated in FIG. 9A, the substrate 302 may comprise an upper surface where the first and second mounting regions 312, 322 are positioned in a manner as described in FIG. 8. FIG. 9B illustrates the deposition of a support layer 350 that may be used to form a temporary support structure, such as the previously described mandril. The support layer 350 may comprise an insulating material, such as silicon dioxide, that may be globally deposited so as to overlie the substrate 302 and the mounting regions 312, 322 in a generally known manner using deposition techniques, such as chemical vapor deposition (CVD). In one embodiment, the support layer 350 may be planarly etched to a first height 370 using generally known chemical-mechanical polishing (CMP) techniques.

FIG. 9C illustrates the formation of first and second vias 352, 354 and a mandril or temporary support structure 356 in the support layer 350. The vias 352, 354 may be formed using a generally known pattern and etch technique, wherein the first via 352 is etched in a manner so as to expose the upper surface of the first mounting region 312, and the second via 354 is etched in a manner so as to expose the upper surface of the second mounting region 322.
During etching of the vias 352, 354, the temporary support structure 356 may be formed using pattern and etch techniques in a similar manner that is generally known. In one embodiment, the temporary support structure 356 may be etched so as to retain the first height 370 as described with reference to the support layer 350 in FIG. 9B. Moreover, the temporary support structure 356 will provide support for the laterally extending member 331 during deposition in a manner that will be described in greater detail herein below.

FIG. 9D illustrates the temporary support structure 356 in a modified form so as to define a second height 372 that is at least smaller than the first height 370. In one embodiment, the first height 370 of the temporary support structure 356 may be reduced using generally known pattern and etch techniques to the second height 372, so that the support layer remnants 350 retain the first height 370 and the temporary support structure 356 comprises the second height 372 as illustrated in FIG. 9D.

FIG. 9E illustrates the deposition of a conductive layer 358 that may be used to form the laterally extending member 331 and the vertically extending legs 334, 336 of the air bridge structure 330. The conductive layer 358 may comprise a conductive material, such as aluminum, copper, silver, or gold, that may be globally deposited so as to overlie the insulation layer 350, including the temporary support structure 356. The conductive material may also be deposited into the vias 352, 354 in a similar manner. The conductive material may be deposited in a generally known manner using known deposition techniques, such as chemical vapor deposition (CVD), plasma enhanced CVD (PECVD), vacuum evaporation, electroplating, or sputtering. As illustrated in FIG. 9E, it should be appreciated that, due to global deposition techniques, the conductive layer 358 may comprise a non-planar upper surface.

FIG.
9F illustrates planar processing of the conductive layer 358 so as to form the laterally extending member 331 of the air bridge structure 330. In one embodiment, a chemical-mechanical polishing (CMP) process may be utilized to evenly planarize the non-planar surface of the conductive layer 358. The CMP planarization process applies a substantially uniform material removal rate across the plane of the substrate surface 304, which substantially ensures that the conductive layer 358 is uniformly reduced in height across the plane of the substrate surface 304 until the support layer remnants 350 are reached. Unfortunately, as previously described in FIG. 8, the planar processing of the conductive layer 358 may create work-hardened surfaces and expose grain boundaries adjacent the upper surface of the conductive layer 358. Consequently, the planar processing may adversely affect the electrical properties of the air bridge structure 330. Advantageously, the grain boundaries will be allowed to coalesce in FIG. 9G so as to improve the crystalline structure, which improves the electrical properties of the air bridge structure 330.

FIG. 9F further illustrates one embodiment of forming the laterally extending member 331 of the air bridge structure 330. As illustrated in FIG. 9F, a first distal end 333a of the laterally extending member 331 forms an electrical contact with the upper portion of the first vertically extending leg 334, and a second distal end 333b of the laterally extending member 331 forms an electrical contact with the upper portion of the second vertically extending leg 336. It should be appreciated that the air bridge structure 330 electrically interconnects the first mounting region 312 with the second mounting region 322, wherein the formation of the laterally extending member 331 and the vertically extending legs 334, 336 forms the electrical interconnection between the first and second mounting regions 312, 322 as illustrated in FIG. 9F.

FIG.
9G illustrates the removal of the support layer remnants 350 and the temporary support structure 356 in a manner so as to leave the air bridge structure 330 intact. As illustrated in FIG. 9G, the laterally extending member 331 is suspended above the substrate 302 via the vertically extending legs 334, 336 and positioned so as to laterally extend between the upper portions of the vertically extending legs 334, 336. In one embodiment, the support material may be removed in a generally known manner using pattern and etch techniques, including acid washes.

Unfortunately, the removal of the support material may deform the surfaces of the laterally extending member 331 and the vertically extending legs 334, 336, which may adversely affect the electrical properties of the air bridge structure 330. Therefore, at this point in the formation process, the air bridge components 331, 334, 336 are processed in a manner so as to coalesce the exposed grain boundaries, so as to improve the crystalline structure of the components 331, 334, 336 and reduce the quantity of grain boundaries in a manner as previously described with reference to FIG. 8. Advantageously, as a result of coalescing the grain boundaries of the air bridge components 331, 334, 336, the air bridge structure 330 has improved electrical properties, which results in a more reliable device as previously described in FIG. 8.

FIG. 9H illustrates the formation of an adherent coating 360 on the surfaces of the air bridge structure 330. The adherent coating 360 may be deposited in a manner as previously described. It should be appreciated that the adherent coating may further enhance the electrical properties of the air bridge structure 330 in a manner as previously described. It should also be appreciated that the adherent coating 360 may comprise the scope and functionality, including the dimensions and shape, of the adherent coating 62 as described with reference to FIGS. 4-7.
Moreover, it should be appreciated that the adherent coating 62, in one embodiment, may comprise a protective coating that serves as a protective barrier which prevents surface diffusion as well as inhibiting contaminants, such as oxygen, from reaching the core 60.

Advantageously, electromigration and electron scattering may be deterred by coalescing grain boundaries and/or re-crystallizing the crystalline microstructure of the electrical interconnect or air bridge structure 330. By re-crystallizing the structural composition of the air bridge structure 330, the quantity of exposed grain boundaries at the surfaces may be reduced. In one aspect, the process of re-crystallization may comprise improving the grain structure of the conductive material, during which the deformed grains, strain hardened by planar processing, become new unstrained grains.

As previously described, re-crystallization promotes grain development, wherein individual grains coalesce to form larger and fewer grains. As the material re-crystallizes, minute crystals may appear in the grains of the microstructure. These minute crystals may comprise the same composition and lattice structure as the original undeformed grains, which may comprise substantially uniform dimensions. The minute crystals may nucleate at the most drastically deformed portions of the grain, such as the grain boundaries. The cluster of atoms from which the re-crystallized grains are formed may comprise a nucleus. Re-crystallization takes place by a combination of nucleation of the strain-free grains and the development of these nuclei.

FIG. 10 illustrates one embodiment of a method 400 that may be used to form the electrical interconnect or air bridge structure 330 as described in FIGS. 8 and 9A-9H.
The method 400 initiates in a start state 402 and proceeds to a state 404, wherein the mandril 104 or temporary support structure 356 may be formed on the upper surface 304 of the substrate 302 in a generally known manner as previously described in FIGS. 4 and 9A-9H. The temporary support structure 356 may be placed between the first and second mounting regions 312, 322 so as to distally extend above the surface 304. In one aspect, the purpose of the temporary support structure 356 is to provide a supporting surface that substantially supports the laterally extending member 331 during formation of the air bridge structure 330. The temporary support structure 356 may be patterned, etched, and formed using various materials and various deposition techniques that are generally known in the art, in a manner such that the temporary support structure 356 may be removed to expose a lower surface of the air bridge structure 330.

After forming the temporary support structure 356 in the state 404, the method 400 advances to a state 406, wherein the temporary support structure 356 is modified, using conventional etching techniques, to form vias 352, 354 in the temporary support structure 356 that vertically extend from an upper surface of the temporary support structure 356 to the respective first and second mounting regions 312, 322. These vias correspond to the first and second vertically extending legs 334, 336. Also, in the state 406, the height of the temporary support structure 356 may be modified so as to comprise a reduced height 372 in a manner as described with reference to FIG. 9D.
Advantageously, the reduced height 372 of the temporary support structure 356 allows the laterally extending member 331 of the air bridge structure 330 to be formed when the conductive layer 358 is deposited and planarized in a manner as described herein below.

Next, the method 400 proceeds to a state 408, wherein the conductive layer 358 is formed in an overlying manner on the temporary support structure 356 such that the conductive layer 358 horizontally extends across the upper surface of the temporary support structure 356 between the vias 352, 354. The deposition of the conductive layer 358 is used to form the air bridge structure 330, including the laterally extending member 331 and the first and second vertically extending legs 334, 336. In one aspect, the conductive layer 358 vertically extends through the vias 352, 354 to form the first and second legs 334, 336 and contact the first and second mounting regions 312, 322 of the first and second circuit elements 310, 320. The conductive layer 358 preferably comprises a highly conductive material having a relatively small ratio of [rho]/E, such as copper or silver, thereby providing the air bridge structure 330 with a relatively small resistance and a reduced tendency to sag as previously mentioned above.

The method then advances to a state 410, wherein the conductive layer 358 is modified so as to define the shape of the air bridge structure 330. For example, the laterally extending member 331 of the air bridge structure 330 may be planarized using a conventional chemical mechanical planarization (CMP) process. As previously described, the CMP process is used to remove material across a surface in a substantially uniform manner.
As is known in the art, the formation of substantially uniform surfaces is desirable for the subsequent deposition of additional layers with uniform thickness.

Next, the method 400 proceeds to a state 412, wherein the temporary support structure 356 is removed in a generally known manner so as to expose the lower surfaces of the air bridge structure 330. Once the temporary support structure 356 is removed in the state 412, the conductive layer 258 material of the air bridge structure 330 is allowed to re-crystallize in a state 414. According to the Applicant, some materials that may be used to form the air bridge structure 330 may re-crystallize at room temperature. For example, copper following a CMP process may re-crystallize at 200° C. In contrast, for example, a copper-tin (0.24%) alloy may require a heat treatment above 375° C. for re-crystallization.

In one aspect, a pre-determined time allotment may be granted for the purpose of microstructure re-crystallization of the material used to form the air bridge structure 330. In another aspect, it may be desirable to utilize environmental control techniques to control microstructure re-crystallization. For example, the integrated circuit 300 including the air bridge structure 330 may be placed in an environment conducive to allowing desirable re-crystallization. The environment may comprise a vacuum, wherein contaminants and impurities are removed from the atmosphere. Other factors that may influence desirable re-crystallization may include temperature and pressure control, wherein various heat treatments and pressure treatments may be utilized to control microstructure re-crystallization. It should be appreciated that the air bridge structure 330 may comprise similar features, including dimensions, shape, and functionality, as described with reference to the bridge section 50 referenced in FIGS.
4-7.

After allowing microstructure re-crystallization in the state 414, the method 400 advances to a state 416, wherein disposing an adherent coating on the exposed surfaces of the electrical interconnect may be performed. In one aspect, disposing the adherent coating may comprise depositing a layer of conductive material selected from the group comprising the noble metals gold, platinum, palladium, and iridium, and the reactive elements titanium, zirconium, and hafnium. Furthermore, the conductive material of the coating may be deposited using a known electroless plating process or a known CVD process. It should be appreciated that, if one of the reactive elements is used, zirconium may be preferred due to its low solubility in both copper and silver.

In one embodiment, disposing the adherent coating may comprise depositing an insulating material. For example, the insulating material may comprise an organic material, such as parylene, which can be deposited using a known vapor deposition polymerization process. Alternatively, the insulating material can comprise an inorganic material, such as Si3N4, which can be deposited using PECVD. In one aspect, diffusion of the adherent coating may be inhibited by not exposing the air bridge structure 330 to elevated temperatures. Preferably, the adherent coating is deposited in a manner such that the material only deposits on the air bridge structure 330. It should be appreciated that the adherent coating may comprise similar features, including dimensions, shape, and functionality, as described with reference to the coating 62 referenced in FIGS. 4-7. It should also be appreciated that, after applying the adherent coating on the air bridge structure 330, a heat treatment may be used to improve adhesion along the grain boundaries 332 on the surfaces of the air bridge structure 330, which may further enhance the electromigration resistance of the air bridge structure 330.
After depositing the adherent coating on the air bridge structure 330 in the state 416, the method 400 proceeds to terminate in an end state 418.

In one embodiment, the mechanical stability of the air bridge structure 330 may be increased by back-filling the spaces or vacancies between the air bridge structure 330 and the substrate 302, at least in part, with an insulating material. In addition, a foamed polymer may be used in a manner as disclosed in the Applicant's issued patent entitled "Method of Forming Foamed Polymeric Material for an Integrated Circuit" (U.S. Pat. No. 6,077,792), which is hereby incorporated by reference in its entirety. The issued patent discloses a method of forming an insulating material, such as a polymeric material, for use in an integrated circuit, wherein at least a portion of the polymeric material is converted to a foamed polymeric material. The converting of the polymeric material includes exposing at least a portion of the polymeric material to a supercritical fluid. The integrated circuit may include a substrate and a foamed polymeric material on at least a portion of the substrate. The integrated circuit may further include a conductive layer adjacent the foamed polymeric material.

By allowing the microstructure of the electrical interconnect or air bridge structure 330 to re-crystallize in the state 414, the quantity of exposed grain boundaries may advantageously be reduced. A reduction in the quantity of exposed grain boundaries may lead to an improvement in the electrical properties, including enhanced electromigration resistance and enhanced diffusion resistance, of the electrical interconnect or air bridge structure 330.
Enhanced electrical properties may improve the reliability of the electrical interconnect or air bridge structure 330 by improving the crystalline orientation and compositional integrity of the microstructure.

Although the foregoing description of the preferred embodiment of the present invention has shown, described, and pointed out the fundamental novel features of the invention as applied to this embodiment, it will be understood that various omissions, substitutions, and changes in the form and details of the device illustrated may be made by those skilled in the art without departing from the spirit of the present invention. Consequently, the scope of the invention should not be limited to the foregoing description, but should be defined by the appended claims.
Systems, apparatuses, and methods related to arithmetic and logical operations in a multi-user network are described. Circuitry may be part of a pool of shared computing resources in a multi-user network. Data (e.g., one or more bit strings) received by the circuitry may be selectively operated upon. The circuitry can perform operations on data to convert the data between one or more formats, such as floating-point and/or universal number (e.g., posit) formats and can further perform arithmetic and/or logical operations on the converted data. For instance, the circuitry may be configured to receive a request to perform an arithmetic operation and/or a logical operation using at least one posit bit string operand. The request can include a parameter corresponding to performance of the operation. The circuitry can perform the arithmetic operation and/or the logical operation based, at least in part, on the parameter.
1. An apparatus, comprising: circuitry communicatively coupled to a pool of shared computing resources deployed in a multi-user network, wherein the circuitry is configured to: receive a request to perform an arithmetic operation, a logical operation, or both using at least one posit bit string operand, wherein the request includes a parameter corresponding to performance of the operation using the at least one posit bit string; and perform the arithmetic operation, the logical operation, or both using the at least one posit bit string operand based, at least in part, on the received parameter.

2. The apparatus of claim 1, wherein the circuitry is configured to access an amount of computing resources in the pool of shared computing resources specified by the parameter to perform the arithmetic operation, the logical operation, or both using the at least one posit bit string operand.

3. The apparatus of claim 1, wherein the circuitry is configured to perform the arithmetic operation, the logical operation, or both using the at least one posit bit string operand within a particular amount of time specified by the parameter.

4. The apparatus of any one of claims 1-3, wherein the parameter corresponds to a bit length of the at least one posit bit string operand, a quantity of exponent bits of the at least one posit bit string operand, or both.

5. The apparatus of any one of claims 1-3, wherein the logic circuitry is configured to: receive at least one floating-point bit string; and generate the at least one posit bit string by converting the at least one floating-point bit string to a posit bit string prior to performance of at least one of the arithmetic operation and the logical operation.

6. The apparatus of any one of claims 1-3, wherein the circuitry is further configured to, responsive to receipt of the request to perform the operation, request allocation of an amount of processing resources and an amount of memory resources from the pool of shared computing resources for performance of the arithmetic operation, the logical operation, or both using the at least one posit bit string operand.

7. The apparatus of any one of claims 1-3, wherein the circuitry is further configured to retrieve the at least one posit bit string operand from a memory location in the pool of shared computing resources prior to performance of the arithmetic operation, the logical operation, or both.

8. A system, comprising: a multi-user network comprising a pool of shared computing resources; a computing node configured to access the multi-user network; and circuitry communicatively coupled to the pool of shared computing resources, wherein the circuitry is configured to: receive, from the computing node, a request to perform an arithmetic operation, a logical operation, or both using at least one posit bit string operand; receive, from the computing node, a parameter corresponding to performance of the operation using the at least one posit bit string; and perform, using the pool of shared computing resources, the arithmetic operation, the logical operation, or both based, at least in part, on the request and the received parameter.

9. The system of claim 8, wherein the circuitry is configured to: request allocation of an amount of computing resources from the pool of shared computing resources for performance of the arithmetic operation, the logical operation, or both based on the received parameter; and cause the arithmetic operation, the logical operation, or both to be performed using the allocated amount of computing resources.

10. The system of any one of claims 8-9, wherein the parameter comprises an amount of time in which performance of the arithmetic operation, the logical operation, or both is permitted, and wherein the circuitry is configured to cause the arithmetic operation, the logical operation, or both to be performed within the permitted amount of time.

11. The system of any one of claims 8-9, wherein the parameter comprises a first bit string length and a first exponent bit length of a first posit bit string operand and a second bit string length and a second exponent bit length of a second posit bit string operand of the at least one posit bit string, and wherein the circuitry is configured to set the bit string length and the exponent bit length of the at least one posit bit string operand based on the parameter prior to performance of the arithmetic operation, the logical operation, or both.

12. The system of any one of claims 8-9, wherein the circuitry is configured to: receive at least one floating-point bit string; and generate the at least one posit bit string by converting the at least one floating-point bit string to a posit format prior to performance of the arithmetic operation, the logical operation, or both.

13. The system of any one of claims 8-9, wherein the circuitry is configured to access a memory location in the pool of shared computing resources to retrieve the at least one posit bit string operand prior to performance of the arithmetic operation, the logical operation, or both.

14. An apparatus, comprising: an agent deployed in a multi-user network, the agent having processing resources and being executable by hardware, wherein the agent is configured to: receive a parameter corresponding to performance of an arithmetic operation, a logical operation, or both using one or more posit bit strings; receive a request to initiate the arithmetic operation, the logical operation, or both using the one or more posit bit strings; and cause the arithmetic operation, the logical operation, or both to be performed using the one or more posit bit strings based, at least in part, on the received parameter.

15. The apparatus of claim 14, wherein the parameter comprises a parameter corresponding to an amount of time in which to perform the operation, a parameter corresponding to an amount of processing resources with which to perform the operation, a parameter corresponding to a bit length of the one or more posit bit strings, a parameter corresponding to a quantity of exponent bits of the one or more posit bit strings, or combinations thereof.

16. The apparatus of claim 14, wherein the agent is further configured to allocate, based on the parameter, computing resources available to the multi-user network for performance of the arithmetic operation, the logical operation, or both using the one or more posit bit strings.

17. The apparatus of any one of claims 14-16, further comprising logic circuitry communicatively coupled to the agent, wherein the agent is further configured to cause the one or more posit bit strings to be transferred to the logic circuitry, and wherein the logic circuitry is configured to perform the arithmetic operation, the logical operation, or both using the one or more posit bit strings.

18. The apparatus of claim 17, wherein the agent is further configured to cause the logic circuitry to retrieve the one or more posit bit strings from a memory resource accessible by the multi-user network.

19. The apparatus of any one of claims 14-16, wherein the agent is further configured to cause one or more floating-point bit strings to be converted to a posit format to generate the one or more posit bit string operands prior to performance of the arithmetic operation, the logical operation, or both using the one or more posit bit strings.

20. A system, comprising: a virtual computing cluster (VCC); and an agent deployed in the VCC, the VCC having computing resources, the agent being executable by hardware, wherein the agent is configured to: receive a request to perform an arithmetic operation, a logical operation, or both between a first posit bit string operand and a second posit bit string operand; allocate an amount of the computing resources available for performance of the arithmetic operation, the logical operation, or both between the first posit bit string operand and the second posit bit string operand; and cause the arithmetic operation, the logical operation, or both to be performed between the first posit bit string operand and the second posit bit string operand.

21. The system of claim 20, further comprising logic circuitry communicatively coupled to the VCC, wherein the agent is further configured to: access the logic circuitry; and cause the first posit bit string operand and the second posit bit string operand to be loaded into the logic circuitry, and wherein the logic circuitry is configured to perform the arithmetic operation, the logical operation, or both between the first posit bit string operand and the second posit bit string operand.

22. The system of claim 20, wherein the logic circuitry comprises at least one of an application-specific integrated circuit and a field-programmable gate array.

23. The system of claim 20, further comprising logic circuitry communicatively coupled to the VCC, wherein the agent is further configured to: access the logic circuitry; and cause a first floating-point bit string and a second floating-point bit string to be loaded into the logic circuitry, and wherein the logic circuitry is configured to: convert the first floating-point bit string to a posit format to generate the first posit bit string operand; convert the second floating-point bit string to the posit format to generate the second posit bit string operand; and perform the arithmetic operation, the logical operation, or both between the first posit bit string operand and the second posit bit string operand.

24. The system of any one of claims 20-23, wherein the agent is configured to: receive a processing resource parameter corresponding to performance of the arithmetic operation, the logical operation, or both; and allocate the amount of the computing resources available for performance of the arithmetic operation, the logical operation, or both based, at least in part, on the processing resource parameter.

25. The system of any one of claims 20-23, wherein the agent is configured to: receive a processing time parameter corresponding to performance of the arithmetic operation, the logical operation, or both; and allocate an amount of time available for performance of the arithmetic operation, the logical operation, or both based, at least in part, on the processing time parameter.

26. The system of any one of claims 20-23, wherein the agent is configured to: receive a posit precision parameter corresponding to performance of the arithmetic operation, the logical operation, or both; set bit lengths of the first posit bit string operand and the second posit bit string operand based, at least in part, on the posit precision parameter; and set exponent bit lengths of the first posit bit string operand and the second posit bit string operand based, at least in part, on the posit precision parameter.

27. The system of any one of claims 20-23, wherein the agent is deployed on a hypervisor deployed within the VCC, on a virtual computing instance deployed within the VCC, or on a container running on a virtual computing instance deployed within the VCC.
Arithmetic and Logical Operations in Multi-User Networks

Technical Field

The present disclosure relates generally to semiconductor memory and methods, and more particularly, to apparatuses, systems, and methods related to arithmetic and logical operations in multi-user networks.

Background

Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic systems. There are many different types of memory, including volatile and non-volatile memory. Volatile memory can require power to maintain its data (e.g., host data, error data, etc.) and includes random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), and thyristor random access memory (TRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), such as spin torque transfer random access memory (STT RAM), among others.

A memory device may be coupled to a host (e.g., a host computing device) to store data, commands, and/or instructions for use by the host while the computer or electronic system is operating. For example, data, commands, and/or instructions can be transferred between the host and the memory device during operation of a computing or other electronic system.

The host and/or the memory device may operate in a multi-user network (e.g., a software-defined data center) in which virtual machines (VMs), virtual workloads, data compute nodes, clusters, containers, and the like are deployed. A VM is a software implementation of a computer that executes application software analogously to a physical computer.
VMs have the advantage of not being bound to physical resources, which allows VMs to be moved around and scaled to meet changing demands of an enterprise without affecting the use of the enterprise's applications. A VM can be deployed on a hypervisor provisioned with a pool of computing resources (e.g., processing resources, memory resources such as memory devices, etc.).

Brief Description of the Drawings

FIG. 1 is a functional block diagram in the form of a computing system including an apparatus including a host and acceleration circuitry in accordance with a number of embodiments of the present disclosure.

FIG. 2A is a functional block diagram in the form of a computing system including an apparatus including a host and a memory device in accordance with a number of embodiments of the present disclosure.

FIG. 2B is a functional block diagram in the form of a computing system deployed in a multi-user network including a host, a memory device, an application-specific integrated circuit, a field-programmable gate array, and a virtual computing cluster in accordance with a number of embodiments of the present disclosure.

FIG. 3 is an example of an n-bit posit with es exponent bits.

FIG. 4A is an example of positive values for a 3-bit posit.

FIG. 4B is an example of posit construction using two exponent bits.

FIG. 5 is a functional block diagram in the form of acceleration circuitry in accordance with a number of embodiments of the present disclosure.

FIG. 6 is a diagram of a host, a hypervisor, a plurality of virtual computing instances, and an agent in accordance with a number of embodiments of the present disclosure.

FIG. 7A is a diagram of a virtual computing cluster in accordance with a number of embodiments of the present disclosure.

FIG. 7B is another diagram of a virtual computing cluster in accordance with a number of embodiments of the present disclosure.

FIG. 8 is a diagram of an apparatus in accordance with a number of embodiments of the present disclosure.

FIG. 9 is a diagram of a machine in accordance with a number of embodiments of the present disclosure.

FIG. 10 is a flow diagram representing an example method involving arithmetic and logical operations in a multi-user network in accordance with a number of embodiments of the present disclosure.

Detailed Description

Systems, apparatuses, and methods related to arithmetic and logical operations in multi-user networks are described. The circuitry can be part of a pool of shared computing resources in a multi-user network. Data (e.g., one or more bit strings) received by the circuitry can be selectively operated upon. The circuitry can perform operations on the data to convert the data between one or more formats, such as floating-point and/or universal number (e.g., posit) formats, and can further perform arithmetic and/or logical operations on the converted data. For instance, the circuitry can be configured to receive a request to perform an arithmetic operation and/or a logical operation using at least one posit bit string operand. The request can include a parameter corresponding to the operation to be performed.
The circuitry can perform the arithmetic operation and/or the logical operation based, at least in part, on the parameter.

Computing systems may perform a wide range of operations that can include carrying out various calculations, which can require varying degrees of precision. However, computing systems and/or multi-user networks have a finite amount of resources with which to perform such operations. For example, the memory resources in which the operands used in the calculations are stored, and/or the processing resources used to perform such calculations, can be limited in a computing system or a multi-user network. In order to facilitate performance of operations on operands stored by a computing system or multi-user network within the constraints imposed by its finite resources, in some approaches the operands are stored in particular formats. One such format is referred to, for simplicity, as the "floating-point" format, or "float" (e.g., the IEEE 754 floating-point format).

Under the floating-point standard, a bit string (e.g., data, a bit string that can represent a number, such as a binary number string) is represented in terms of three sets of integers or sets of bits: a set of bits referred to as the "base," a set of bits referred to as the "exponent," and a set of bits referred to as the "mantissa" (or significand). The sets of integers or bits that define the format in which a binary number string is stored may be referred to herein as a "format." For example, the three sets of integers or bits described above (e.g., the base, exponent, and mantissa) that define a floating-point bit string may be referred to as a format (e.g., a first format). As described in more detail below, a posit bit string can include four sets of integers or sets of bits (e.g., a sign, a regime, an exponent, and a mantissa), which can also be referred to as a "format" (e.g., a second format).
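To make the three floating-point bit sets concrete, the following Python sketch (illustrative only; the function name is ours, not the disclosure's) unpacks an IEEE 754 double-precision value into its sign, exponent, and mantissa fields:

```python
import struct

def float_fields(x: float):
    """Unpack an IEEE 754 double into its sign, exponent, and mantissa bit sets."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))  # raw 64-bit pattern
    sign = bits >> 63                     # 1 bit
    exponent = (bits >> 52) & 0x7FF       # 11 bits, biased by 1023
    mantissa = bits & ((1 << 52) - 1)     # 52 bits (implicit leading 1 omitted)
    return sign, exponent, mantissa

# 1.0 is stored as sign 0, biased exponent 1023, mantissa 0
print(float_fields(1.0))
```

Note that the exponent is stored with a bias (1023 for doubles), so the field value 1023 encodes an unbiased exponent of zero.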
In addition, under the floating-point standard, two infinities (e.g., +∞ and -∞) and/or two kinds of "NaN" (not-a-number), a quiet NaN and a signaling NaN, can be included in a bit string.

The floating-point standard has been used in computing systems for a number of years and defines arithmetic formats, interchange formats, rounding rules, operations, and exception handling for computation carried out by many computing systems. Arithmetic formats can include binary and/or decimal floating-point data, which can include finite numbers, infinite values, and/or special NaN values. Interchange formats can include encodings (e.g., bit strings) that can be used to exchange floating-point data. Rounding rules can include a set of properties that can be satisfied when rounding numbers during arithmetic operations and/or conversion operations. Floating-point operations can include arithmetic operations and/or other computational operations, such as trigonometric functions. Exception handling can include indications of exceptional conditions, such as division by zero, overflow, etc.

An alternative format to floating-point is referred to as the "universal number" (unum) format. There are several forms of unum formats, Type I unums, Type II unums, and Type III unums, which can be referred to as "posits" and/or "valids." Type I unums are a superset of the IEEE 754 standard floating-point format that use a "ubit" at the end of the fraction to indicate whether a real number is an exact float or lies in an interval between adjacent floats. The sign, exponent, and fraction bits of a Type I unum take their definitions from the IEEE 754 floating-point format; however, the length of the exponent and fraction fields of Type I unums can vary dramatically, from a single bit to a maximum user-definable length.
Accordingly, while Type I unums can behave similarly to floating-point numbers by taking their sign, exponent, and fraction bits from the IEEE 754 standard floating-point format, the variable bit length of the exponent and fraction bits of a Type I unum can require additional management in comparison to floats.

Type II unums are generally incompatible with floats, which permits a clean, mathematical design based on projective reals. A Type II unum can include n bits and can be described in terms of a "u-lattice" in which the quadrants of a circular projection are populated with an ordered set of 2^(n-3) - 1 real numbers. The values of a Type II unum can be reflected about an axis bisecting the circular projection such that positive values lie in an upper right quadrant of the circular projection, while their negative counterparts lie in an upper left quadrant of the circular projection. The lower half of the circular projection representing a Type II unum can include reciprocals of the values that lie in the upper half of the circular projection. Type II unums generally rely on a lookup table for most operations. For example, the size of the lookup table can limit the efficacy of Type II unums in some circumstances. However, Type II unums can provide improved computational functionality in comparison with floats under some conditions.

The Type III unum format is referred to herein as a "posit format" or, for simplicity, a "posit." In contrast to floating-point bit strings, posits can, under certain conditions, allow for a broader dynamic range and a higher accuracy (e.g., precision) than floating-point numbers with the same bit width.
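As a rough illustration of the posit (Type III unum) format, the following Python sketch decodes an n-bit posit bit pattern (sign, regime, es exponent bits, fraction) into a real value. It follows the standard posit definition rather than any circuitry disclosed here, and it omits encoding and rounding; the function name is ours:

```python
def decode_posit(bits: int, n: int = 8, es: int = 0) -> float:
    """Decode an n-bit posit bit pattern with es exponent bits to a float.
    Illustrative sketch only; handles the special cases 0 and NaR explicitly."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")  # NaR ("not a real")
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask  # negative posits are two's complements
    # Regime: run of identical bits following the sign bit
    rest = bits & ((1 << (n - 1)) - 1)
    s = format(rest, f"0{n - 1}b")
    r0 = s[0]
    run = len(s) - len(s.lstrip(r0))
    k = run - 1 if r0 == "1" else -run
    body = s[run + 1:]            # bits after the regime terminator
    exp = int(body[:es], 2) if es and body[:es] else 0
    frac_bits = body[es:]
    frac = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    useed = 1 << (1 << es)        # useed = 2^(2^es)
    return sign * (useed ** k) * (2 ** exp) * (1 + frac)

print(decode_posit(0b01000000))  # 1.0
```

For an 8-bit, es = 0 posit, the pattern 0b01000000 decodes to 1.0, 0b01100000 to 2.0, and 0b00100000 to 0.5, matching the positive-value construction illustrated in FIGS. 4A-4B.

```text
(end of sketch)
```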
This can allow calculations performed by a computing system or multi-user network to be carried out at a higher rate (e.g., faster) when using posits than when using floating-point numbers, which, in turn, can improve the performance of the computing system or multi-user network by, for example, reducing the number of clock cycles used in performing the calculations, thereby reducing the processing time and/or power consumed in performing such operations. In addition, the use of posits in computing systems or multi-user networks can allow for higher accuracy and/or precision than floating-point numbers, which can further improve the functioning of a computing system or multi-user network in comparison to some approaches (e.g., approaches that rely upon floating-point bit strings).

Embodiments herein are directed to hardware circuitry (e.g., logic circuitry, arithmetic logic units, field-programmable gate arrays, application-specific integrated circuits, etc.) configured to perform various operations using bit strings to improve the overall functioning of a computing device and/or multi-user network (e.g., a software-defined data center, cloud computing environment, etc.). For example, embodiments herein are directed to hardware circuitry that is deployed in a computing device or multi-user network and is configured to perform conversion operations to convert the format of a bit string from a first format (e.g., a floating-point format) to a second format (e.g., a unum format, a posit format, etc.).
Once the bit string has been converted to the second format, the circuitry can be operated to perform operations (e.g., arithmetic operations, logical operations, bit-wise operations, vector operations, etc.) on the converted bit string.

In some embodiments, the circuitry can be further operated to convert the result of the operation back to the first format (e.g., to a floating-point format), which can, in turn, be transferred to different circuitry (e.g., a host, a memory device, a portion of the shared computing resources, etc.) of the computing system or multi-user network. By performing operations in this manner, the circuitry can improve the accuracy and/or precision of the operations performed, improve the speed at which the operations are performed, and/or reduce the storage space required by the bit strings prior to, during, or subsequent to performance of arithmetic, logical, or other operations, thereby helping to improve the performance of the computing system or multi-user network.

In some embodiments, the circuitry can be deployed as part of a pool of shared computing resources in a multi-user network. As used herein, a "multi-user network" generally refers to a cluster of computing systems in which one or more hosts (e.g., host computing systems) are configured to provide computing functionality via a network such as the Internet. Multi-user networks are dynamic in nature. For example, virtual computing instances (VCIs) and/or various application services can be created, used, moved, or destroyed within a multi-user network. When a VCI is created (e.g., when a container is initialized), various processes and/or services start running and consuming resources.

In a multi-user network, resources may be accessed by multiple users in disparate geographic locations, which may not necessarily be the same geographic locations in which the computing resources are located.
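The request-plus-parameter flow described above can be sketched as follows. This is a hypothetical model, not the disclosed circuitry: every name here is ours, and plain Python floats stand in for posit bit string operands:

```python
from dataclasses import dataclass

@dataclass
class OperationRequest:
    op: str                      # requested operation, e.g. "add" or "mul"
    operands: list               # operands (floats stand in for posit bit strings)
    bit_length: int = 8          # parameter: bit length of each posit operand
    es: int = 2                  # parameter: number of exponent bits
    time_limit_s: float = 1.0    # parameter: allowed processing time

def perform(request: OperationRequest) -> float:
    """Dispatch the requested arithmetic operation according to the request."""
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    a, b = request.operands
    return ops[request.op](a, b)

print(perform(OperationRequest("add", [1.5, 2.0])))  # 3.5
```

In an actual embodiment, the parameters (bit length, exponent bits, time limit, resource amounts) would govern conversion of the operands to the posit format and the allocation of shared computing resources before the operation is dispatched.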
As used herein, "resources" are physical or virtual components that have limited availability within a computer or multi-user network. For example, resources include processing resources, memory resources, power, and/or input/output resources. A multi-user network can include a pool of resources (e.g., processing resources, memory resources, etc.) shared by multiple users. A multi-user network may alternatively be referred to herein as a software-defined data center or a cloud computing environment.In some embodiments, the circuitry can be accessed by a VCI as part of a shared pool of computing resources available to the VCI. For example, the circuitry can be deployed in a memory device that is provided as part of a shared pool of computing resources available to a multi-user network. Embodiments are not so limited, however, and the circuitry can be deployed in a host, a blade server, a graphics processing unit, a field-programmable gate array, an application-specific integrated circuit, or another physical or virtualized hardware component provided as part of the shared pool of computing resources available to a multi-user network.The term "virtual computing instance" (VCI) covers a range of computing functionality. A VCI can include a data compute node such as a virtual machine (VM) running on a hypervisor. In contrast, a container can run on a host operating system without a hypervisor or separate operating system, such as a container that runs within Linux. A container can be provided by a VM that includes a container virtualization layer (e.g., Docker). A VM generally refers to an isolated end-user space instance that can be executed within a virtualized environment. Technologies other than hardware virtualization that can provide isolated end-user space instances may also be referred to as VCIs.
The term "VCI" encompasses these instances, combinations of different types of VCIs, and the like.In some embodiments, a VM operates on a host with its own guest operating system, using resources of the host that are virtualized by virtualization software (for example, a hypervisor, a virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. In contrast, some containers are constructed to run on top of a host operating system without the need for a hypervisor or separate guest operating system.The host operating system can use namespaces to isolate containers from one another and can therefore provide operating-system-level separation of different groups of applications that operate in different containers. This separation is analogous to the VM separation that can be provided in a hypervisor-virtualized environment that virtualizes system hardware, and it can therefore be regarded as a form of virtualization that isolates different groups of applications operating in different containers. Such containers can be "lightweight" compared to VMs, at least because they share an operating system rather than operating on their own guest operating systems.Multiple VCIs can be configured to communicate with one another in a multi-user network. In such a system, information can be propagated from an end user to at least one of the VCIs in the system, between VCIs in the system, and/or between at least one of the VCIs in the system and a non-virtualized physical host.Containerized cloud-native applications can be used to accelerate application delivery in multi-user networks.
As used herein, "containerized" or "containerization" refers to a virtualization technique in which, as an alternative to complete machine virtualization, an application (or a portion of an application, such as a stream corresponding to the application) is encapsulated into a container (for example, Docker, a Linux container, etc.). Because containerization can include loading the application onto a VCI, the application can run on any suitable physical machine without concerns about application dependencies. In addition, as used herein, a "cloud-native application" refers to an application (for example, a computer program, software package, etc.) assembled as a containerized workload in a container deployed in a multi-user network. A "containerized workload" refers to a computing architecture in which an application is structured as a collection of loosely coupled (e.g., containerized) services. A containerized workload architecture can allow for improved application modularity, scalability, and continuous deployment in comparison to traditional application development environments.In embodiments in which circuitry for converting bit strings between various formats and/or for performing arithmetic and/or logical operations using bit strings is provided in a multi-user network, such operations can be requested and/or performed with the assistance of one or more VCIs and/or containers (for example, containerized workloads). For example, one or more VCIs or containers can be deployed in a multi-user network and can be configured to access the circuitry to request operations that convert bit strings between various formats and/or to request arithmetic and/or logical operations using bit strings.In some embodiments, operations that convert bit strings between various formats and/or arithmetic and/or logical operations using bit strings can be performed based on parameters received by the multi-user network.
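The shape of such a parameterized request can be sketched as follows. This is a minimal Python illustration; the field names and values are assumptions chosen for this sketch, not an interface defined by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class OperationRequest:
    """Illustrative shape of a parameterized operation request submitted
    by a VCI or container; all field names here are hypothetical."""
    operation: str            # e.g., "convert", "multiply", "dot_product"
    bit_length: int           # total operand width to use for the operation
    exponent_bit_length: int  # exponent width of the target format
    processing_share: float   # fraction of pooled processing resources to use
    time_budget_ms: int       # amount of time allocated for the operation

# A VCI or container might accompany a request with parameters such as:
req = OperationRequest("multiply", bit_length=16, exponent_bit_length=1,
                       processing_share=0.25, time_budget_ms=50)
print(req.operation, req.bit_length)
```

Structuring the request this way lets a user trade precision (operand and exponent widths) against resource consumption, which is the fine-tuning described below.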
For example, a request to perform an operation that converts a bit string between various formats and/or a request to perform arithmetic and/or logical operations on a bit string can be accompanied by one or more parameters corresponding to performance of those operations. The parameters can include the amount of processing resources to be used for the operation, the amount of time to be allocated for the operation, the bit length of the operands to be used for the operation, the exponent bit length of the operands to be used for the operation, and so on.By performing, in a multi-user network and based on such parameters, operations that convert bit strings between various formats and/or arithmetic and/or logical operations using bit strings, an application developer or other user of the multi-user network can fine-tune resource consumption when such operations are requested. In comparison to approaches that do not take such parameters into account, this can allow a reduction in both the monetary and resource costs associated with large computations in a multi-user network.In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and structural changes may be made without departing from the scope of the present disclosure.As used herein, designators such as "N", "M", etc.
particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an", and "the" can include both singular and plural referents, unless the context clearly dictates otherwise. In addition, "a number of", "at least one", and "one or more" (e.g., a number of memory banks) can refer to one or more memory banks, whereas a "plurality of" is intended to refer to more than one of such things.Furthermore, the words "can" and "may" are used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term "include," and derivations thereof, means "including, but not limited to." The terms "coupled" and "coupling" mean to be directly or indirectly connected physically, or for access to and movement (transmission) of commands and/or data, as appropriate to the context. The terms "bit string," "data," and "data value" are used interchangeably herein and can have the same meaning, as appropriate to the context.The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures can be identified by the use of similar digits. For example, 120 can reference element "20" in FIG. 1, and a similar element can be referenced as 220 in FIG. 2. A group or plurality of similar elements or components can generally be referred to herein with a single element number. For example, a plurality of reference elements 433-1, 433-2, ..., 433-N can be referred to generally as 433.
As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, the proportion and/or the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present disclosure and should not be taken in a limiting sense.FIG. 1 is a functional block diagram in the form of a computing system 100 including a device that includes a host 102 and acceleration circuitry 120 in accordance with a number of embodiments of the present disclosure. As used herein, a "device" can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, one or more dies, one or more modules, one or more devices, or one or more systems, for example. Each of the components (e.g., the host 102, the acceleration circuitry 120, the logic circuitry 122, and/or the memory resource 124) can be separately referred to herein as a "device."As illustrated in FIG. 1, the host 102 can be coupled to the acceleration circuitry 120. In various embodiments, the host 102 can be coupled to the acceleration circuitry 120 via one or more channels 103 (e.g., buses, interfaces, communication paths, etc.). The channel 103 can be used to transfer data between the acceleration circuitry 120 and the host 102 and can be in the form of a standardized interface. For example, the channel 103 can be a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, or a double data rate (DDR) interface, among other connectors and interfaces.
In general, however, the channel 103 can provide an interface for passing control, address, data, and other signals between the acceleration circuitry 120 and a host 102 having a receiver compatible with the channel 103.The host 102 can be a host system such as a personal laptop computer, a desktop computer, a digital camera, a mobile telephone, an Internet-of-Things (IoT) enabled device, a memory card reader, or a graphics processing unit (e.g., a video card), among various other types of hosts. The host 102 can include a system motherboard and/or backplane and can include a number of memory access devices, e.g., a number of processing devices (e.g., one or more processors, microprocessors, or some other type of controlling circuitry). One of ordinary skill in the art will appreciate that a "processor" can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc. The host 102 can be provided in a multi-user network (for example, the multi-user network 201 illustrated in FIG. 2B herein). Accordingly, in some embodiments, the host 102 can include physical and/or virtualized hardware configured to execute a host operating system.The system 100 can include separate integrated circuits, or both the host 102 and the acceleration circuitry 120 can be on the same integrated circuit. The system 100 can be, for instance, a server system and/or a high-performance computing (HPC) system and/or a portion thereof. Although the example shown in FIG. 1 illustrates a system having a Von Neumann architecture, embodiments of the present disclosure can be implemented in a non-Von Neumann architecture in which one or more components (e.g., CPU, ALU, etc.)
usually associated with the von Neumann architecture may not be included.In some embodiments, the host 102 can be responsible for executing an operating system for the computing system 100, which includes the acceleration circuitry 120 and/or other components, such as the memory device 204 illustrated in FIGS. 2A and 2B, the field-programmable gate array 221 illustrated in FIG. 2B, the application-specific integrated circuit 223 illustrated in FIG. 2B, the virtual computing cluster 251 illustrated in FIG. 2B, and so on. Accordingly, in some embodiments, the host 102 can be responsible for coordinating operation of the acceleration circuitry 120. For example, the host 102 can execute instructions (e.g., in the form of an operating system) for managing the hardware of the computing system 100 (e.g., scheduling tasks, executing applications, controlling peripheral devices, etc.).The acceleration circuitry 120 can include logic circuitry 122 and a memory resource 124. The logic circuitry 122 can be provided in the form of an integrated circuit, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a system-on-a-chip, or another combination of hardware and/or circuitry configured to perform the operations described in more detail herein. In some embodiments, the logic circuitry 122 can include an arithmetic logic unit (ALU). The ALU can include circuitry (e.g., hardware, logic, one or more processing devices, etc.) to perform operations on bit strings as described herein. However, embodiments are not limited to an ALU, and in some embodiments the logic circuitry 122 can include a state machine and/or an instruction set architecture (or combinations thereof) in addition to, or in lieu of, an ALU, as described in more detail herein in conjunction with FIGS.
2B and 5.The logic circuitry 122 can be configured to receive one or more bit strings (e.g., a plurality of bits) stored in a first format (e.g., a plurality of bits in a floating-point format), convert the bit strings into a second format (e.g., convert the bit strings into a hypothetical number format), and/or cause performance of operations, such as arithmetic and/or logical operations, using the bit strings having the second format. As used herein, a bit string stored in the second format (for example, a bit string in the hypothetical number format) includes at least one bit referred to as a "sign," a set of bits referred to as a "base," a set of bits referred to as an "exponent," and a set of bits referred to as a "mantissa" (or significand). As used herein, a set of bits is intended to refer to a subset of the bits included in a bit string. Examples of the sign bit set, the base bit set, the exponent bit set, and the mantissa bit set are described in more detail herein in connection with FIGS. 3 and 4A-4B.For example, once a floating-point bit string has been converted into a bit string in the hypothetical number format, the logic circuitry 122 can be configured to perform (or cause performance of) arithmetic operations such as addition, subtraction, multiplication, division, fused multiply-add, multiply-accumulate, dot-product units, greater-than or less-than, absolute value (e.g., FABS()), fast Fourier transforms, inverse fast Fourier transforms, sigmoid functions, convolution, square root, exponent, and/or logarithm operations; and/or logical operations such as AND, OR, XOR, NOT, etc.; as well as trigonometric operations such as sine, cosine, tangent, etc.
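The four bit sets described above can be decoded in software as in the following Python sketch. It assumes that the "base" bit set is the run-length-encoded field used in the posit literature (often called the regime), and the chosen widths (an 8-bit string with zero exponent bits) are illustrative choices for this sketch, not the widths used by the logic circuitry 122.

```python
def decode_hypothetical(bits: int, n: int = 8, es: int = 0) -> float:
    """Decode an n-bit string in the second format into a float.

    Field order, from the most significant bit: sign, base (a run of
    identical bits terminated by the opposite bit), es exponent bits,
    and the remaining mantissa bits with an implicit leading 1.
    """
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):              # the single "not a real" pattern
        return float("nan")
    sign = (bits >> (n - 1)) & 1
    if sign:                              # negatives are two's-complemented
        bits = (-bits) & ((1 << n) - 1)
    rest = bits & ((1 << (n - 1)) - 1)    # drop the sign bit
    m = n - 1
    first = (rest >> (m - 1)) & 1
    run, i = 0, m - 1
    while i >= 0 and ((rest >> i) & 1) == first:
        run, i = run + 1, i - 1
    k = run - 1 if first else -run        # base contributes useed ** k
    rem = m - run - (1 if i >= 0 else 0)  # bits left after the terminator
    t = min(es, rem)                      # exponent bits actually present
    e = ((rest >> (rem - t)) & ((1 << t) - 1)) << (es - t)
    rem -= t
    frac = (rest & ((1 << rem) - 1)) / (1 << rem) if rem else 0.0
    useed = 2 ** (2 ** es)
    value = (useed ** k) * (2 ** e) * (1.0 + frac)
    return -value if sign else value

# With n=8, es=0: 0x40 -> 1.0, 0x50 -> 1.5, 0x60 -> 2.0, 0x20 -> 0.5
```

Because the base field grows or shrinks with the magnitude of the value, more mantissa bits are available near 1.0 than near the extremes, which is the source of the accuracy behavior discussed in connection with FIGS. 3 and 4A-4B.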
As will be appreciated, the foregoing list of operations is neither intended to be exhaustive nor limiting, and the logic circuitry 122 can be configured to perform (or cause performance of) other arithmetic operations, logical operations, bitwise operations, vector operations, and the like.The acceleration circuitry 120 can further include a memory resource 124, which can be communicatively coupled to the logic circuitry 122. The memory resource 124 can include volatile memory resources, non-volatile memory resources, or a combination of volatile and non-volatile memory resources. In some embodiments, the memory resource can be random-access memory (RAM), such as static random-access memory (SRAM). Embodiments are not so limited, however, and the memory resource can be a cache, one or more registers, NVRAM, ReRAM, FeRAM, MRAM, PCM, an "emerging" memory device such as a 3-D Crosspoint (3D XP) memory device, or combinations thereof. A 3D XP array of non-volatile memory, in conjunction with a stackable cross-gridded data access array, can perform bit storage based on a change of bulk resistance. Additionally, in contrast to many flash-based memories, 3D XP non-volatile memory can perform a write-in-place operation, in which a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased.The embodiment of FIG. 1 can include additional circuitry that is not illustrated so as not to obscure embodiments of the present disclosure. For example, the system 100 can include address circuitry to latch address signals provided over I/O connections through I/O circuitry. Address signals can be received and decoded by a row decoder and a column decoder to access devices in the system 100.
It will be appreciated by those skilled in the art that the number of address input connections can depend on the density and architecture of the system 100.FIG. 2A is a functional block diagram in the form of a computing system including a device 200 that includes a host 202 and a memory device 204 in accordance with a number of embodiments of the present disclosure. The memory device 204 can include acceleration circuitry 220, which can be analogous to the acceleration circuitry 120 illustrated in FIG. 1. Similarly, the host 202 can be analogous to the host 102 illustrated in FIG. 1. Each of the components (e.g., the host 202, the acceleration circuitry 220, the logic circuitry 222, the memory resource 224, and/or the memory array 230, etc.) can be separately referred to herein as a "device."The host 202 can be communicatively coupled to the memory device 204 via one or more channels 203, 205. The channels 203, 205 can be interfaces or other physical connections that allow data and/or commands to be transferred between the host 202 and the memory device 204. For example, commands to cause initiation of an operation to be performed by the acceleration circuitry 220 (e.g., an operation to convert a bit string in the floating-point format into a bit string in the hypothetical number format, as well as subsequent arithmetic and/or logical operations on the bit string in the hypothetical number format) can be transferred from the host via the channels 203, 205. It is noted that, in some embodiments, the acceleration circuitry 220 can perform the operations in response to an initiation command transmitted from the host 202 via one or more of the channels 203, 205 in the absence of an intervening command from the host 202.
That is, once the acceleration circuitry 220 has received the command to initiate the operation from the host 202, the acceleration circuitry 220 can perform the operation absent additional commands from the host 202.The memory device 204 can include one or more memory modules (e.g., single in-line memory modules, dual in-line memory modules, etc.). The memory device 204 can include volatile memory and/or non-volatile memory. In a number of embodiments, the memory device 204 can include a multi-chip device. A multi-chip device can include a number of different memory types and/or memory modules. For example, the memory device 204 can include non-volatile or volatile memory on any type of module.The memory device 204 can provide main memory for the computing system 200 or can be used as additional memory or storage throughout the computing system 200. The memory device 204 can include one or more memory arrays 230 (e.g., arrays of memory cells), which can include volatile and/or non-volatile memory cells. The memory array 230 can be a flash array with a NAND architecture, for example. Embodiments are not limited to a particular type of memory device. For instance, the memory device 204 can include RAM, ROM, DRAM, SDRAM, PCRAM, RRAM, flash memory, and so on.In embodiments in which the memory device 204 includes non-volatile memory, the memory device 204 can include a flash memory device, such as a NAND or NOR flash memory device. Embodiments are not so limited, however, and the memory device 204 can include other non-volatile memory devices, such as non-volatile random-access memory devices (e.g., NVRAM, ReRAM, FeRAM, MRAM, PCM), "emerging" memory devices such as 3-D Crosspoint (3D XP) memory devices, or combinations thereof.As shown in FIG.
2A, the memory device 204 can include a register access component 206, a high-speed interface (HSI) 208, a controller 210, one or more extended row address (XRA) components 212, main memory input/output (I/O) circuitry 214, row address strobe (RAS)/column address strobe (CAS) chain control circuitry 216, a RAS/CAS chain component 218, acceleration circuitry 220, and a memory array 230. As shown in FIG. 2A, the acceleration circuitry 220 is resident on an area of the memory device 204 that is physically distinct from the memory array 230. That is, in some embodiments, the acceleration circuitry 220 is resident on a periphery of the memory array 230.The register access component 206 can facilitate transferring and fetching of data from the host 202 to the memory device 204 and from the memory device 204 to the host 202. For example, the register access component 206 can store addresses (or facilitate lookup of addresses), such as memory addresses, that correspond to data to be transferred from the memory device 204 to the host 202 or from the host 202 to the memory device 204. In some embodiments, the register access component 206 can facilitate transferring and fetching of data that is to be operated upon by the acceleration circuitry 220, and/or the register access component 206 can facilitate transferring and fetching of data that has been operated upon by the acceleration circuitry 220 for transfer to the host 202.The HSI 208 can provide an interface between the host 202 and the memory device 204 for commands and/or data traversing the channel 205. The HSI 208 can be a double data rate (DDR) interface, such as a DDR3, DDR4, DDR5, etc. interface.
Embodiments are not limited to a DDR interface, however, and the HSI 208 can be a quad data rate (QDR) interface, a peripheral component interconnect (PCI) interface (e.g., a peripheral component interconnect express (PCIe) interface), or another suitable interface for transferring commands and/or data between the host 202 and the memory device 204.The controller 210 can be responsible for executing instructions from the host 202 and for accessing the acceleration circuitry 220 and/or the memory array 230. The controller 210 can be a state machine, a sequencer, or some other type of controller. The controller 210 can receive commands from the host 202 (via the HSI 208, for example) and, based on the received commands, control operation of the acceleration circuitry 220 and/or the memory array 230. In some embodiments, the controller 210 can receive a command from the host 202 to cause performance of an operation using the acceleration circuitry 220 (e.g., to convert bit strings between various formats, to perform arithmetic and/or logical operations using bit strings, etc.). In response to receipt of such a command, the controller 210 can instruct the acceleration circuitry 220 to begin performance of the operation.In some embodiments, the controller 210 can be a global processing controller and can provide power management functions to the memory device 204. Power management functions can include control over power consumed by the memory device 204 and/or the memory array 230. For example, the controller 210 can control power provided to various banks of the memory array 230 to control which banks of the memory array 230 are operational at different times during operation of the memory device 204. This can include shutting off certain banks of the memory array 230 while power is provided to other banks of the memory array 230 to optimize power consumption of the memory device 204.
In some embodiments, the controller 210 controlling the power consumption of the memory device 204 can include controlling power to various cores of the memory device 204 and/or to the acceleration circuitry 220, the memory array 230, etc.The XRA component 212 is intended to provide additional functionalities (e.g., peripheral amplifiers) that sense (e.g., read, store, cache) data values of memory cells in the memory array 230 and that are distinct from the memory array 230. The XRA component 212 can include latches and/or registers. For example, additional latches can be included in the XRA component 212. The latches of the XRA component 212 can be located on a periphery of the memory array 230 of the memory device 204 (e.g., on a periphery of one or more banks of memory cells).The main memory input/output (I/O) circuitry 214 can facilitate transfer of data and/or commands to and from the memory array 230. For example, the main memory I/O circuitry 214 can facilitate transfer of bit strings, data, and/or commands from the host 202 and/or the acceleration circuitry 220 to and from the memory array 230. In some embodiments, the main memory I/O circuitry 214 can include one or more direct memory access (DMA) components that can transfer bit strings (e.g., hypothetical number bit strings stored as blocks of data) from the acceleration circuitry 220 to the memory array 230, and vice versa.In some embodiments, the main memory I/O circuitry 214 can facilitate transfer of bit strings, data, and/or commands from the memory array 230 to the acceleration circuitry 220 so that the acceleration circuitry 220 can perform operations on the bit strings. Similarly, the main memory I/O circuitry 214 can facilitate transfer of bit strings on which one or more operations have been performed by the acceleration circuitry 220 to the memory array 230.
As described in more detail herein, the operations can include operations to convert bit strings formatted according to a floating-point standard into bit strings formatted as hypothetical numbers (and vice versa), arithmetic operations performed on bit strings formatted as hypothetical numbers, logical operations performed on bit strings formatted as hypothetical numbers, and so on.The row address strobe (RAS)/column address strobe (CAS) chain control circuitry 216 and the RAS/CAS chain component 218 can be used in conjunction with the memory array 230 to latch a row address and/or a column address to initiate a memory cycle. In some embodiments, the RAS/CAS chain control circuitry 216 and/or the RAS/CAS chain component 218 can resolve row and/or column addresses of the memory array 230 at which read and write operations associated with the memory array 230 are to be initiated or terminated. For example, upon completion of an operation using the acceleration circuitry 220, the RAS/CAS chain control circuitry 216 and/or the RAS/CAS chain component 218 can latch and/or resolve a specific location in the memory array 230 at which a bit string that has been operated upon by the acceleration circuitry 220 is to be stored. Similarly, prior to the acceleration circuitry 220 performing an operation on a bit string, the RAS/CAS chain control circuitry 216 and/or the RAS/CAS chain component 218 can latch and/or resolve a specific location in the memory array 230 from which the bit string is to be transferred to the acceleration circuitry 220.As described above in conjunction with FIG. 1 and in more detail below in conjunction with FIG.
5, the acceleration circuitry 220 can be configured to receive one or more bit strings in a first format (e.g., a plurality of bits in a floating-point format), convert the one or more bit strings according to a second format (e.g., encode the plurality of bits in a hypothetical number format), and/or cause performance of operations, such as arithmetic and/or logical operations, using the one or more bit strings in the second format.The acceleration circuitry 220 can include logic circuitry (for example, the logic circuitry 122 illustrated in FIG. 1) and/or memory resources (for example, the memory resource 124 illustrated in FIG. 1). Bit strings (e.g., data, a plurality of bits, etc.) can be received by the acceleration circuitry 220 from, for example, the host 202 and/or the memory array 230, and can be stored by the acceleration circuitry 220, for example in the memory resource of the acceleration circuitry 220. The acceleration circuitry (e.g., the logic circuitry of the acceleration circuitry 220) can perform operations on the bit strings (or cause operations to be performed on the bit strings) to convert the bit strings from the floating-point format to the hypothetical number format, perform arithmetic and/or logical operations on the hypothetical number bit strings, and/or convert results of the arithmetic and/or logical operations into a different format (such as the floating-point format), as described in more detail herein in conjunction with FIG. 5.As described in more detail in conjunction with FIGS. 3 and 4A-4B, hypothetical numbers can provide improved accuracy and can require less storage space (e.g., can contain a smaller number of bits) than corresponding bit strings in the floating-point format.
Accordingly, by using the acceleration circuitry 220 to convert floating-point bit strings into hypothetical number bit strings, the performance of the memory device 204 can be improved in comparison to approaches that use only floating-point bit strings, because operations can be performed more quickly on the hypothetical number bit strings (for example, because bit strings in the hypothetical number format are smaller and therefore require less time to operate on), and because less memory space is required in the memory device 204 to store bit strings in the hypothetical number format, which can free up additional space in the memory device 204 for other bit strings, data, and/or other operations to be performed.Once the acceleration circuitry 220 has performed the operation to convert data from the floating-point format to the hypothetical number format, the acceleration circuitry can perform (or cause performance of) arithmetic and/or logical operations on the hypothetical number bit strings. For example, as discussed above, the acceleration circuitry 220 can be configured to perform (or cause performance of) arithmetic operations such as addition, subtraction, multiplication, division, fused multiply-add, multiply-accumulate, dot-product units, greater-than or less-than, absolute value (e.g., FABS()), fast Fourier transforms, inverse fast Fourier transforms, sigmoid functions, convolution, square root, exponent, and/or logarithm operations; and/or logical operations such as AND, OR, XOR, NOT, etc.; as well as trigonometric operations such as sine, cosine, tangent, etc.
As will be appreciated, the foregoing list of operations is not intended to be exhaustive or limiting, and the acceleration circuitry 220 may be configured to perform (or cause performance of) other arithmetic and/or logical operations. In some embodiments, the acceleration circuitry 220 may perform the operations listed above in conjunction with execution of one or more machine learning algorithms. For example, the acceleration circuitry 220 may perform operations related to one or more neural networks. Neural networks may allow an algorithm to be trained over time to determine an output response based on input signals. For example, over time, a neural network may essentially learn to better maximize the likelihood of accomplishing a particular goal. This can be advantageous in machine learning applications, because the neural network can be trained over time with new data to better maximize the likelihood of accomplishing the particular goal. A neural network can be trained over time to improve the performance of particular tasks and/or particular goals. However, in some approaches, machine learning (e.g., neural network training) may be processing-intensive (e.g., may consume large amounts of computer processing resources) and/or may be time-intensive (e.g., may require lengthy calculations that consume multiple cycles). In contrast, by performing such operations using the acceleration circuitry 220, for example, by performing such operations on bit strings that the acceleration circuitry 220 has converted into the hypothetical number format, the amount of processing resources and/or the amount of time consumed can be reduced in comparison with approaches that perform such operations using bit strings in the floating-point format. The acceleration circuitry 220 may be communicatively coupled to the memory array 230 via one or more channels, interfaces, and/or buses.
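As a rough software illustration of the convert-operate-convert flow described above (not the circuit implementation), the second format can be stood in for by a quantized significand. The function names and the 8-bit fraction width here are assumptions for illustration only; a real hypothetical-number encoder would also produce the base and exponent fields described in conjunction with FIGS. 3 to 4B.

```python
import math

def to_second_format(x: float, frac_bits: int = 8) -> float:
    """Toy stand-in for the first-format -> second-format conversion:
    keep the sign and exponent of x but quantize the significand to
    frac_bits fractional bits."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                       # x == m * 2**e with 0.5 <= |m| < 1
    q = round(m * (1 << frac_bits)) / (1 << frac_bits)
    return math.ldexp(q, e)

def multiply_accumulate(acc: float, a: float, b: float) -> float:
    """One of the arithmetic operations listed above, applied to
    operands that have first been converted to the second format."""
    return to_second_format(acc) + to_second_format(a) * to_second_format(b)
```

Because the converted operands carry fewer significand bits, arithmetic on them is correspondingly cheaper, which is the motivation given above for converting before operating.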
For example, the memory array 230 may be a DRAM array, SRAM array, STT RAM array, PCRAM array, TRAM array, RRAM array, NAND flash array, and/or NOR flash array, although embodiments are not limited to these particular examples. The memory array 230 may serve as a main memory for the computing system. In some embodiments, the memory array 230 may be configured to store bit strings operated on by the acceleration circuitry 220 and/or to store bit strings to be transferred to the acceleration circuitry 220. The array 230 may include memory cells arranged in rows coupled by access lines (which may be referred to herein as word lines or select lines) and columns coupled by sense lines (which may be referred to herein as data lines or digit lines). Although a single array 230 is shown in FIG. 2A, embodiments are not so limited. For example, the memory device 204 may include a number of memory arrays 230 (e.g., a number of banks of DRAM cells, NAND flash cells, etc.). The embodiment of FIG. 2A may include additional circuitry that is not illustrated so as not to obscure embodiments of the present disclosure. For example, the memory device 204 may include address circuitry to latch address signals provided over I/O connections through I/O circuitry. Address signals may be received and decoded by a row decoder and a column decoder to access the memory device 204 and/or the memory array 230. Those skilled in the art will appreciate that the number of address input connections may depend on the density and architecture of the memory device 204 and/or the memory array 230. FIG. 2B is a functional block diagram of a computing system 200 according to several embodiments of the present disclosure, including a host 202, a memory device 204 (which may include logic circuitry 222), an application specific integrated circuit 223, a field programmable gate array 221, and a virtual computing cluster (VCC).
The VCC 251 and the other illustrated components reside in a multi-user network 201 that includes a shared computing resource pool 246. As shown in FIG. 2B, the shared computing resource pool 246 may further include processing resources 245 and memory resources 247, which may be included in the host 202, separate from the host 202, or a combination thereof. Each of the components (for example, the host 202, the conversion component 211, the memory device 204, the FPGA 221, the ASIC 223, the VCC 251, etc.) may be separately referred to herein as a "device". The multi-user network 201 can be a software-defined data center, cloud computing environment, data center, or other such network or computing environment in which virtual computing instances (VCIs), virtual machines (VMs), virtual workloads, data compute nodes, clusters, containers, and the like are deployed. The multi-user network 201 can extend virtualization concepts such as abstraction, pooling, and automation to data center resources and services to provide information technology as a service (ITaaS). In the multi-user network 201, infrastructure such as networking, processing, and security can be virtualized and delivered as a service. The multi-user network 201 may include software-defined networking and/or software-defined storage. In some embodiments, components of the multi-user network 201 may be provisioned, operated, and/or managed through an application programming interface (API). Accordingly, multiple users can access resources associated with the multi-user network 201 from different locations via, for example, a computing node 207 communicatively coupled to the multi-user network 201. Although a single computing node 207 is shown in FIG.
2B, it should be understood that multiple computing nodes may be communicatively coupled to the multi-user network 201. The computing node 207 may be a user device, such as a personal computer, laptop computer, tablet computer, phablet, smartphone, or other device that can access the multi-user network 201 via, for example, an edge device. The computing node 207 may be configured to send commands to the multi-user network 201 to facilitate performance of operations using the bit strings described herein (e.g., hypothetical digit strings). A command may include a command to initiate performance of an operation using the bit strings, and/or the command may include one or more parameters that specify criteria under which the operation is to be performed. Table 1 shows several non-limiting examples of parameters that specify criteria under which an operation is to be performed.

Processing time    Processing resources    Hypothetical number parameters
30 minutes         4 cores                 (8,1)  (16,1)  (32,1)  (64,1)
45 minutes         8 cores                 (8,2)  (16,2)  (32,2)  (64,2)
60 minutes         16 cores                (8,3)  (16,3)  (32,3)  (64,3)
90 minutes         32 cores                (8,4)  (16,4)  (32,4)  (64,4)

Table 1

The non-limiting example parameters shown in Table 1 may include a processing time (e.g., the amount of time to be allocated for the operation), processing resources (e.g., the quantity of processing resources to be allocated for the operation), and hypothetical number parameters (for example, hypothetical number accuracy parameters, such as a requested bit length and a requested exponent bit length of the bit strings to be used in the operation). As indicated in Table 1, the processing time can be selected from various preset time ranges (for example, 15 minutes, 30 minutes, 45 minutes, 60 minutes, 90 minutes).
Because access to resources in a multi-user network may be costly, and the cost can be based on the amount of time for which access to the resources is provided, allowing selectable time ranges within which a computation can be completed may enable users to better plan for, and contain, the costs associated with performance of the operations described herein in a multi-user network. However, although particular time ranges are shown in Table 1, embodiments are not limited thereto, and additional processing time ranges may be provided, or the processing time may be customized via user input (for example, 20 minutes, 161.80339 minutes, etc.). As indicated in Table 1, the processing resource parameter can be selected from various preset processing resource parameters available to the multi-user network 201. For example, the quantity of processing cores of the processing resources 245 to be allocated by the multi-user network 201 for the operation (e.g., 2 cores, 4 cores, 8 cores, 16 cores, 32 cores, etc.) can be selected before the operation is initiated. Because access to resources in a multi-user network may be costly, and the cost can be based on the quantity of processing cores requested, allowing selectable processing resources 245 with which to complete operations may enable users to better plan for, and stay within, the costs associated with the computations described herein in the multi-user network. However, although particular processing resources are shown in Table 1, embodiments are not limited thereto, and additional processing resources may be provided, or the quantity of requested processing resources may be customized via user input, for example. As indicated in Table 1, the hypothetical number parameters (for example, hypothetical number accuracy parameters) can be selected from various preset hypothetical number parameters (for example, (8,0), (16,1), (32,4), etc.).
The hypothetical number parameters shown in Table 1 may correspond to the bit length and the exponent bit length of the hypothetical bit strings to be used as operands when performing the arithmetic and/or logical operations. The bit length may correspond to the total number of bits in the hypothetical digit string, and the exponent bit length may correspond to the number of exponent bits (for example, the exponent bits es described in more detail herein in conjunction with FIGS. 3 and 4A to 4B). In the notation of Table 1, a hypothetical bit string with a bit length of eight bits and an exponent bit length of two bits can be written as (8, 2), and a hypothetical digit string with a bit length of 64 bits and an exponent bit length of four bits can be written as (64, 4). In some embodiments, the computing node 207 may be configured to display a graphical user interface (GUI) to facilitate performance of operations using bit strings by the host 202, the memory device 204, the FPGA 221, the ASIC 223, and/or the VCC 251. The computing node 207 may be configured to display a GUI in which a request to perform an operation is selected or otherwise input, and/or in which parameters specifying criteria under which the operation is to be performed are selected. For example, the GUI may be similar to the example shown in Table 1 and may allow a user to select the processing time, the processing resources, and/or the hypothetical number parameters for an operation that uses hypothetical digit strings as operands. However, embodiments are not limited thereto, and in some embodiments the GUI of the computing node 207 may allow a user to input particular parameters or parameter values that are not necessarily listed in Table 1. As shown in FIG. 2B, the host 202 may be coupled to the memory device 204 via a channel 203, which may be similar to the channel 103 illustrated in FIG. 1.
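A Table 1 style operation request as described above might be modeled in software as follows; the type and field names are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

# Presets taken from Table 1 and the surrounding text; the text notes that
# custom times (and resources) are also permitted, so validation only
# sanity-checks the request.
PRESET_TIMES_MIN = (15, 30, 45, 60, 90)
PRESET_CORES = (2, 4, 8, 16, 32)

@dataclass(frozen=True)
class HypotheticalNumberParams:
    n: int    # total bit length, e.g. 8, 16, 32, 64
    es: int   # exponent bit length, e.g. 1-4

@dataclass(frozen=True)
class OperationRequest:
    processing_time_min: float   # custom values such as 20 or 161.80339 are allowed
    cores: int
    params: HypotheticalNumberParams

def is_valid(req: OperationRequest) -> bool:
    """A request is plausible if its time budget is positive, its core count
    is one of the presets, and (n, es) is a sensible pair like (16, 2)."""
    return (req.processing_time_min > 0
            and req.cores in PRESET_CORES
            and 0 <= req.params.es < req.params.n)
```

For example, `OperationRequest(45, 8, HypotheticalNumberParams(16, 2))` corresponds to the second row of Table 1 with the (16, 2) accuracy parameter selected.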
A field programmable gate array (FPGA) 221 may be coupled to the host 202 via a channel 217, and an application specific integrated circuit (ASIC) 223 may be coupled to the host 202 via a channel 219. In some embodiments, the channel 217 and/or the channel 219 may include a Peripheral Component Interconnect Express (PCIe) interface; however, embodiments are not limited thereto, and the channel 217 and/or the channel 219 may include other types of interfaces, buses, communication channels, etc. to facilitate transfer of data between the host 202 and the FPGA 221 and/or the ASIC 223. For example, the channels 203, 217, and/or 219 may be communication paths that utilize a communication protocol of the multi-user network 201, such as TCP/IP, MQTT, HTTP, and the like. In some embodiments, the FPGA 221 and/or the ASIC 223 may receive a bit string, convert the bit string from a first format (e.g., a floating-point format) to a second format (e.g., a hypothetical number format), perform arithmetic and/or logical operations on the hypothetical digit string to generate a resultant hypothetical digit string that represents the result of the operation performed on the received hypothetical digit string, and/or convert the resultant bit string from the second format into the first format based on parameters such as those shown in Table 1. As described above, non-limiting examples of arithmetic and/or logical operations that can be performed by the FPGA 221 and/or the ASIC 223 using hypothetical digit strings include: arithmetic operations such as addition, subtraction, multiplication, division, fused multiply-add, multiply-accumulate
, dot product, greater-than or less-than, absolute value (for example, FABS()), fast Fourier transform, inverse fast Fourier transform, sigmoid function, convolution, square root, exponential, and/or logarithm operations; logical operations such as AND, OR, XOR, NOT, etc.; and trigonometric operations such as sine, cosine, tangent, etc. The FPGA 221 may include a state machine 227 and/or registers 229. The state machine 227 may include one or more processing devices configured to perform operations on an input and generate an output. In some embodiments, the FPGA 221 may receive (e.g., from the computing node 207) a command to initiate performance of an operation using one or more bit strings. The command can include one or more parameters, such as those shown in Table 1. For example, the FPGA 221 may be configured to receive a bit string, convert the bit string from a first format (for example, a floating-point format) to a second format (for example, a hypothetical number format), perform arithmetic and/or logical operations on the hypothetical digit string to generate a resultant hypothetical digit string that represents the result of the operation performed on the received hypothetical digit string, and/or convert the resultant bit string from the second format into the first format based on the parameters received with the command that initiated the operation. The registers 229 of the FPGA 221 may be configured to buffer and/or store the bit string before the state machine 227 performs operations on the received hypothetical bit string. In addition, the registers 229 of the FPGA 221 can be configured to buffer and/or store the resultant hypothetical digit string representing the result of the operation performed on the received hypothetical digit string before transferring that resultant string to circuitry external to the FPGA 221 (such as the host 202, the memory device 204, the computing node 207, the memory resources 247, etc.).
The ASIC 223 may include logic 241 and/or a cache 243. The logic 241 may include circuitry configured to perform operations on an input and generate an output. In some embodiments, the ASIC 223 may receive (e.g., from the computing node 207) a command to initiate performance of an operation using one or more bit strings. The command can include one or more parameters, such as those shown in Table 1. In some embodiments, the ASIC 223 may be configured to receive a bit string, convert the bit string from a first format (e.g., a floating-point format) to a second format (e.g., a hypothetical number format), perform arithmetic and/or logical operations on the hypothetical digit string to generate a resultant hypothetical digit string that represents the result of the operation performed on the received hypothetical digit string, and/or convert the resultant bit string from the second format into the first format based on the parameters received with the command that initiated the operation. The cache 243 of the ASIC 223 may be configured to buffer and/or store the hypothetical digit string before the logic 241 performs operations on the received hypothetical digit string. In addition, the cache 243 of the ASIC 223 can be configured to buffer and/or store the resultant hypothetical digit string representing the result of the operation performed on the received hypothetical digit string before transferring that resultant string to circuitry external to the ASIC 223 (such as the host 202, the memory device 204, the computing node 207, the memory resources 247, etc.). Although the FPGA 221 is shown as including the state machine 227 and the registers 229, in some embodiments the FPGA 221 may include, in addition to or instead of the state machine 227 and/or the registers 229, logic such as the logic 241 and/or a cache such as the cache 243.
Similarly, in some embodiments, the ASIC 223 may include, in addition to or instead of the logic 241 and/or the cache 243, a state machine such as the state machine 227 and/or registers such as the registers 229. The VCC 251 may include a scheduling agent, multiple virtual computing instances (VCIs), and/or a hypervisor, which are described in more detail herein in conjunction with FIGS. 6 and 7A to 7B. The VCC 251 may be communicatively coupled to the host 202, the memory device 204, the FPGA 221, and/or the ASIC 223 of the multi-user network 201, and/or the VCC 251 may be communicatively coupled to the computing node 207. As described in more detail in conjunction with FIGS. 7A and 7B, the VCC 251 may facilitate operations to convert bit strings between various formats, and/or the VCC 251 may facilitate arithmetic and/or logical operations using the bit strings. For example, a VCI (or the hypervisor) of the VCC 251 may have a hypothetical number arithmetic agent running thereon, which can facilitate operations to convert bit strings between various formats and/or arithmetic and/or logical operations using the bit strings. An agent may be an instruction set, code, or script in software, firmware, or hardware residing on a computer or computing device, or some combination of the three. The agent may communicate with another device or program continuously or periodically. Agents can act with or without explicit commands (e.g., monitor activities, execute commands, access a memory or storage device). In some instances, the agent is an autonomous agent. For example, the agent may be configured to execute instructions using computing resources (such as hardware) that may be available to agents in a computing resource pool (e.g., the shared computing resource pool 246 illustrated in FIG. 2B). In some embodiments, circuitry (for example, the logic circuitry 122 illustrated in FIG.
1, the acceleration circuitry 220 illustrated in FIG. 2A, the FPGA 221, and/or the ASIC 223) may be configured to receive a request to perform an arithmetic operation and/or a logical operation using at least one hypothetical digit string operand. The request may include at least one of the parameters described above in conjunction with Table 1. In some embodiments, the request may be received by the circuitry from the computing node 207. In response to the request, the circuitry may perform the arithmetic operation and/or the logical operation using the hypothetical digit string operand based at least in part on the received parameters. For example, if a parameter specifies a quantity of computing resources (for example, a quantity of processing resources and/or a quantity of memory resources) from the shared computing resource pool 246 available to the multi-user network 201, then the circuitry can be configured to access and allocate the quantity of computing resources specified by the parameter to perform the arithmetic and/or logical operation using the at least one hypothetical digit string operand. In some embodiments, the circuitry may generate a request that a specified quantity of computing resources of the multi-user network 201 be allocated for the arithmetic and/or logical operation using the at least one hypothetical digit string operand, in order to access the specified quantity of computing resources. In another example, if a parameter specifies an amount of time allowed for the arithmetic and/or logical operation using the at least one hypothetical digit string operand (for example, a particular amount of time), then the circuitry may be configured to perform the computation within that amount of time. In some embodiments, a parameter may specify a hypothetical number parameter (for example, a hypothetical number accuracy parameter), as described above in conjunction with Table 1.
In embodiments in which a parameter specifies a hypothetical number parameter, the circuitry may be configured to generate the hypothetical bit string operand such that the bit length and/or exponent bit length of the hypothetical operand corresponds to the bit length and/or exponent bit length specified by the parameter. The circuitry can then perform the arithmetic and/or logical operation using the hypothetical digit string operand based on the specified parameter. In some embodiments, the circuitry may retrieve the hypothetical bit string operand from a memory location in the shared computing resource pool 246 before performing the arithmetic and/or logical operation. For example, if the hypothetical digit string operand is stored in the memory device 204 (or another memory resource, such as the memory resource 247 accessible to the multi-user network 201), then the circuitry can generate a request for the hypothetical digit string operand and retrieve the hypothetical digit string operand from the memory location at which it is stored before performing the arithmetic and/or logical operation. If the bit string operand is not yet in the hypothetical number format (for example, if the bit string operand is stored in a different format, such as a floating-point format, in a memory location accessible to the multi-user network 201), then the circuitry can perform an operation to convert the bit string into a hypothetical digit string before performing the arithmetic and/or logical operation. FIG. 3 is an example of an n-bit universal number, or "unum", with es exponent bits. In the example of FIG. 3, the n-bit unum is a hypothetical digit string 331. As shown in FIG. 3, the n-bit hypothetical number 331 may include a set of sign bits (e.g., a sign bit 333), a set of base bits (e.g., base bits 335), a set of exponent bits (e.g., exponent bits 337), and a set of mantissa bits (e.g., mantissa bits 339).
The mantissa bits 339 may alternatively be referred to as the "fraction" or as "fraction bits", and may represent the portion of the digit string after the decimal point (e.g., a number). The sign bit 333 can be zero (0) for positive numbers and one (1) for negative numbers. The base bits 335 are described below in conjunction with Table 2, which shows (binary) bit strings and their related numerical meaning k. In Table 2, the numerical meaning k is determined by the run length of the bit string. The letter x in the binary portion of Table 2 indicates that the bit value is irrelevant to the determination of the base, because the (binary) bit string is terminated in response to consecutive bit flips or when the end of the bit string is reached. For example, in the (binary) bit string 0010, the bit string terminates in response to a zero flipping to a one and then back to a zero. Accordingly, the last zero is irrelevant to the base, and all that is considered for the base are the leading identical bits and the first opposite bit that terminates the bit string (if the bit string contains such a bit).

Binary      0000   0001   001X   01XX   10XX   110X   1110   1111
Value (k)    -4     -3     -2     -1      0      1      2      3

Table 2

In FIG. 3, the base bits 335 labeled r correspond to the run of identical bits in the bit string, and the final base bit corresponds to the opposite bit that terminates the bit string. For example, for the numerical value k of -2 shown in Table 2, the bits labeled r correspond to the first two leading zeros, and the terminating base bit corresponds to the one. As noted above, the final bits represented by X in Table 2 are irrelevant to the base. If m corresponds to the number of identical bits in the bit string, then if those bits are zero, k = -m. If those bits are one, then k = m - 1. This is illustrated in Table 2 where, for example, the (binary) bit string 10XX has a single one and k = m - 1 = 1 - 1 = 0. Similarly, the (binary) bit string 0001 contains three zeros, so k = -m = -3.
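The run-length rule of Table 2 can be sketched as a short function (a reading aid rather than the circuit implementation; the function name is ours):

```python
def base_value_k(bits: str) -> int:
    """Numerical meaning k of the leading base bits, per Table 2:
    a run of m identical leading bits gives k = -m if the run is zeros,
    or k = m - 1 if the run is ones.  Bits after the terminating opposite
    bit (the X bits of Table 2) are ignored."""
    first = bits[0]
    m = len(bits) - len(bits.lstrip(first))  # length of the leading run
    return -m if first == "0" else m - 1

# Reproduces the k row of Table 2:
# 0000 -> -4, 0001 -> -3, 0010 -> -2, 0100 -> -1,
# 1000 ->  0, 1101 ->  1, 1110 ->  2, 1111 ->  3
```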
The base can indicate a scale factor of useed^k, where useed = 2^(2^es). Several example values of useed are shown in Table 3 below.

es       0    1       2        3          4
useed    2    2^2=4   4^2=16   16^2=256   256^2=65536

Table 3

The exponent bits 337 correspond to an exponent e, interpreted as an unsigned number. In contrast to floating-point numbers, the exponent bits 337 described herein may have no bias associated with them. As a result, the exponent bits 337 described herein may indicate scaling by a factor of 2^e. As shown in FIG. 3, depending on how many bits remain to the right of the base bits 335 of the n-bit hypothetical number 331, there may be up to es exponent bits (e1, e2, e3, ..., e_es). In some embodiments, this may allow for tapered accuracy of the n-bit hypothetical number 331, in which numbers nearer to one in magnitude have higher accuracy than extremely large or extremely small numbers. However, because extremely large or extremely small numbers may be used infrequently in certain kinds of operations, the tapered accuracy of the n-bit hypothetical number 331 shown in FIG. 3 may be desirable in a wide range of situations. The mantissa bits 339 (or fraction bits) represent any additional bits of the n-bit hypothetical number 331 that lie to the right of the exponent bits 337. Similar to floating-point bit strings, the mantissa bits 339 represent a fraction f, which can be analogous to the fraction 1.f, where f includes one or more bits to the right of the decimal point following the one. In contrast to floating-point bit strings, however, in the n-bit hypothetical number 331 shown in FIG. 3, the "hidden bit" (for example, the leading one) may always be one (for example, unity), whereas floating-point bit strings may include subnormal numbers whose "hidden bit" is zero (for example, 0.f). FIG. 4A is an example of positive values of a 3-bit hypothetical number 431. In FIG.
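Putting the four fields of FIG. 3 together, a hypothetical digit string can be decoded in a few lines. This sketch is an illustration, not the hardware algorithm; the function name is ours, and the two's-complement treatment of negative strings follows common universal-number practice and is an assumption here.

```python
def decode_hypothetical(bits: str, es: int) -> float:
    """Decode an n-bit hypothetical number (sign, base, exponent, mantissa
    per FIG. 3) into a float, with useed = 2**(2**es) as in Table 3."""
    n = len(bits)
    if bits == "0" * n:
        return 0.0                               # all-zeros exception value
    if bits == "1" + "0" * (n - 1):
        return float("inf")                      # the +/-inf exception value
    sign = -1.0 if bits[0] == "1" else 1.0
    body = bits[1:]
    if sign < 0:                                 # assumed: negatives stored in two's complement
        body = format((1 << (n - 1)) - int(body, 2), "0{}b".format(n - 1))
    run_bit = body[0]
    m = len(body) - len(body.lstrip(run_bit))    # base run length (Table 2)
    k = -m if run_bit == "0" else m - 1
    rest = body[m + 1:]                          # skip the run and its terminating bit
    ebits = rest[:es].ljust(es, "0")             # missing exponent bits read as zero
    e = int(ebits, 2) if ebits else 0
    frac = rest[es:]
    f = 1.0 + (int(frac, 2) / (1 << len(frac)) if frac else 0.0)
    useed = 2 ** (2 ** es)
    return sign * useed ** k * 2.0 ** e * f
```

For example, decoding the 16-bit string 0000110111011101 with es = 3 (the worked example of Table 4 later in this description) yields 256^(-3) * 2^5 * (1 + 221/256), roughly 3.55 * 10^(-6).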
4A, only the right half of the projective real numbers is shown; however, it should be understood that negative projective real numbers corresponding to their positive counterparts shown in FIG. 4A may exist on a curve representing a transformation, about the y-axis, of the curve shown in FIG. 4A. In the example of FIG. 4A, es = 2, so useed = 16. The accuracy of the hypothetical number 431 can be increased by appending bits to the bit string, as shown in FIG. 4B. For example, appending a bit with a value of one (1) to the bit string of the hypothetical number 431 increases the accuracy of the hypothetical number 431, as shown by the hypothetical number 431-2 in FIG. 4B. Similarly, appending a bit with a value of one to the bit string of the hypothetical number 431-2 in FIG. 4B increases the accuracy of the hypothetical number 431-2, as shown by the hypothetical number 431-3 in FIG. 4B. The following is an example of interpolation rules that may be used to append bits to the bit string of the hypothetical number 431 shown in FIG. 4A to obtain the hypothetical numbers 431-2, 431-3 illustrated in FIG. 4B. If maxpos is the largest positive value of a bit string of the hypothetical numbers 431-1, 431-2, 431-3 shown in FIG. 4B, and minpos is the smallest positive value of a bit string of the hypothetical numbers 431-1, 431-2, 431-3, then maxpos may be equivalent to useed and minpos may be equivalent to 1/useed. Between maxpos and ±∞, the new bit value may be maxpos * useed; and between zero and minpos, the new bit value may be minpos / useed. These new bit values correspond to a new base bit 335. Between existing values x = 2^m and y = 2^n, where m and n differ by more than one, the new bit value may be given by the geometric mean, sqrt(x * y) = 2^((m+n)/2), which corresponds to a new exponent bit 337.
If the new bit value lies midway between the existing x and the immediately succeeding y value, the new bit value may represent the arithmetic mean (x + y)/2, which corresponds to a new mantissa bit 339. FIG. 4B is an example of hypothetical number construction using two exponent bits. In FIG. 4B, only the right half of the projective real numbers is shown; however, it should be understood that negative projective real numbers corresponding to their positive counterparts shown in FIG. 4B may exist on a curve representing a transformation, about the y-axis, of the curve shown in FIG. 4B. The hypothetical numbers 431-1, 431-2, and 431-3 shown in FIG. 4B each contain only two exception values: zero (0) when all the bits of the bit string are zero, and ±∞ when the bit string is a one (1) followed by all zeros. It should be noted that the numerical values of the hypothetical numbers 431-1, 431-2, and 431-3 shown in FIG. 4B are exactly useed^k. That is, the numerical values of the hypothetical numbers 431-1, 431-2, and 431-3 shown in FIG. 4B are exactly useed raised to the power of the k value represented by the base (for example, the base bits 335 described above in conjunction with FIG. 3). In FIG. 4B, the hypothetical number 431-1 has es = 2, so useed = 16; the hypothetical number 431-2 has es = 3, so useed = 256; and the hypothetical number 431-3 has es = 4, so useed = 65536. As an illustrative example of adding bits to the 3-bit hypothetical number 431-1 to create the 4-bit hypothetical number 431-2 of FIG. 4B, useed = 256, so the bit string corresponding to 256 has an additional base bit appended to it, and the former useed, 16, has a terminating base bit appended to it. As described above, between existing values the corresponding bit strings have an additional exponent bit appended to them. For example, the values 1/16, 1/4, 1, and 4 will have exponent bits appended to them. That is, the final one corresponding to the value 4 is an exponent bit, the final zero corresponding to the value 1 is an exponent bit, and so on.
This pattern can be further seen in the hypothetical number 431-3, which is a 5-bit hypothetical number generated from the 4-bit hypothetical number 431-2 according to the rules above. If another bit were added to the hypothetical number 431-3 in FIG. 4B to generate a 6-bit hypothetical number, mantissa bits 339 would be appended to the values between 1/16 and 16. The following is a non-limiting example of decoding a hypothetical number (for example, the hypothetical number 431) to obtain its numerical equivalent. In some embodiments, the bit string corresponding to a hypothetical number p is an unsigned integer ranging from -2^(n-1) to 2^(n-1), k is an integer corresponding to the base bits 335, and e is an unsigned integer corresponding to the exponent bits 337. If the set of mantissa bits 339 is represented as {f1 f2 ... ffs} and f is the value represented by 1.f1 f2 ... ffs (for example, by a one followed by a decimal point followed by the mantissa bits 339), then p can be given by Equation 1 below.

x = 0, if p = 0
x = ±∞, if p = -2^(n-1)
x = sign(p) * useed^k * 2^e * f, for all other p

Equation 1

Another illustrative example of decoding a hypothetical digit string is provided below in conjunction with the hypothetical digit string 0000110111011101 shown in Table 4.

Sign   Base   Exponent   Mantissa
0      0001   101        11011101

Table 4

In Table 4, the hypothetical digit string 0000110111011101 is broken up into its constituent sets of bits (for example, the sign bit 333, the base bits 335, the exponent bits 337, and the mantissa bits 339). Since es = 3 in the hypothetical digit string shown in Table 4 (for example, because there are three exponent bits), useed = 256. Because the sign bit 333 is zero, the value of the numerical expression corresponding to the hypothetical digit string shown in Table 4 is positive. The base bits 335 have a run of three consecutive zeros corresponding to a value of -3 (as described above in conjunction with Table 2). As a result, the scale factor contributed by the base bits 335 is 256^(-3) (for example, useed^k).
The exponent bits 337 represent five (5) as an unsigned integer and therefore contribute an additional scale factor of 2^e = 2^5 = 32. Finally, the mantissa bits 339, given in Table 4 as 11011101, represent two hundred twenty-one (221) as an unsigned integer, so the mantissa bits 339, given as f above, are f = 1 + 221/256. Using these values and Equation 1, the numerical value corresponding to the hypothetical digit string given in Table 4 is 256^(-3) * 2^5 * (1 + 221/256) ≈ 3.55393 * 10^(-6). FIG. 5 is a functional block diagram in the form of a device 500 including acceleration circuitry 520 according to several embodiments of the present disclosure. The acceleration circuitry 520 may include logic circuitry 522 and a memory resource 524, which may be analogous to the logic circuitry 122/222 and memory resources 124/224 described in connection with FIGS. 1 and 2 herein. The logic circuitry 522 and/or the memory resource 524 may be individually regarded as "devices". The acceleration circuitry 520 may be configured to receive, from a host (for example, the host 102/202 illustrated in FIGS. 1 and 2 herein) and/or a controller (for example, the controller 210 illustrated in FIG. 2 herein), a command (for example, a start command) to initiate performance of one or more operations (for example, format conversion operations, arithmetic operations, logical operations, bitwise operations, etc.) on data stored in the memory resource 524. Once the start command has been received by the acceleration circuitry 520, the acceleration circuitry can perform the operations described above in the absence of intervening commands from the host and/or the controller.
For example, the acceleration circuitry 520 may include sufficient processing resources and/or instructions to perform operations on bit strings stored in the memory resource 524 without receiving additional commands from circuitry external to the acceleration circuitry 520.

The logic circuitry 522 may be an arithmetic logic unit (ALU), a state machine, a sequencer, a controller, an instruction set architecture, or another type of control circuitry. As described above, the ALU may include circuitry that performs the operations described above on integer binary numbers, such as bit strings in the hypothetical number format (for example, operations to convert a bit string from a first format (a floating-point format) to a second format (the hypothetical number format), and/or arithmetic operations, logical operations, bitwise operations, etc.). The instruction set architecture (ISA) may include a reduced instruction set computing (RISC) device. In embodiments in which the logic circuitry 522 includes a RISC device, the RISC device may include processing resources that employ an instruction set architecture (ISA) such as the RISC-V ISA; however, embodiments are not limited to the RISC-V ISA, and other processing devices and/or ISAs may be used.

In some embodiments, the logic circuitry 522 may be configured to execute instructions (e.g., instructions stored in the INSTR 525 portion of the memory resource 524) to perform the above operations. For example, the logic circuitry 522 has sufficient processing resources to perform such operations on the data (for example, bit strings) received by the acceleration circuitry 520.

Once an operation is performed by the logic circuitry 522, the resulting bit string may be stored in the memory resource 524 and/or a memory array (e.g., the memory array 230 illustrated in FIG. 2 herein). The stored resulting bit string can be addressed so that it can be used in performing operations.
For example, the bit string may be stored at a specific physical address (which may have a corresponding logical address) in the memory resource 524 and/or a memory array, so that the bit string can be accessed when performing operations.

In some embodiments, the memory resource 524 may be a memory resource such as random access memory (e.g., RAM, SRAM, etc.). However, embodiments are not so limited, and the memory resource 524 may include various registers, caches, buffers, and/or memory arrays (for example, 1T1C, 2T2C, 3T, etc. DRAM arrays). The memory resource 524 may be configured to receive bit strings from, for example, a host (e.g., the host 102/202 illustrated in FIGS. 1 and 2) and/or a memory array (e.g., the memory array 130/230 illustrated in FIGS. 1 and 2). In some embodiments, the memory resource 524 may have a size of approximately 256 kilobytes (KB); however, embodiments are not limited to this particular size, and the memory resource 524 may have a size greater than or less than 256 KB.

The memory resource 524 may be partitioned into one or more addressable memory regions. As shown in FIG. 5, the memory resource 524 can be partitioned into addressable memory regions so that various types of data can be stored therein. For example, one or more memory regions may store instructions ("INSTR") 525 used by the memory resource 524, one or more memory regions may store data 526-1...526-N (e.g., data such as bit strings retrieved from the host and/or the memory array), and/or one or more memory regions may serve as local memory ("LOCAL MEM") 528 of the memory resource 524. Although 20 different memory regions are shown in FIG.
5, it should be understood that the memory resource 524 can be partitioned into any number of different memory regions.

As discussed above, bit strings may be retrieved from the host and/or the memory array in response to messages and/or commands generated by the host, a controller (e.g., the controller 210 illustrated herein in FIG. 2), or the logic circuitry 522. In some embodiments, the commands and/or messages may be processed by the logic circuitry 522. Once a bit string has been received by the acceleration circuitry 520 and stored in the memory resource 524, it can be processed by the logic circuitry 522. Processing the bit string by the logic circuitry 522 may include converting the bit string from a first format to a second format, performing arithmetic operations and/or logical operations on the converted bit string, and/or converting the bit string that has been operated upon back from the second format to the first format.

In a non-limiting neural network training application, the acceleration circuitry 520 can convert a floating-point bit string into an 8-bit hypothetical number with es=0. Compared with some approaches to neural network training that use half-precision 16-bit floating-point bit strings, 8-bit hypothetical number bit strings with es=0 can provide comparable neural network training results two to four times faster than the half-precision 16-bit floating-point bit strings.

A function commonly used when training neural networks is the sigmoid function f(x) (for example, a function that asymptotically approaches zero as x→-∞ and asymptotically approaches one as x→∞). An example of a sigmoid function that can be used in neural network training applications is f(x) = 1/(1 + e^-x), which may require up to one hundred clock cycles to compute using half-precision 16-bit floating-point bit strings.
However, using an 8-bit hypothetical number with es=0, the acceleration circuitry 520 can evaluate the same function by flipping the first bit of the hypothetical number representing x and shifting the result two bits to the right; this operation can take at least an order of magnitude fewer clock signals than evaluating the same function using a half-precision 16-bit floating-point bit string.

In this example, operating the acceleration circuitry 520 to convert the floating-point bit string into an 8-bit hypothetical bit string with es=0, and then operating the acceleration circuitry 520 to evaluate the example sigmoid function on the 8-bit hypothetical bit string, can reduce processing time, resource consumption, and/or storage space compared with approaches that do not include acceleration circuitry 520 configured to perform such conversions and/or subsequent operations. This reduction in processing time, resource consumption, and/or storage space can improve the functioning of the computing device by reducing the number of clock signals used in performing such operations (which can reduce the amount of power consumed by the computing device and/or the amount of time needed to perform such operations) and by freeing up processing and/or memory resources for other tasks and functions.

FIG. 6 is a diagram of a host 602, a hypervisor 642, a plurality of virtual computing instances (VCIs) 641-1, 641-2...641-N, and a hypothetical number arithmetic agent 643 according to several embodiments of the present disclosure. The host 602 may include processing resources 645 (e.g., one or more processors), memory resources 647 (e.g., one or more main memory devices, such as the memory device 204 illustrated in FIGS. 2A and 2B herein, and/or storage memory devices), and/or a network interface 649.
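The sigmoid shortcut described above — flip the first bit of the 8-bit hypothetical number representing x and shift right by two places — can be sketched and checked numerically. The decoder and function names below are illustrative, not part of the disclosed circuitry; an 8-bit, es=0 encoding is assumed:

```python
import math

def decode_posit8(p: int) -> float:
    """Decode an 8-bit hypothetical number with es=0 to a float."""
    if p == 0x00:
        return 0.0
    if p == 0x80:
        return float("nan")                  # the "not a real" bit pattern
    sign = -1.0 if p & 0x80 else 1.0
    if p & 0x80:
        p = (-p) & 0xFF                      # two's complement for negatives
    body = f"{p:08b}"[1:]                    # the 7 bits after the sign bit
    run_bit = body[0]
    run = len(body) - len(body.lstrip(run_bit))
    k = run - 1 if run_bit == "1" else -run  # base (regime) value; es=0
    frac = body[run + 1:]                    # bits after the regime terminator
    f = int(frac, 2) / (1 << len(frac)) if frac else 0.0
    return sign * (2.0 ** k) * (1.0 + f)

def fast_sigmoid(p: int) -> float:
    """Flip the first bit, shift right two places, then decode."""
    return decode_posit8((p ^ 0x80) >> 2)

# 0x40 encodes 1.0; the shortcut gives 0.75, close to 1/(1+e^-1) ≈ 0.731
approx = fast_sigmoid(0x40)
exact = 1.0 / (1.0 + math.exp(-decode_posit8(0x40)))
```

The shortcut is exact at x=0 (it yields 0.5) and stays within a few hundredths of 1/(1+e^-x) nearby, which is why it can substitute for the full evaluation in training workloads.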
The host 602 may be included in a multi-user network (such as the multi-user network 201 illustrated in FIG. 2B). Multi-user networks can extend virtualization concepts, such as the abstraction, aggregation, and automation of data center resources and services, to provide information technology as a service (ITaaS). In a multi-user network, infrastructure (such as networking, processing, and security) can be virtualized and delivered as a service. Multi-user networks may include software-defined networking and/or software-defined storage. In some embodiments, the components of the multi-user network may be provisioned, operated, and/or managed through an application programming interface (API).

The host 602 may incorporate a hypervisor 642 that can execute a number of VCIs 641-1, 641-2,...641-N (collectively referred to herein as "VCIs 641"). The VCIs may be provided with processing resources 645 and/or memory resources 647, and may communicate via the network interface 649. The processing resources 645 and memory resources 647 provided to the VCIs 641 may be local and/or remote to the host 602 (for example, the VCIs 641 may ultimately be executed by hardware that may not be physically associated with the VCIs 641). For example, in a multi-user network, the VCIs 641 may be provided with resources that are generally available to the multi-user network and are not tied to any specific hardware device. As an example, the memory resources 647 may include volatile and/or non-volatile memory available to the VCIs 641. The VCIs 641 can be moved to different hosts (not specifically illustrated) so that a different hypervisor can manage the VCIs 641. In some embodiments, the host 602 may be connected to (e.g., in communication with) a hypothetical number arithmetic agent 643, which may be deployed on a VCI 641 or a container (not explicitly shown).

The VCIs 641 may include one or more containers, which may have containerized workloads running thereon.
The containerized workloads may correspond to one or more applications, or portions of applications, executed by the VCIs 641 and/or the host 602. The applications can be configured to perform certain tasks and/or functions for the VCIs 641 and/or the host 602, such as converting bit strings between various formats and performing arithmetic and/or logical operations using hypothetical number bit strings. By using multiple containerized workloads to execute an application, the scalability and/or portability of the application can be improved compared to a monolithic approach to the application.

The hypothetical number arithmetic agent 643 can be configured to cause operations to be performed, such as operations that convert bit strings between various formats and/or operations that perform arithmetic and/or logical operations on bit strings, as described in more detail herein. In some embodiments, the hypothetical number arithmetic agent 643 may be deployed on (for example, may run on) one or more of the host 602 and/or the VCIs 641.

In some embodiments, the hypothetical number arithmetic agent 643 may include a combination of software and hardware, or the hypothetical number arithmetic agent 643 may include software and may be provisioned by the processing resource 645. An example of the hypothetical number arithmetic agent 643 is illustrated and described in more detail herein with respect to FIGS. 7A and 7B. In some embodiments, the operations performed by the hypothetical number arithmetic agent 643 may be scheduled by a container scheduling agent such as DOCKER (for example, the scheduling agent 752 illustrated in FIGS. 7A and 7B herein).

The hypothetical number arithmetic agent 643 can be deployed in a multi-user network (such as the multi-user network 201 illustrated in FIG. 2B herein).
The hypothetical number arithmetic agent 643 may be configured to receive a parameter corresponding to at least one of an arithmetic operation and a logical operation to be performed using one or more hypothetical bit strings. The parameter may be at least one of the parameters described above in conjunction with Table 1. For example, the parameters may include a processing time parameter, a parameter corresponding to an amount of processing resources for the operation using the one or more hypothetical bit strings, a parameter corresponding to a bit length of the one or more hypothetical bit strings, a parameter corresponding to a quantity of exponent bits of the one or more hypothetical bit strings, or a combination thereof.

The hypothetical number arithmetic agent 643 may be configured to allocate, based on the parameters, computing resources available to the multi-user network for performing the arithmetic operations and/or logical operations using the one or more hypothetical bit strings. For example, the hypothetical number arithmetic agent 643 can be configured to allocate an amount of time available for performing the arithmetic and/or logical operations, an amount of processing resources available for performing the arithmetic and/or logical operations, the bit length of the hypothetical bit strings that will be used in performing the operations, and/or the exponent bit length of the hypothetical bit strings to be used in the arithmetic and/or logical operations.

In some embodiments, the hypothetical number arithmetic agent 643 may receive a request to initiate performance of the arithmetic and/or logical operations using the one or more hypothetical bit strings, and/or cause the arithmetic operations and/or logical operations to be performed using the one or more hypothetical bit strings. For example, the hypothetical number arithmetic agent 643 can access circuitry (such as the logic circuitry 122 illustrated in FIG. 1, or the FPGA 221 and/or the ASIC 223 illustrated in FIG. 2B) to perform the arithmetic operations and/or logical operations. The request and/or the parameters may be received from a computing node (such as the computing node 207 illustrated in FIG. 2B herein).

If the bit strings to be used in performing the arithmetic and/or logical operations are stored in a repository of the multi-user network (for example, the memory device 204 or the memory resource 247 illustrated in FIG. 2B, or another data storage area or data repository associated with the multi-user network), then the hypothetical number arithmetic agent 643 can be configured to retrieve the one or more hypothetical bit strings from the repository accessible to the multi-user network before causing the arithmetic and/or logical operations to be performed using the one or more hypothetical bit strings.

In some embodiments, the bit strings may be stored in a format other than the hypothetical number format. For example, the bit strings can be stored in a floating-point format. If a bit string requested for performing the arithmetic and/or logical operations is stored in a format other than the hypothetical number format, then the hypothetical number arithmetic agent 643 can be configured to perform (or cause, for example, the logic circuitry 122 illustrated in FIG. 1, or the FPGA 221 and/or the ASIC 223 illustrated in FIG. 2B, to perform) an operation to convert the bit string into the hypothetical number format before the arithmetic operations and/or logical operations are performed.

FIG. 7A is a diagram of a virtual computing cluster (VCC) 751 according to several embodiments of the present disclosure. The VCC 751, which may be similar to the VCC 251 illustrated in FIG. 2B, may be deployed in a multi-user network such as the multi-user network 201 illustrated herein in FIG. 2B. As shown in FIG.
7A, the cluster 751 (for example, a VCC) may include multiple virtual computing instances (VCIs) 741, which may be provided with a pool of computing resources (for example, the shared computing resource pool 246 illustrated in FIG. 2B) and can ultimately be executed by hardware. In some embodiments, at least a first VCI (e.g., VCI 741-1) is deployed on a first hypervisor (e.g., hypervisor 742-1) of the VCC 751 and at least a second VCI (e.g., VCI 741-2) is deployed on a second hypervisor (e.g., hypervisor 742-M) of the VCC 751. Although not explicitly shown, in some embodiments, the VCIs 741 may include containers running thereon.

The VCIs 741 may include corresponding hypothetical number arithmetic agents 743. For example, a first hypothetical number arithmetic agent 743-1 can be deployed on the first VCI 741-1, a second hypothetical number arithmetic agent 743-2 can be deployed on the second VCI 741-2, and an Nth hypothetical number arithmetic agent 743-N can be deployed on an Nth VCI 741-N. As described above, the hypothetical number arithmetic agents 743 can be configured to perform or cause operations such as converting bit strings between various formats, and arithmetic and/or logical operations using the converted (e.g., hypothetical number) bit strings. In some embodiments, the hypothetical number arithmetic agent may be provided as a hypothetical number arithmetic engine and/or a hypothetical number arithmetic module, as described in more detail herein in conjunction with FIGS. 8 and 9.

The scheduling agent 752 may be provided with computing resources and may be configured to coordinate the deployment of the VCIs 741 and/or containers within the VCC 751. In some embodiments, the scheduling agent 752 may be a container scheduler such as DOCKER. The scheduling agent 752 may determine when to deploy a VCI 741 (or a container) to run a hypothetical number arithmetic agent 743 in response to a request received by the VCC 751 to perform an operation using a hypothetical bit string.
For example, if a request for a specific arithmetic and/or logical operation using a hypothetical bit string is received, the scheduling agent 752 may deploy a VCI (e.g., VCI 741-1) and/or a container to run a hypothetical number arithmetic agent (e.g., the hypothetical number arithmetic agent 743-1) to facilitate performance of the requested operation.

FIG. 7B is another diagram of the virtual computing cluster 751 according to several embodiments of the present disclosure. The VCC 751 may be deployed in a multi-user network (such as the multi-user network 201 illustrated in FIG. 2B herein). As shown in FIG. 7B, the cluster 751 (e.g., a VCC) may include multiple virtual computing instances (VCIs) 741, which may be provided with a pool of computing resources (e.g., the processing resources 645 and/or memory resources 647 described in FIG. 6 herein) and can ultimately be executed by hardware. In some embodiments, at least a first VCI (e.g., VCI 741-1) is deployed on a first hypervisor (e.g., hypervisor 742-1) of the VCC 751 and at least a second VCI (e.g., VCI 741-2) is deployed on a second hypervisor (e.g., hypervisor 742-M) of the VCC 751. Although not explicitly shown, in some embodiments, the VCIs 741 may include containers.

The hypervisors 742-1...742-M may include corresponding hypothetical number arithmetic agents 743. For example, a first hypothetical number arithmetic agent 743-1 may be deployed on the first hypervisor 742-1, and an Mth hypothetical number arithmetic agent 743-M may be deployed on the Mth hypervisor 742-M. As described above, the hypothetical number arithmetic agents 743 can be configured to perform or cause operations such as converting bit strings between various formats, and arithmetic and/or logical operations using the converted (e.g., hypothetical number) bit strings.
The hypothetical number arithmetic agents are described in more detail herein in conjunction with FIGS. 8 and 9.

In some embodiments, the hypothetical number arithmetic agents 743 may be provided with computing resources and may ultimately be executed by hardware. For example, the hypothetical number arithmetic agents 743 may be provided with computing resources (e.g., processing resources, memory resources, etc.) available to a multi-user network (such as the multi-user network 201 illustrated in FIG. 2B herein). As described in more detail herein, due to the dynamic nature of the multi-user network, the hypothetical number arithmetic agents 743 can be deployed on the VCIs 741 (as shown in FIG. 7A), or the hypothetical number arithmetic agents 743 can be deployed on the hypervisors 742, as shown in FIG. 7B. However, regardless of where the hypothetical number arithmetic agents 743 are deployed, they can ultimately be executed by hardware available to the multi-user network or the VCC 751.

The hypothetical number arithmetic agent 743 may be configured to receive a request to perform at least one of an arithmetic operation and a logical operation between a first hypothetical bit string operand and a second hypothetical bit string operand, as described above. In some embodiments, the hypothetical number arithmetic agent 743 may be configured to allocate an amount of computing resources usable to perform the arithmetic operations and/or logical operations between the first hypothetical bit string operand and the second hypothetical bit string operand, and/or to cause the arithmetic operations and/or logical operations to be performed between the first hypothetical bit string operand and the second hypothetical bit string operand.
The amount of computing resources allocated by the hypothetical number arithmetic agent 743 for performing the operations may be based on various parameters, such as the parameters described in conjunction with Table 1 above.

In some embodiments, circuitry (for example, the logic circuitry 122 illustrated in FIG. 1, the acceleration circuitry 220 illustrated in FIG. 2A, or the FPGA 221 and/or the ASIC 223 illustrated in FIG. 2B) may be communicatively coupled to the VCC 751, as shown in FIG. 2B. For example, since the VCC 751 can be deployed in a multi-user network (such as the multi-user network 201 illustrated in FIG. 2B), the circuitry can be accessible to the VCC 751. In some embodiments, the hypothetical number arithmetic agent 743 may cause the first hypothetical bit string operand and the second hypothetical bit string operand to be loaded into the logic circuitry, and the circuitry may be configured to perform the arithmetic operations and/or logical operations between the first hypothetical bit string operand and the second hypothetical bit string operand, as described above.

If the bit strings are not available in the hypothetical number format (for example, if the requested bit strings are stored in, for example, a floating-point format), then the hypothetical number arithmetic agent 743 can access the circuitry and cause a first floating-point bit string and a second floating-point bit string to be loaded into the circuitry. However, embodiments are not limited to floating-point bit strings, and the bit strings can be in other numerical formats, such as a fixed-width format.

Once the bit strings are loaded into the circuitry, the circuitry can convert the first floating-point bit string into the hypothetical number format to generate the first hypothetical bit string operand, and convert the second floating-point bit string into the hypothetical number format to generate the second hypothetical bit string operand. After converting the floating-point bit strings into the hypothetical number format, the circuitry can perform the arithmetic operations and/or logical operations between the first hypothetical bit string operand and the second hypothetical bit string operand, as described herein in connection with FIGS. 1, 2A to 2B, and 5.

As described above, the hypothetical number arithmetic agent 743 can receive various parameters and perform (or cause) operations to convert bit strings between various formats, and to perform arithmetic and/or logical operations using the bit strings. For example, the hypothetical number arithmetic agent 743 may receive the parameters as part of a request command, received from a computing node (such as the computing node 207 illustrated in FIG. 2B herein), to perform an operation.

For example, the hypothetical number arithmetic agent 743 may receive a processing resource parameter corresponding to performing the arithmetic operations and/or logical operations, and allocate, based at least in part on the processing resource parameter, an amount of computing resources available for performing the arithmetic operations and/or logical operations.
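The convert-then-operate flow described above can be illustrated in software. The sketch below assumes an 8-bit, es=0 hypothetical number format and finds the nearest encoding by exhaustive search over all 256 codes — a simplification chosen for clarity, not how the disclosed circuitry performs the conversion:

```python
def decode(p: int) -> float:
    """Decode an 8-bit hypothetical number (es=0); 0x80 is excluded elsewhere."""
    if p == 0x00:
        return 0.0
    sign = -1.0 if p & 0x80 else 1.0
    mag = (-p) & 0xFF if p & 0x80 else p     # two's complement for negatives
    body = f"{mag:08b}"[1:]                  # the 7 bits after the sign bit
    run_bit = body[0]
    run = len(body) - len(body.lstrip(run_bit))
    k = run - 1 if run_bit == "1" else -run  # base (regime) value; es=0
    frac = body[run + 1:]
    f = int(frac, 2) / (1 << len(frac)) if frac else 0.0
    return sign * (2.0 ** k) * (1.0 + f)

def encode_nearest(x: float) -> int:
    """Convert a float to the nearest 8-bit hypothetical number code."""
    return min((p for p in range(256) if p != 0x80),
               key=lambda p: abs(decode(p) - x))

def posit_multiply(a: float, b: float) -> float:
    """Convert both floating-point operands, multiply, and re-encode the result."""
    pa, pb = encode_nearest(a), encode_nearest(b)
    return decode(encode_nearest(decode(pa) * decode(pb)))

product = posit_multiply(0.5, 1.5)  # both operands are exactly representable
</n>```

The brute-force rounding step stands in for the format-conversion operation; the multiply on decoded values stands in for the arithmetic operation the circuitry would perform directly on the hypothetical bit string operands.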
In another example, the hypothetical number arithmetic agent 743 may receive a processing time parameter corresponding to performing the arithmetic operations and/or logical operations, and allocate, based at least in part on the processing time parameter, an amount of time available for performing the arithmetic operations and/or logical operations.

In yet another example, the hypothetical number arithmetic agent 743 may receive a hypothetical number precision parameter corresponding to at least one of performing the arithmetic operations and the logical operations, and set, based at least in part on the hypothetical number precision parameter, the bit lengths of the first hypothetical bit string operand and the second hypothetical bit string operand, and/or set, based at least in part on the hypothetical number precision parameter, the exponent bit lengths of the first hypothetical bit string operand and the second hypothetical bit string operand.

FIG. 8 is a diagram of a device 853 according to several embodiments of the present disclosure. The device 853 may include a database 854, a subsystem 855, and/or a number of engines, such as a hypothetical number arithmetic engine 856, and may communicate with the database 854 via a communication link. The device 853 may include additional or fewer engines than illustrated to perform the various functions described herein. The device 853 may represent program instructions and/or hardware of a machine (for example, the machine 957 referenced in FIG. 9). As used herein, an "engine" may include program instructions and/or hardware, but at least includes hardware. Hardware is a physical component of a machine that enables it to perform a function. Examples of hardware may include processing resources, memory resources, logic gates, and so on. In some embodiments, the device 853 may be similar to the hypothetical number arithmetic agent 643 illustrated and described herein in conjunction with FIG.
6.

The engines (e.g., 856) may include a combination of hardware and program instructions configured to perform the several functions described herein. The program instructions (e.g., software, firmware, etc.) can be stored in a memory resource (e.g., a machine-readable medium) as well as in hard-wired programs (e.g., logic). Hard-wired program instructions (e.g., logic) can be regarded as both program instructions and hardware.

In some embodiments, the hypothetical number arithmetic engine 856 may include a combination of hardware and program instructions that can be configured to perform the operations described above in conjunction with the hypothetical number arithmetic agents 643 and/or 743 of FIGS. 6 and 7A to 7B.

FIG. 9 is a diagram of a machine 957 according to several embodiments of the present disclosure. The machine 957 may utilize software, hardware, firmware, and/or logic to perform a number of functions. The machine 957 may be a combination of hardware and program instructions configured to perform a number of functions (e.g., actions). The hardware, for example, may include a number of processing resources 945 and a number of memory resources 947, such as a machine-readable medium (MRM) or other memory resources 947. The memory resources 947 may be internal and/or external to the machine 957 (e.g., the machine 957 may include internal memory resources and have access to external memory resources). In some embodiments, the machine 957 may be a virtual machine, or the machine 957 may be a server. The program instructions (e.g., machine-readable instructions (MRI)) may include instructions stored on the MRM to implement particular functions (e.g., actions related to logic circuitry in a multi-user network, such as converting bit strings between various formats in a multi-user network, performing arithmetic and/or logical operations on the converted bit strings, etc.). The set of MRI may be executable by one or more of the processing resources 945.
The memory resources 947 may be coupled to the machine 957 in a wired and/or wireless manner. For example, the memory resources 947 may be internal memory, portable memory, a portable disk, and/or memory associated with another resource, for example, enabling the MRI to be transferred and/or executed across a network such as the Internet. As used herein, a "module" may include program instructions and/or hardware, but at least includes program instructions.

The memory resources 947 may be non-transitory and may include volatile and/or non-volatile memory. Volatile memory may include memory that depends on power to store information, such as various types of dynamic random access memory (DRAM), among others. Non-volatile memory may include memory that does not depend on power to store information. Examples of non-volatile memory may include solid-state media such as flash memory, electrically erasable programmable read-only memory (EEPROM), phase-change random access memory (PCRAM), magnetic memory, optical memory, and/or solid-state drives (SSD), etc., as well as other types of machine-readable media.

The processing resources 945 may be coupled to the memory resources 947 via a communication path 958. The communication path 958 may be local or remote to the machine 957. An example of a local communication path 958 may include an electronic bus internal to the machine, where the memory resources 947 communicate with the processing resources 945 via the electronic bus. Examples of such electronic buses may include industry standard architecture (ISA), peripheral component interconnect (PCI), advanced technology attachment (ATA), small computer system interface (SCSI), universal serial bus (USB), and other types of electronic buses and variants thereof. The communication path 958 may be such that the memory resources 947 are remote from the processing resources 945, such as in a network connection between the memory resources 947 and the processing resources 945.
That is, the communication path 958 may be a network connection. Examples of such a network connection may include a local area network (LAN), a wide area network (WAN), a personal area network (PAN), the Internet, and so on.

As shown in FIG. 9, the MRI stored in the memory resources 947 can be segmented into a number of modules (e.g., 959), which can perform a number of functions when executed by the processing resources 945. As used herein, a module includes a set of instructions included to perform a particular task or action. The module 959 may be a sub-module of another module. Examples are not limited to the specific module 959 illustrated in FIG. 9.

The module 959 may include program instructions and/or a combination of hardware and program instructions that, when executed by the processing resources 945, can function as a corresponding engine as described with respect to FIG. 8. For example, the hypothetical number arithmetic module 959 may include program instructions and/or a combination of hardware and program instructions that, when executed by the processing resources 945, can function as the hypothetical number arithmetic engine 856 illustrated and described in conjunction with FIG. 8.

FIG. 10 is a flowchart representing an example method 1060 involving arithmetic and logical operations in a multi-user network according to several embodiments of the present disclosure. At block 1062, the method 1060 may include receiving a request to perform an arithmetic operation and/or a logical operation between a first operand and a second operand. As shown at block 1062 of the method 1060, the request may include a parameter corresponding to an amount of shared computing resources to be allocated for performing the arithmetic operation and/or logical operation.
Receiving the request to perform the arithmetic operation and/or the logical operation between the first operand and the second operand may further include receiving a request to perform the arithmetic operation and/or the logical operation in which at least one of the first operand and the second operand is a hypothetical number bit string operand. For example, in some embodiments, at least one of the first operand and the second operand may be a hypothetical number bit string operand.

As described above in connection with Table 1, the parameters may include parameters corresponding to an amount of time allowed for performance of at least one of the arithmetic operation and the logical operation, an amount of processing resources allowed for performance of at least one of the arithmetic operation and the logical operation, and/or a first bit string length and a first exponent bit length of the first bit string operand and a second bit string length and a second exponent bit length of the second bit string operand.

In some embodiments, the method 1060 may further include causing at least one of the arithmetic operation and the logical operation to be performed within the amount of time allowed, and/or causing at least one of the arithmetic operation and the logical operation to be performed using the allowed amount of processing resources. Embodiments are not so limited, however, and in some embodiments the method 1060 may further include setting, based on the parameters, the first bit string length and the first exponent bit length of the first bit string operand, setting, based on the parameters, the second bit string length and the second exponent bit length of the second bit string operand, and/or performing at least one of the arithmetic operation and the logical operation using the first bit string operand and the second bit string operand.

At block 1064, the method 1060 may include allocating, based at least in part on the parameters, an amount of shared computing resources to be used in performance of the arithmetic operation and/or the logical operation. For example, the method 1060 may include allocating an amount of processing resources for performance of the arithmetic operation and/or the logical operation, as described above in connection with Table 1.

At block 1066, the method 1060 may further include causing the arithmetic operation and/or the logical operation to be performed using the allocated amount of shared computing resources. In some embodiments, causing the arithmetic operation and/or the logical operation to be performed using the allocated amount of shared computing resources may further include causing logic circuitry communicatively coupled to the shared computing resources to perform the arithmetic operation and/or the logical operation. The logic circuitry may be analogous to the logic circuitry 122 described herein in connection with FIG. 1.

The method 1060 may further include generating a graphical user interface to be displayed by a computing node connected to a shared computing resource pool containing the amount of shared computing resources, and/or receiving the request via input provided to the graphical user interface. The graphical user interface may contain prompts and/or selectable items that allow a user to select the parameters for performance of the operations described herein.
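The flow of method 1060 — receive a parameterized request, allocate shared computing resources based on the parameters, then perform the operation with the allocated resources — can be sketched as below. This is a minimal illustration only; the class and field names (`OperationRequest`, `SharedResourcePool`, `cores`, `time_limit_ms`) are assumptions, not part of the disclosed interface.

```python
# Minimal sketch of method 1060 (blocks 1062-1066). All names are
# illustrative assumptions; real implementations would dispatch to
# logic circuitry rather than Python lambdas.
from dataclasses import dataclass


@dataclass
class OperationRequest:
    op: str            # e.g. "add", "mul", "and"
    operand_a: int
    operand_b: int
    cores: int         # amount of shared computing resources requested
    time_limit_ms: int # amount of time allowed for the operation
    bit_length: int    # bit string length of the operands
    exponent_bits: int # exponent bit length of the operands


class SharedResourcePool:
    def __init__(self, total_cores: int):
        self.free_cores = total_cores

    def handle(self, req: OperationRequest) -> int:
        # Block 1064: allocate resources based on the request parameters.
        if req.cores > self.free_cores:
            raise RuntimeError("insufficient shared computing resources")
        self.free_cores -= req.cores
        try:
            # Block 1066: perform the operation using the allocation.
            ops = {"add": lambda a, b: a + b,
                   "mul": lambda a, b: a * b,
                   "and": lambda a, b: a & b}
            return ops[req.op](req.operand_a, req.operand_b)
        finally:
            self.free_cores += req.cores  # release the allocation


pool = SharedResourcePool(total_cores=8)
result = pool.handle(OperationRequest("add", 3, 4, cores=2,
                                      time_limit_ms=10,
                                      bit_length=16, exponent_bits=2))
print(result)  # 7
```

The `bit_length`/`exponent_bits` fields stand in for the operand-format parameters of Table 1; a real system would use them to configure the bit string operands before the operation runs.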
For example, the graphical user interface may include prompts and/or selectable items corresponding to the amount of processing resources to be used in performance of the operation (e.g., a number of computing cores), the amount of time in which the operation is to be performed (e.g., a processing time parameter), the bit length of the operands to be used in the operation, and/or the number of exponent bits of the operands to be used in the operation.

Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.

In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure must use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
A system having a processing device and a controller, operatively coupled to a memory subsystem via a communication channel, to: store information identifying an amount of available capacity of a buffer of the memory subsystem; transmit, through the communication channel to the memory subsystem, one or more write commands to store data in memory components of the memory subsystem, where the memory subsystem queues the one or more write commands in the buffer; update the information by deducting, from the amount of available capacity, an amount of buffer capacity used by the one or more write commands, to calculate a current amount of available capacity of the buffer; and determine whether to generate an information request to the memory subsystem based at least in part on the current amount of available capacity.
1. A host system, comprising:
a processing device; and
a controller, operatively coupled to a memory subsystem through a communication channel, to:
store information identifying an amount of available capacity of a buffer of the memory subsystem;
transmit, through the communication channel to the memory subsystem, one or more write commands to store data in memory components of the memory subsystem, wherein the memory subsystem queues the one or more write commands in the buffer;
update the information by deducting, from the amount of available capacity, an amount of buffer capacity used by the one or more write commands, to calculate a current amount of available capacity of the buffer; and
determine whether to generate an information request to the memory subsystem based at least in part on the current amount of available capacity.

2. The host system of claim 1, wherein the controller is further to:
determine whether to generate the information request to the memory subsystem based at least in part on whether the memory subsystem has responded to read commands transmitted from the host system to the memory subsystem.

3. The host system of claim 2, wherein the controller is further to:
postpone generation of the information request until the current amount of available capacity is below a threshold.

4. The host system of claim 3, wherein the controller is further to:
postpone generation of the information request until the current amount of available capacity is below the threshold and the memory subsystem has no read command for which the memory subsystem has not yet provided a corresponding response to the host system.

5. The host system of claim 1, wherein the controller is further to:
generate the information request based at least in part on a time interval.

6. The host system of claim 5, wherein the controller is further to:
postpone generation of the information request until a time elapsed after a first communication, between the host system and the memory subsystem, regarding available capacity of the buffer is longer than the time interval.

7. The host system of claim 6, wherein the controller is further to:
update the time interval based on a time period between the first communication and a second communication, between the host system and the memory subsystem, regarding available capacity of the buffer.

8. The host system of claim 7, wherein the time interval is updated further based on an amount of available capacity of the buffer allocated to the host system in the first communication.

9. The host system of claim 1, wherein the processing device is further to:
predict an amount of buffer capacity, available in the memory subsystem, to be allocated to the host system for transmitting write commands; and
postpone generation of the information request until the predicted amount is above a threshold.

10. The host system of claim 1, wherein the memory components include non-volatile memory; and the communication channel between the host system and the memory subsystem includes:
a command bus to transmit the one or more write commands;
a data bus to transmit the data of the one or more write commands; and
a transaction bus to transmit a response signal to the information request from the memory subsystem to the host system.

11. The host system of claim 10, wherein the write commands, the information request, and the response signal are in accordance with a communication protocol for non-volatile dual in-line memory modules.

12. A method, comprising:
storing, in a host system coupled to a memory subsystem through a communication channel, information identifying an amount of available capacity of a buffer of the memory subsystem;
transmitting, through the communication channel to the memory subsystem, one or more write commands to store data in memory components of the memory subsystem, wherein the memory subsystem queues the one or more write commands in the buffer;
updating the information by deducting, from the amount of available capacity, an amount of buffer capacity used by the one or more write commands, to calculate a current amount of available capacity of the buffer; and
controlling generation of an information request to the memory subsystem based at least in part on a time interval and the current amount of available capacity.

13. The method of claim 12, further comprising:
postponing generation of the information request until a time elapsed after a previous communication between the host system and the memory subsystem is longer than the time interval and the current amount of available capacity is below a threshold.

14. The method of claim 13, further comprising:
updating the time interval based on a time period between the previous communication and a communication, between the host system and the memory subsystem, regarding available capacity of the buffer prior to the previous communication.

15. The method of claim 14, wherein the time interval is updated further based on an amount of available capacity of the buffer allocated to the host system in the previous communication.

16. The method of claim 12, further comprising:
predicting an amount of buffer capacity, available in the memory subsystem, to be allocated to the host system for transmitting write commands; and
postponing generation of the information request until the predicted amount is above a threshold.

17. The method of claim 12, wherein the write commands and the information request are in accordance with a communication protocol for non-volatile dual in-line memory modules; and the communication channel between the host system and the memory subsystem includes:
a command bus to transmit the one or more write commands;
a data bus to transmit the data of the one or more write commands; and
a transaction bus to transmit a response signal to the information request from the memory subsystem to the host system.

18. A non-transitory computer-readable storage medium storing instructions which, when executed by a processing device, cause the processing device to:
store, in a host system coupled to a memory subsystem through a communication channel, information identifying an amount of available capacity of a buffer of the memory subsystem, wherein the host system transmits, through the communication channel to the memory subsystem, one or more write commands to store data in memory components of the memory subsystem, and wherein the memory subsystem queues the one or more write commands in the buffer;
update the information by deducting, from the amount of available capacity, an amount of buffer capacity used by the one or more write commands, to calculate a current amount of available capacity of the buffer;
predict an amount of buffer capacity, available in the memory subsystem, to be allocated to the host system for transmitting write commands; and
postpone generation of an information request to the memory subsystem until the predicted amount is above a threshold.

19. The non-transitory computer-readable storage medium of claim 18, wherein the instructions, when executed by the processing device, further cause the processing device to:
calculate a time interval based on the predicted amount being above the threshold;
wherein generation of the information request is postponed until a time elapsed after a previous communication between the host system and the memory subsystem is longer than the time interval.

20. The non-transitory computer-readable storage medium of claim 18, wherein the instructions, when executed by the processing device, further cause the processing device to:
update the time interval based on an amount of available capacity of the buffer allocated to the host system in a previous communication.
Optimize Information Requests to a Memory System

Related Application

This application claims the benefit of the filing date of U.S. Patent Application No. 16/058,645, entitled "Optimize Information Requests to a Memory System" and filed on August 8, 2018, the entire disclosure of which is hereby incorporated herein by reference.

Technical Field

Embodiments of the present disclosure relate generally to memory systems and, more specifically, to optimizing the frequency of information requests transmitted from a host system to a memory system.

Background

A memory subsystem may be a storage system, such as a solid-state drive (SSD), or a memory module, such as a non-volatile dual in-line memory module (NVDIMM), and may include one or more memory components that store data. The memory components may be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize the memory subsystem to store data at the memory components and to retrieve data from the memory components.

Standardized communication protocols allow the host system to communicate with the memory subsystem to store data and retrieve data. For example, the JEDEC (Joint Electron Device Engineering Council) Solid State Technology Association has proposed a "DDR5 NVDIMM-P Bus Protocol" for communications between a host system and NVDIMM-P memory modules.
This protocol is described in detail in JEDEC Committee Letter Ballot, Committee: JC-45.6, Committee Item Number 2261.13D, Subject: "Proposed DDR5 NVDIMM-P Bus Protocol", which is incorporated herein by reference in its entirety.

Brief Description of the Drawings

The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.

FIG. 1 illustrates an example computing system having a memory subsystem in accordance with some embodiments of the present disclosure.

FIG. 2 illustrates an example computing system that includes an information request manager in accordance with some embodiments of the present disclosure.

FIG. 3 is a flow diagram of an example method to optimize information requests from a host system to a memory subsystem in accordance with some embodiments of the present disclosure.

FIG. 4 is a flow diagram of a detailed example method to optimize information requests in accordance with some embodiments of the present disclosure.

FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure can operate.

Detailed Description

At least some aspects of the present disclosure are directed to optimizing information requests transmitted from a host system to a memory subsystem in order to reduce communication traffic and/or reduce power consumption. Alternatively, such optimization of requests can be used to improve responsiveness, or to achieve a trade-off between responsiveness and traffic volume. A memory subsystem is also hereinafter referred to as a "memory device". An example of a memory subsystem is a memory module connected to a central processing unit (CPU) via a memory bus, such as a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), a non-volatile dual in-line memory module (NVDIMM), etc. Another example of a memory subsystem is a storage system, such as a solid-state drive (SSD).
In some embodiments, the memory subsystem is a hybrid memory/storage subsystem that provides both memory functions and storage functions. In general, a host system can utilize a memory subsystem that includes one or more memory components. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.

In some computer systems (for example, a host system and a memory subsystem connected via an NVDIMM-P bus), a write command used to store data in the memory subsystem can be buffered in the memory subsystem for execution within an unspecified period of time. The host system may issue a command to request information from the memory subsystem, including information indicating the available capacity of the memory subsystem to accept new write commands and their data. Such information indicating available capacity may be referred to as write credits. In some instances, such as when the memory subsystem is currently expected to have no capacity that can be allocated to the host system for sending new write commands, sending such a request may not produce a useful result. Sending requests that generate responses unlikely to be useful can lead to inefficient usage of communication resources and/or increased power consumption.

At least some aspects of the present disclosure address the above and other deficiencies by adjusting the frequency of information requests made by the host system based on statistical data on the timing of past requests that produced useful results.

FIG. 1 illustrates an example computing system 100 having a memory subsystem 110 in accordance with some embodiments of the present disclosure. The memory subsystem 110 can include media, such as memory components 109A to 109N. The memory components 109A to 109N can be volatile memory components, non-volatile memory components, or a combination of such components.
In some embodiments, the memory subsystem 110 is a memory module. Examples of memory modules include a DIMM, an NVDIMM, and an NVDIMM-P. In some embodiments, the memory subsystem is a storage system. An example of a storage system is an SSD. In some embodiments, the memory subsystem 110 is a hybrid memory/storage subsystem. In general, the computing environment can include a host system 120 that uses the memory subsystem 110. For example, the host system 120 can write data to the memory subsystem 110 and read data from the memory subsystem 110.

The host system 120 can be a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, or such a computing device that includes a memory and a processing device. The host system 120 can include or be coupled to the memory subsystem 110 so that the host system 120 can read data from or write data to the memory subsystem 110. The host system 120 can be coupled to the memory subsystem 110 via a physical host interface. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, serial attached SCSI (SAS), a double data rate (DDR) memory bus, etc. The physical host interface can be used to transmit data between the host system 120 and the memory subsystem 110.
When the memory subsystem 110 is coupled with the host system 120 via a PCIe interface, the host system 120 can further utilize an NVM Express (NVMe) interface to access the memory components 109A to 109N. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120. FIG. 1 illustrates a memory subsystem 110 as an example. In general, the host system 120 can access multiple memory subsystems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.

The host system 120 includes a processing device 118 and a controller 116. The processing device 118 of the host system 120 can be, for example, a microprocessor, a central processing unit (CPU), a processing core of a processor, an execution unit, etc. In some instances, the controller 116 can be referred to as a memory controller, a memory management unit, and/or an initiator. In one example, the controller 116 controls the communications over a bus coupled between the host system 120 and the memory subsystem 110.

In general, the controller 116 can send commands or requests to the memory subsystem 110 for desired access to the memory components 109A to 109N. The controller 116 can further include interface circuitry to communicate with the memory subsystem 110. The interface circuitry can convert responses received from the memory subsystem 110 into information for the host system 120.

The controller 116 of the host system 120 can communicate with the controller 115 of the memory subsystem 110 to perform operations such as reading data, writing data, or erasing data at the memory components 109A to 109N, and other such operations. In some instances, the controller 116 is integrated within the same package as the processing device 118. In other instances, the controller 116 is separate from the package of the processing device 118.
The controller 116 and/or the processing device 118 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, a cache memory, or a combination thereof. The controller 116 and/or the processing device 118 can be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

The memory components 109A to 109N can include any combination of different types of non-volatile memory components and/or volatile memory components. An example of non-volatile memory components includes NAND-type flash memory. Each of the memory components 109A to 109N can include one or more arrays of memory cells, such as single-level cells (SLCs) or multi-level cells (MLCs) (e.g., triple-level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component can include both an SLC portion and an MLC portion of memory cells. Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system 120. Although non-volatile memory components such as NAND-type flash memory are described, the memory components 109A to 109N can be based on any other type of memory, such as a volatile memory.
In some embodiments, the memory components 109A to 109N can be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magnetoresistive random access memory (MRAM), spin transfer torque (STT)-MRAM, ferroelectric transistor random access memory (FeTRAM), ferroelectric RAM (FeRAM), conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), NOR flash memory, electrically erasable programmable read-only memory (EEPROM), nanowire-based non-volatile memory, memory that incorporates memristor technology, and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of the memory components 109A to 109N can be grouped as memory pages or data blocks, which can refer to a unit of the memory component used to store data.

The controller 115 of the memory subsystem 110 can communicate with the memory components 109A to 109N to perform operations such as reading data, writing data, or erasing data at the memory components 109A to 109N, and other such operations (e.g., in response to commands scheduled on a command bus by the controller 116). The controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof.
The controller 115 can be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The controller 115 can include a processing device 117 (processor) configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120. In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing microcode. While the example memory subsystem 110 in FIG. 1 has been illustrated as including the controller 115, in another embodiment of the present disclosure, the memory subsystem 110 may not include a controller 115 and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, the controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 109A to 109N. The controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory components 109A to 109N.
The controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory components 109A to 109N, as well as convert responses associated with the memory components 109A to 109N into information for the host system 120.

The memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 115 and decode the address to access the memory components 109A to 109N.

The computing system 100 includes an information request manager 113 in the host system 120 that is configured to optimize the generation of information requests to the memory subsystem 110. In some embodiments, the controller 116 in the host system 120 includes at least a portion of the information request manager 113. For example, the controller 116 can include logic circuitry implementing the information request manager 113. For example, the controller 116 uses the processing device 118 (processor) configured to execute instructions stored in a local memory to perform the operations of the information request manager 113 described herein. In some embodiments, the information request manager 113 is part of an operating system, a device driver, or an application of the host system 120.

The information request manager 113 of the host system 120 controls the generation of information requests to improve the effectiveness of the requests and their responses. An information request can request that the memory subsystem 110 allocate available buffer capacity in the memory subsystem 110.
Based on the allocated buffer capacity, the host system 120 can transmit new write commands and their data to the memory subsystem 110; and the memory subsystem 110 can store the new write commands and their data in the allocated buffer capacity for execution at times determined by the memory subsystem 110. The information request manager 113 can postpone the generation of an information request based on the current amount of allocated buffer capacity available at the host system 120 and/or a time interval following a previous communication regarding the allocation of buffer capacity. For example, if the current allocated buffer capacity available at the host system 120 for sending write commands is above a threshold, the information request manager 113 can postpone the generation of the information request. For example, if the current allocated buffer capacity available at the host system 120 for sending write commands is below the threshold, and the time elapsed since the previous allocation of buffer capacity by the memory subsystem 110 is less than a threshold period of time, the information request manager 113 can postpone the generation of the information request until the elapsed time reaches the threshold period of time. The threshold period of time can be a predetermined period of time, or a period of time computed based on the average rate at which the memory subsystem 110 has been able to allocate buffer capacity for accepting new write commands. Controlling the generation of information requests in such a way can improve the efficiency of usage of communication resources, such as the buses between the host system 120 and the memory subsystem 110. Further, improving the effectiveness of information requests can reduce power consumption by eliminating and/or combining certain communications.
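The deferral policy described above can be sketched as follows. This is a hedged illustration only: the class name, the fixed credit threshold, and deriving the threshold period from an average allocation rate are assumptions for the sketch, not the disclosed implementation.

```python
# Illustrative sketch of the information request manager's deferral
# policy. Names and the rate-based threshold period are assumptions.
import time


class InfoRequestManager:
    def __init__(self, credit_threshold: int, alloc_rate_credits_per_s: float):
        self.available_credits = 0          # allocated capacity usable now
        self.credit_threshold = credit_threshold
        self.alloc_rate = alloc_rate_credits_per_s
        self.last_allocation_time = time.monotonic()

    def threshold_period(self) -> float:
        # Time interval derived from the average rate at which the
        # memory subsystem has been returning buffer capacity.
        return self.credit_threshold / self.alloc_rate

    def should_send_info_request(self) -> bool:
        if self.available_credits > self.credit_threshold:
            return False  # enough capacity remains: postpone the request
        elapsed = time.monotonic() - self.last_allocation_time
        # Capacity is low, but still wait out the threshold period so a
        # request is only sent when a useful allocation is likely.
        return elapsed >= self.threshold_period()
```

A request is thus generated only when the host is both low on allocated capacity and enough time has passed since the last allocation for the subsystem to plausibly have capacity to return.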
Further details on the operations of the information request manager 113 are described below.

FIG. 2 illustrates an example computing system that includes an information request manager 113 in accordance with some embodiments of the present disclosure.

For non-limiting illustrative purposes in describing FIG. 2, the controller 116 of the host system 120 is sometimes referred to below as the memory controller 116, and the controller 115 of the memory subsystem 110 is sometimes referred to below as the media controller 115.

In FIG. 2, the communication channel between the host system 120 and the memory subsystem 110 includes a command bus 121, a data bus 123, a transaction bus 125, and a metadata bus 127. The communication protocol for the communication channel allows the host system 120 asynchronous access to the memory subsystem 110 for data storage and retrieval. For example, the memory subsystem 110 can be an NVDIMM; and the host system 120 can use the memory controller 116 to access the memory subsystem 110 via the command bus 121, the data bus 123, the transaction bus 125, and the metadata bus 127 in accordance with the JEDEC NVDIMM-P bus protocol.

For example, the memory controller 116 can issue a write command to store data in the memory subsystem 110. After a fixed, predetermined time window following the transmission of the write command on the command bus 121, the memory controller 116 begins transmitting the data on the data bus 123. The memory subsystem 110 is not required to complete the operations of the write command within a predetermined period of time. Examples of such write commands include XWRITE and PWRITE identified in the JEDEC NVDIMM-P bus protocol.

For example, the memory controller 116 can issue a read command to request information from the memory subsystem 110. The memory subsystem 110 is not required to generate a response within a predetermined time window following the read command.
Examples of such read commands include XREAD and SREAD identified in the JEDEC NVDIMM-P bus protocol. An XREAD can be given a predetermined read ID to indicate that it is an information request (status_read) that will return system status but will not directly access the media.

In response to the read command, the memory subsystem 110 prepares the data requested by the read command. For example, the media controller 115 can retrieve data from the media (e.g., 109A, ..., or 109N) and buffer the retrieved data in the local memory 119 or another memory so that, when such a transfer is requested, the data is available to be successfully transferred to the memory controller 116 within a predetermined time window.

When the requested data is ready for transmission, the memory subsystem 110 may provide a response signal on the transaction bus 125. When the memory controller 116 is informed that the memory subsystem 110 is ready to transmit some data, the memory controller 116 may provide a send command to request the memory subsystem 110 to start transmitting data on the data bus 123 within a predetermined time window from the time the send command is issued. When responding to a send command, the memory subsystem 110 may also send transaction status information, such as a read ID that identifies the corresponding read command, write credit information as discussed further below, metadata corresponding to the transaction, and/or error correction code (ECC). An example of such a send command is SEND identified in the JEDEC NVDIMM-P bus protocol.

The memory subsystem 110 may buffer the read commands and write commands received from the command bus 121 in the local memory 119 or another memory. The media controller 115 may execute the buffered commands in an order different from the order in which the commands are received. The memory subsystem 110 has a certain amount of capacity for buffering pending read commands and write commands and their associated data.
The memory controller 116 and the media controller 115 may communicate with each other to prevent buffer overflow in the memory subsystem 110. For example, a write credit can be used to represent a unit of buffer capacity available for buffering a write command and its associated data of a predetermined size. In some cases, a write command may have data larger than the predetermined size; such a write command requires multiple write credits for buffering the command and its data in the memory subsystem 110.

The memory controller 116 may maintain a count of the write credits it can use to transmit write commands to the memory subsystem 110 over the command bus 121. When a write command is sent over the command bus 121, the memory controller 116 deducts the write credits used by the write command. To avoid buffer overflow, the memory controller 116 should not transmit a write command when it does not have write credits sufficient for transmitting that write command to the memory subsystem 110.

The media controller 115 can maintain a count of the write credits that can be returned to the memory controller 116 for completed write commands. After a write command buffered in the memory subsystem 110 is completed, the buffer space used by the write command can be freed to accept further write commands from the memory controller 116. The write credits of the completed write command are then added to the count of write credits that can be returned to the memory controller 116.

The memory subsystem 110 can use the metadata bus 127 to specify the number of write credits it returns to the memory controller 116. For example, after sending a response signal on the transaction bus 125 to enable the memory controller 116 to issue a send command, the media controller 115 can use the metadata bus 127 to communicate the number of write credits being returned.
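The host-side credit accounting described above can be illustrated with a small sketch. The class name, method names, and the 64-byte credit unit below are illustrative assumptions, not terms defined by this disclosure:

```python
class HostWriteCredits:
    """Host-side write-credit counter (illustrative sketch).

    Each credit represents one unit of buffer capacity of a predetermined
    size in the memory subsystem; a command whose data exceeds that size
    consumes multiple credits.
    """

    def __init__(self, initial_credits):
        self.count = initial_credits

    def credits_needed(self, data_size, unit=64):
        # Ceiling division: data larger than one unit needs multiple credits.
        return -(-data_size // unit)

    def try_send_write(self, data_size):
        """Deduct credits for a write command; refuse if it would overflow."""
        needed = self.credits_needed(data_size)
        if needed > self.count:
            return False    # do not transmit: would overflow the buffer
        self.count -= needed
        return True

    def receive_returned(self, credits):
        # Credits returned by the memory subsystem after commands complete.
        self.count += credits
```

The media controller's mirror-image bookkeeping (freeing credits as buffered commands complete) would update the same logical quantity from the other side of the bus.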
The memory subsystem 110 may transmit such response signals in response to read commands (such as XREAD and SREAD identified in the JEDEC NVDIMM-P bus protocol). An example of the response signal is RSPx_n identified in the JEDEC NVDIMM-P bus protocol.

When the memory controller 116 uses a read command to request retrieval of data at a certain address, the memory controller 116 may place an address command immediately after the read command to specify the address. Similarly, when the memory controller 116 uses a write command to store data at a certain address, the memory controller 116 may place an address command immediately after the write command to specify the address. An example of such an address command is XADR identified in the JEDEC NVDIMM-P bus protocol.

The memory controller 116 of the host system 120 maintains a write credit counter at the host system 120, which represents the amount of buffer space known to be available in the memory subsystem 110 for buffering write commands transmitted from the host system 120. When the host system 120 transmits a write command to the memory subsystem 110, the memory controller 116 reduces its write credit counter by an amount corresponding to the buffer capacity occupied by the write command and its data. When the host system 120 does not have write credits sufficient to transmit a write command, the host system 120 does not transmit the command, to avoid buffer overflow at the memory subsystem 110.

The media controller 115 of the memory subsystem 110 is operable to monitor a write buffer that may be located in the local memory 119 or another memory in the memory subsystem 110. The total write credit count at the memory subsystem 110 identifies the total buffer capacity available for allocation to the host system 120 for transferring write commands from the host system 120 to the memory subsystem 110.
The total count of write credits at the memory subsystem 110 is reduced by the write credits transferred from the memory subsystem 110 to the host system 120; the transferred write credits represent the amount of buffer capacity allocated for the host system 120 to send new write commands. After a write command is executed and cleared from the buffer, the total count of write credits can be increased by the write credit metric corresponding to the amount of buffer space the write command occupied in the buffer. When a write command is buffered, the amount of buffer space it occupies in the buffer identifies the write credits used by that write command. The write credit metric may be determined based on the size of the data associated with the write command.

The information request manager 113 of the host system 120 may generate an information request for receiving write credits from the memory subsystem 110; the write credits at the host system 120 indicate the amount of buffer capacity allocated to the host system 120 for transmitting write commands. Generally, write credits are transferred from the memory subsystem 110 to the host system 120 in a response to a read command from the host system 120, such as an information request (for example, RSPx_n identified in the JEDEC NVDIMM-P bus protocol
) or a request to retrieve data from a specific address (such as SREAD or XREAD identified in the JEDEC NVDIMM-P bus protocol).

The information request manager 113 determines whether the host system 120 has write credits exceeding a threshold number that can be used to send write commands to the memory subsystem 110. If the total count of write credits currently available at the host system 120 is greater than the threshold number, the information request manager 113 defers generation of the information request for write credits.

When the total count of write credits currently available at the host system 120 is less than the threshold number, the information request manager 113 determines whether the time elapsed since write credits were previously transferred from the memory subsystem 110 is longer than a threshold time period. The previous transfer of write credits may have been in response to an information request for write credits, or in response to a read command that retrieves data from a designated address associated with the read command.

The threshold time period may be a predetermined configuration parameter. Alternatively, the information request manager 113 may calculate the threshold time period as the average time interval at which the memory subsystem 110 transfers at least a threshold number of write credits to the host system 120 (for example, after the total count of write credits available at the host system 120 falls below a predetermined level). The average time interval represents the average speed at which the memory subsystem 110 can execute buffered write commands to release write credits for new write commands. For example, based on the estimated speed of executing write commands at the memory subsystem 110, the information request manager 113 estimates or predicts the total write credits available at the memory subsystem 110 to be transferred to the host system 120.
When the estimated or predicted total write credits available at the memory subsystem 110 are higher than the threshold, the information request manager 113 may cause the memory controller 116 to transmit an information request, prompting the memory subsystem 110 to transfer write credits.

Optionally, the information request manager 113 calculates a moving average of the time intervals between adjacent transfers of write credits from the memory subsystem 110 to the host system 120. The threshold time period may be dynamically adjusted based on this moving average.

Optionally, the information request manager 113 calculates the average speed at which write credits are generated in the memory subsystem 110, based on the time interval between two adjacent write credit transfers and the write credit metric transmitted at the end of that interval. The average speed can be used to predict the time at which the memory subsystem 110 will have write credits exceeding the threshold number available for transfer to the host system 120; and the information request manager 113 can delay the generation of the information request according to the predicted time interval. In some cases, the information request manager 113 may calculate a moving average of the speed at which write credits are generated over the several most recent intervals between write credit transmissions. The moving average can be used to calculate the threshold time period.

Optionally, when the memory subsystem 110 has a pending read command from the host system 120, the information request manager 113 may also postpone the generation of the information request.
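One way to realize the adaptive threshold time period described above is a moving average over the intervals between recent credit transfers. In the sketch below, the class name, the window size of 8, and the fallback default are illustrative assumptions:

```python
from collections import deque

class TransferIntervalEstimator:
    """Estimate the threshold time period from recent write-credit transfers.

    Keeps a moving average of the intervals between adjacent credit
    transfers; deque(maxlen=...) discards the oldest interval automatically.
    """

    def __init__(self, window=8):
        self.intervals = deque(maxlen=window)
        self.last_transfer = None

    def record_transfer(self, timestamp):
        # Append the interval since the previous transfer, if any.
        if self.last_transfer is not None:
            self.intervals.append(timestamp - self.last_transfer)
        self.last_transfer = timestamp

    def threshold_period(self, default=0.001):
        # Fall back to a predetermined value until enough history exists.
        if not self.intervals:
            return default
        return sum(self.intervals) / len(self.intervals)
```

The same structure could instead track a credits-per-second rate (credits delivered divided by the interval) to predict when the subsystem will have reclaimed a threshold number of credits.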
The memory subsystem 110 may use the response to the pending read command to transmit write credits to the host system 120. Therefore, the information request manager 113 may combine and/or reduce the communication traffic related to requests for allocation of buffer capacity for buffering write commands, and reduce the power consumption associated with the reduced traffic.

FIG. 3 is a flowchart of an example method of optimizing information requests from the host system 120 to the memory subsystem 110 according to some embodiments of the present disclosure. The method of FIG. 3 may be executed by processing logic, which may include hardware (for example, a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (for example, instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method of FIG. 3 is at least partially executed by the information request manager 113 of FIG. 1 or 2. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, it should be understood that the illustrated embodiments are only examples, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are also possible.

At block 301, the information request manager 113 stores information that identifies the amount of available capacity in the buffer of the memory subsystem for write commands. For example, the buffer may be implemented in the local memory 119 or another memory.
The amount of available capacity of the buffer has been allocated to the host system 120 for transmitting write commands and their data.

At block 303, the memory controller 116 transmits one or more write commands to the memory subsystem 110 to store data in the memory components 109A to 109N of the memory subsystem 110. The memory subsystem 110 queues the one or more write commands in the buffer for execution at times determined by the media controller 115 of the memory subsystem 110.

At block 305, the information request manager 113 subtracts the amount of buffer capacity used by the one or more write commands from the amount of available capacity, to calculate the amount of capacity currently available in the buffer that is allocated to the host system 120 for transmitting write commands and their data.

At block 307, the information request manager 113 determines whether to generate an information request for the memory subsystem 110 based at least in part on the amount of currently available capacity. The information request causes the memory subsystem 110 to allocate buffer capacity for the host system 120 to transmit new write commands.

For example, the information request manager 113 may postpone the generation of the information request until the amount of currently available buffer capacity is lower than a threshold. The amount of buffer capacity currently available/allocated can be used at the host system 120 to transmit write commands without causing the buffer in the memory subsystem 110 to overflow.

Optionally, the information request manager 113 may postpone the generation of the information request until the time elapsed since the allocated buffer capacity for the host system 120 to transmit write commands was previously communicated is longer than a certain time interval.
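Blocks 301 through 307 can be condensed into a single sketch. The function name, the credit-sized accounting, and the 64-byte unit below are hypothetical simplifications, not part of the disclosure:

```python
def fig3_flow(available, write_cmd_sizes, threshold, unit=64):
    """Sketch of blocks 301-307 of FIG. 3 (illustrative names).

    `available` is the buffer capacity (in credits) known to be allocated
    to the host. Returns the updated capacity and whether an information
    request should be generated.
    """
    # Blocks 303/305: transmit write commands and subtract the capacity
    # they use (ceiling division converts data size to credits).
    for size in write_cmd_sizes:
        available -= -(-size // unit)
    # Block 307: generate an information request only when the currently
    # available capacity falls below the threshold.
    generate_request = available < threshold
    return available, generate_request
```

The elapsed-time deferral mentioned in the last paragraph above would be a further condition ANDed with `generate_request`.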
By delaying according to the time interval, the information request is likely to be transmitted at a time when the memory subsystem 110 has completed one or more write commands and released the buffer capacity previously used by the completed write commands, so that the released capacity can be allocated and identified to the host system 120 for transmitting further write commands.

Optionally, the information request manager 113 may postpone the generation of the information request until the information request manager 113 predicts that the memory subsystem 110 has released more than a threshold amount of write buffer capacity by completing previously received write commands, so that the information request is sent at a time when the memory subsystem 110 can respond by identifying a greater-than-threshold amount of write buffer capacity allocated to the host system 120 for transmitting new write commands and their data.

In some cases, the information request manager 113 may optionally further postpone the generation of the information request until the memory controller 116 of the host system 120 has received responses to all read commands previously transmitted to the memory subsystem 110. In conjunction with the response to any pending read command, the memory subsystem may allocate write buffer capacity and transmit an indication of the amount of allocated buffer capacity.

FIG. 4 is a flowchart of a detailed example method of optimizing information requests according to some embodiments of the present disclosure. The method of FIG. 4 may be executed by processing logic, which may include hardware (for example, a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (for example, instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method of FIG.
4 is at least partially performed by the information request manager 113 of FIG. 1 or 2. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, it should be understood that the illustrated embodiments are only examples, and the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are also possible.

At block 321, the memory controller 116 of the host system 120 receives a communication transferring a certain number of write credits from the memory subsystem 110 to the host system 120. For example, the write credits can be transferred using the communication protocol identified in the JEDEC NVDIMM-P bus protocol. The metadata bus 127 may be used to communicate the number of write credits in response to the memory controller 116 placing a send command on the command bus 121. The memory controller 116 of the host system 120 may issue the send command in response to the response signal provided by the memory subsystem 110 on the transaction bus 125. An example of the response signal is RSPx_n identified in the JEDEC NVDIMM-P bus protocol. The response signal may be in response to a read command, such as XREAD or SREAD identified in the JEDEC NVDIMM-P bus protocol.

At block 323, the information request manager 113 adds the number of write credits specified in the communication to the total write credits available at the host system 120. The total represents the total allocated/available buffer space for buffering write commands from the host system 120.
When the total write credits are sufficient to allow the host system 120 to transmit one or more write commands, the method may proceed to block 325.

At block 325, the memory controller 116 transmits one or more write commands from the host system 120 to the memory subsystem 110. The memory subsystem 110 buffers the one or more write commands for execution at times that are not controlled by the host system 120. Examples of write commands include XWRITE and PWRITE identified in the JEDEC NVDIMM-P bus protocol.

At block 327, the information request manager 113 deducts the write credits used by the one or more write commands from the total. By sending the one or more write commands, the host system 120 returns the write credits used by those commands to the memory subsystem 110.

At block 329, the information request manager 113 determines whether the total write credits are now below the threshold. If the total is not below the threshold, the memory controller 116 may skip blocks 331 to 335 and perform other operations, such as transmitting (at block 325) other write commands. Note that transmitting one or more write commands at block 325 does not require the total write credits to be higher than the threshold; a write command may be transmitted whenever the total write credits are not less than the write credits used by that command. Optionally, the memory controller 116 may also transmit a read command to retrieve data from the memory components 109A to 109N, such as XREAD or SREAD in the JEDEC NVDIMM-P bus protocol.
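The credit bookkeeping of blocks 321 through 329 reduces to two small update steps. The function names and parameters below are illustrative, not terms from the disclosure:

```python
def handle_credit_communication(total, received_credits, threshold):
    """Sketch of blocks 321/323/329 of FIG. 4.

    Block 323: add the credits received in the communication to the
    running total available at the host. Block 329: report whether the
    total is now below the threshold, in which case the host delays
    before generating an information request.
    """
    total += received_credits            # block 323
    below_threshold = total < threshold  # block 329
    return total, below_threshold

def send_write_commands(total, credit_costs):
    """Sketch of blocks 325/327: transmit write commands whose credit
    costs are covered by the running total, deducting as they are sent."""
    sent = []
    for cost in credit_costs:
        if cost <= total:    # enough credits for this command
            total -= cost    # block 327: deduct on transmit
            sent.append(cost)
    return total, sent
```

A command whose cost exceeds the remaining total is simply held back here; a real controller would retry it after the next credit communication.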
The memory subsystem 110 may provide write credits in combination with the response to the read command as needed, enabling the memory controller 116 to receive (at block 321) the write credit communication.

At block 331, if the total write credits at the host system 120 are below the threshold, the information request manager 113 may delay for a period of time, until the time elapsed since the most recent write credit communication is longer than the threshold time period. The threshold time period allows the memory subsystem 110 to process the buffered write commands and reclaim the write credits used by the completed write commands, so that the reclaimed write credits can be transferred to the host system 120.

The threshold time period may be a predetermined time interval, or a time interval calculated by the information request manager 113 based on statistical data related to the speed at which the memory subsystem 110 can reclaim write credits. For example, the time interval may be calculated as a dynamic average of the time periods between adjacent write credit communications (for example, communications that transfer write credits above a predetermined level). For example, based on the time period between two adjacent write credit communications and the write credits provided in the latter of the two communications, the information request manager 113 may calculate an estimated speed at which the memory subsystem 110 completes buffered write commands and reclaims write credits. For example, the information request manager 113 may calculate a dynamic average of the estimated speed over multiple time intervals between adjacent write credit communications. The information request manager 113 may use the speed to determine the threshold time period. After the threshold time period following the previous write credit communication, the memory subsystem 110 is predicted to have completed a set of buffered write commands.
The completed write commands allow the memory subsystem 110 to reclaim at least a predetermined level of write credits. The reclaimed write credits may be transmitted to the host system 120 in the response to the information request sent after the threshold time period.

FIG. 5 illustrates an example machine of a computer system 600 within which a set of instructions can be executed for causing the machine to perform any one or more of the methods discussed herein. In some embodiments, the computer system 600 may correspond to a host system (for example, the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (for example, the memory subsystem 110 of FIG. 1), or can be used to perform the operations of the information request manager 113 (for example, to execute instructions to perform operations corresponding to the information request manager 113 described with reference to FIGS. 1, 2, 3, and 4). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate as a peer machine in a peer-to-peer (or distributed) network environment, or in the capacity of a server or a client machine in a cloud computing infrastructure or environment.

The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular phone, a network device, a server, a network router, a switch or a bridge, or any machine capable of executing (sequentially or otherwise) a set of instructions that specify actions to be taken by that machine.
In addition, although a single machine is described, the term "machine" should also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein.

The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static random access memory (SRAM), and a data storage system 618, which communicate with each other via a bus 630 (which can include multiple buses).

The processing device 602 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit (CPU), or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 602 may also be one or more special-purpose processing devices, such as an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a graphics processing unit (GPU), a network processor, or the like. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. The computer system 600 may also include a network interface device 608 to communicate over a network 620.

The data storage system 618 may include a machine-readable storage medium 624 (also referred to as a computer-readable medium) on which are stored one or more sets of instructions 626 or software embodying any one or more of the methods or functions described herein.
The instructions 626 may also reside, completely or at least partially, within the main memory 604 and/or the processing device 602 during execution thereof by the computer system 600, the main memory 604 and the processing device 602 also constituting machine-readable storage media. The machine-readable storage medium 624, the data storage system 618, and/or the main memory 604 may correspond to the memory subsystem 110 of FIG. 1.

In one embodiment, the instructions 626 include instructions to implement functionality corresponding to the information request manager 113 (e.g., the information request manager 113 described with reference to FIGS. 1, 2, 3, and 4). While the machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium capable of storing or encoding a set of instructions for execution by a machine and that cause the machine to perform any one or more of the methods of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities.
Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs, and magneto-optical disks), read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method.
The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the present disclosure as described herein.

The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium such as read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made to the present disclosure without departing from the broader spirit and scope of embodiments of the present disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
An integrated circuit and method having an extended drain MOS transistor with a buried drift region, a drain end diffused link between the buried drift region and the drain contact, and a concurrently formed channel end diffused link between the buried drift region and the channel, where the channel end diffused link is formed by implanting through segmented areas to dilute the doping to less than two-thirds the doping in the drain end diffused link.
1. An integrated circuit comprising:
a planar extended drain metal oxide semiconductor (MOS) transistor located in a substrate of a first conductivity type, the MOS transistor comprising:
a buried drift region below the top surface of the substrate, the buried drift region having a second conductivity type opposite the first conductivity type;
a drain end diffusion chain having the second conductivity type, the drain end diffusion chain being electrically connected to the buried drift region at a drain end of the buried drift region and electrically connected to a drain contact; and
a channel end diffusion chain having the second conductivity type, the channel end diffusion chain being electrically connected to the buried drift region at a channel end of the buried drift region, the channel end diffusion chain having an average doping density that is less than two-thirds of the average doping density in the drain end diffusion chain.
2. The integrated circuit of claim 1, wherein said channel end diffusion chain is formed by ion implantation through a plurality of segmented channel end opening regions arranged in a linear array.
3. The integrated circuit of claim 1, wherein said channel end diffusion chain forms a circle around said drain end diffusion chain.
4. The integrated circuit of claim 1, wherein said channel end diffusion chain has a racetrack shape surrounding said drain end diffusion chain.
5. The integrated circuit of claim 1, wherein said channel end diffusion chain is formed by ion implantation through a plurality of segmented channel end opening regions arranged in a staggered configuration.
6. The integrated circuit of claim 1, wherein:
the conductivity type of the substrate is p-type;
the conductivity type of the buried drift region is n-type;
the conductivity type of the drain end diffusion chain is n-type; and
the conductivity type of the channel end diffusion chain is
n-type.7.The integrated circuit of claim 1 wherein a depth of said top surface of said buried drift region in said substrate is between 2 microns and 4 microns.8.The integrated circuit of claim 1 wherein said average doping density in said channel end diffusion chain is between 25% and 33% of said average doping density in said drain end diffusion chain.9.The integrated circuit of claim 1 wherein:The average doping density in the drain-end diffusion chain is between 2.5×10 16 cm −3 and 3.5×10 16 cm −3 ;The average doping density in the channel end diffusion chain is between 5 x 10 15 cm -3 and 1 x 10 16 cm -3 .10.The integrated circuit of claim 1 wherein said channel end diffusion chain is separated by a distance having a lateral dimension between 1.5 microns and 3.0 microns and between 4 microns and 7 microns Formed by ion implantation of a plurality of segmented channel end opening regions.11.A method of forming an integrated circuit, comprising:A planar extended drain MOS transistor is formed in the first conductivity type substrate by:Forming a buried drift region under the top surface of the substrate in the substrate such that the buried drift region has a second conductivity type opposite the first conductivity type;Forming a chain ion implantation mask over the substrate such that the chain ion implantation mask has a drain end opening region above a drain end of the buried drift region and has a trench in the buried drift region a plurality of segmented channel end opening regions above the track end;Simultaneously implanting dopant ions into the substrate through the drain end opening region and the segmented channel end opening region to form a drain end chain implant region under the drain end opening region and Forming a plurality of channel end chain implant regions under the plurality of segmented channel end opening regions;Performing an annealing operation, the annealing operation:Dissipating the dopant in the drain end chain implant region to form 
a drain end diffusion chain that extends to and is electrically coupled to the buried drift region;Dispersing the dopant in the channel end chain implant region to form a channel end diffusion chain that extends to and is electrically connected to the buried drift region such that the channel end diffusion chain The average doping density is less than two-thirds of the average doping density in the drain-end diffusion chain;Forming a gate dielectric layer on the substrate adjacent to the channel end diffusion chain opposite to the buried drift region;Forming a gate on the gate dielectric layer;A drain contact is formed on the substrate to electrically connect to the drain terminal diffusion chain.12.The method of claim 11 wherein said segmented channel end opening regions are configured in a linear array.13.The method of claim 11 wherein said segmented channel end open regions are arranged in a circular array and said drain end diffusion chains are centrally located in said circular array.14.The method of claim 11 wherein said segmented channel end opening regions are configured as a racetrack shaped array and said drain end diffusion chains are centered in said racetrack shaped array.15.The method of claim 11 wherein said segmented channel end opening regions are arranged in a staggered configuration.16.The method of claim 11 wherein:The conductivity type of the substrate is p-type;The conductivity type of the buried drift region is n-type;The conductivity type of the drain end diffusion chain is n-type;The conductivity type of the channel end diffusion chain is n-type.17.The method of claim 11 wherein the top surface of the buried drift region has a depth in the substrate of between 2 microns and 4 microns.18.The method of claim 11 wherein said average doping density in said channel end diffusion chain is between 25% and 33% of said average doping density in said drain end diffusion chain.19.The method of claim 11 wherein:The average doping density in the 
drain-end diffusion chain is between 2.5×10 16 cm −3 and 3.5×10 16 cm −3 ;The average doping density in the channel end diffusion chain is between 5 x 10 15 cm -3 and 1 x 10 16 cm -3 .20.The method of claim 11 wherein said segmented channel end opening region has a lateral dimension between 1.5 and 3.0 microns and is spaced apart by a distance between 4 and 7 microns.
High voltage laterally extended drain MOS transistor with improved drift layer contacts
TECHNICAL FIELD
The present invention relates to the field of integrated circuits. More particularly, the present invention relates to MOS transistors in integrated circuits.
BACKGROUND
An integrated circuit may include a planar extended drain metal oxide semiconductor (MOS) transistor having a buried drift region, for example, to support an operating voltage that is higher than the dielectric strength of the gate dielectric layer in the MOS transistor. It may be desirable to form a low resistance drain end connection between the buried drift region and the drain contact, while forming a lightly doped channel end chain between the buried drift region and the channel of the MOS transistor. It may further be desirable to minimize the number of lithography and ion implantation operations in the fabrication sequence that forms the integrated circuit.
SUMMARY
The following summary is presented to provide a basic understanding of one or more aspects of the invention. The summary is not an extensive overview of the invention, and is not intended to identify key or critical elements of the invention. Rather, it presents some concepts of the invention in a simplified form as a prelude to the more detailed description that follows.
An integrated circuit can include a planar extended drain MOS transistor having a buried drift region between a drain contact and a channel of the MOS transistor. A drain end chain between the buried drift region and the drain contact and a channel end chain between the buried drift region and the channel are formed simultaneously. The drain end chain and the channel end chain are formed by ion implantation of a dopant followed by an annealing operation; the annealing operation diffuses the implanted dopant so that each chain electrically connects to the buried drift region.
The average doping density in the channel end chain is less than two-thirds of the average doping density in the drain end chain. The channel end chain is formed by segmenting the ion implanted region, so that after the annealing operation the dopant diffused from the implanted segments is distributed through the channel end chain at a greater dilution than in the drain end chain.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A through 1D are perspective views of an integrated circuit formed according to a first example, depicted at successive stages of fabrication.
FIG. 2 is a top plan view of an integrated circuit including a planar extended drain MOS transistor formed in accordance with a second example, depicting a state after formation of a chain ion implantation mask.
FIG. 3 is a top plan view of an integrated circuit including a planar extended drain MOS transistor formed in accordance with a third example, depicting a state after formation of a chain ion implantation mask.
FIG. 4 is a top plan view of an integrated circuit including a planar extended drain MOS transistor formed in accordance with a fourth example, depicting a state after formation of a chain ion implantation mask.
FIG. 5 is a top plan view of an integrated circuit including a planar extended drain MOS transistor formed in accordance with a fifth example, depicting a state after formation of a chain ion implantation mask.
FIG. 6 is a top plan view of an integrated circuit including a planar extended drain MOS transistor formed in accordance with a sixth example, depicting a state after formation of a chain ion implantation mask.
DETAILED DESCRIPTION
A copending patent application is incorporated herein by reference in its entirety.
The invention is described with reference to the drawings, wherein like reference numerals are used to designate similar or equivalent elements. The figures are not drawn to scale and are provided only to illustrate the invention.
Several aspects of the invention are described below with reference to example applications for illustration. It will be appreciated that numerous specific details, relationships, and methods are set forth to provide an understanding of the invention. However, those skilled in the art will readily appreciate that the invention can be practiced without one or more of the specific details. In other instances, well known structures or operations are not shown in detail to avoid obscuring the invention. The present invention is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Moreover, not all illustrated acts or events may be required to implement a method in accordance with the present invention.
An integrated circuit can include a planar extended drain MOS transistor having a buried drift region between a drain contact and a channel of the MOS transistor. A drain end chain between the buried drift region and the drain contact, and a channel end chain between the buried drift region and the channel, are formed by ion implantation and annealing; the annealing diffuses the implanted dopant in each chain so that it electrically connects to the buried drift region. The ion implanted region in the channel end chain is segmented so that the dopant is laterally diluted during the anneal, reducing the average doping density compared to an unsegmented implanted region. The dopant profiles from adjacent implanted segments in the channel end chain may overlap after the annealing operation. The average doping density in the channel end chain is less than two-thirds of the average doping density in the drain end chain. The segmentation of the ion implanted region of the channel end chain can be adjusted to provide a desired breakdown voltage and series resistance of the MOS transistor.
A second planar extended drain MOS transistor having a buried drift region can be formed in the same integrated circuit, with its channel end chain formed simultaneously with the channel end chain of the first MOS transistor but through a differently segmented ion implanted region, providing MOS transistors with different breakdown voltages and series resistances without additional process operations.
FIGS. 1A through 1D are perspective views of an integrated circuit formed according to a first example, depicted at successive stages of fabrication. Referring to FIG. 1A, an integrated circuit 1000 is formed in and on a p-type substrate 1002. The p-type substrate 1002 may be a single crystal silicon wafer, a silicon-on-insulator (SOI) wafer, a hybrid orientation technology (HOT) wafer having regions of different crystal orientations, or another material suitable for fabricating integrated circuit 1000. An n-type buried drift region 1004 of a planar extended drain n-channel MOS transistor is formed in the substrate 1002. The buried drift region 1004 can be formed by implanting n-type dopant ions (for example, phosphorus) into an existing top surface of the substrate 1002, followed by growth of p-type epitaxial semiconductor material. In one version of the present example, the top surface of the buried drift region 1004 may have a depth in the substrate 1002 of between 2 microns and 4 microns. The region of substrate 1002 over buried drift region 1004 can provide a RESURF (reduced surface field) region during operation of integrated circuit 1000.
A chain ion implantation mask 1006 is formed over the existing top surface of the substrate 1002. The chain ion implantation mask 1006 can comprise photoresist and/or a dielectric layer (e.g., silicon dioxide). The chain ion implantation mask 1006 has a drain end opening region 1008 over the drain end of the buried drift region 1004.
The chain ion implantation mask 1006 also has a plurality of segmented channel end opening regions 1010 over the channel end of the buried drift region 1004. In one version of the present example, each segmented channel end opening region 1010 can have a lateral dimension between 1.5 microns and 3.0 microns, with the openings spaced apart by a distance between 4 microns and 7 microns.
Referring to FIG. 1B, a chain ion implantation operation is performed on the integrated circuit 1000, which simultaneously implants n-type dopant ions, such as phosphorus and possibly arsenic, into the substrate 1002 through the drain end opening region 1008 and the segmented channel end opening regions 1010 of the chain ion implantation mask 1006. In one version of this example, the chain ion implantation operation can have a dose between 8×10^12 cm^-2 and 1.5×10^13 cm^-2. The chain ion implantation operation forms a drain end chain implant region 1012 below the drain end opening region 1008 and channel end chain implant regions 1014 below the segmented channel end opening regions 1010. In one version of this example, the channel end chain implant regions 1014 do not touch or overlap each other.
Referring to FIG. 1C, an anneal operation is performed on integrated circuit 1000 that diffuses the dopant in the drain end chain implant region 1012 of FIG. 1B to form a drain end diffusion chain 1016, which extends to and is electrically connected to the buried drift region 1004. The annealing operation also diffuses the dopant in the channel end chain implant regions 1014 of FIG. 1B to form a channel end diffusion chain 1018 that extends to and is electrically connected to the buried drift region 1004. In one version of the present example, the diffusion regions from adjacent channel end chain implant regions 1014 overlap to form a contiguous channel end diffusion chain 1018, as depicted in FIG. 1C.
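The dilution produced by segmenting the channel end implant can be estimated from the opening geometry given above. The sketch below uses a first-order model that is our assumption, not the document's: after annealing, the implanted dose spreads laterally over one full pitch, so the effective dose scales with the fill factor of the openings.

```python
# Estimate how much the segmented mask openings dilute the channel end
# implant. Assumption (a simple model, not stated in the text): after
# annealing, the implanted dose spreads laterally over one full pitch,
# so the effective dose scales with the fill factor of the openings.

def fill_factor(width_um: float, spacing_um: float) -> float:
    """Fraction of one pitch (opening width + spacing) that is open."""
    return width_um / (width_um + spacing_um)

# Ranges quoted above: openings 1.5-3.0 microns wide, spaced 4-7 microns apart.
lo = fill_factor(1.5, 7.0)   # narrowest opening with widest spacing
hi = fill_factor(3.0, 4.0)   # widest opening with narrowest spacing
print(f"fill factor range: {lo:.2f} to {hi:.2f}")  # → 0.18 to 0.43

# Even the densest layout stays below the two-thirds doping ratio that
# the channel end chain must satisfy relative to the drain end chain.
assert hi < 2 / 3
```

Under this simple model the geometry brackets the 25% to 33% doping ratio that the document quotes for one version of the example.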
In an alternate version, there may be gaps between the diffusion regions from adjacent channel end chain implant regions 1014. The average doping density in the channel end diffusion chain 1018 is less than two-thirds of the average doping density in the drain end diffusion chain 1016. In one version of this example, the average doping density in the channel end diffusion chain 1018 can be between 25% and 33% of the average doping density in the drain end diffusion chain 1016. In one example, the average doping density in the drain end diffusion chain 1016 can be between 2.5×10^16 cm^-3 and 3.5×10^16 cm^-3, and the average doping density in the channel end diffusion chain 1018 can be between 5×10^15 cm^-3 and 1×10^16 cm^-3.
Referring to FIG. 1D, a gate dielectric layer 1020 of the MOS transistor is formed on the substrate 1002 adjacent to the channel end diffusion chain 1018 and the buried drift region 1004. A gate 1022 of the MOS transistor is formed on the gate dielectric layer 1020. An optional heavily doped drain diffusion region 1024 can be formed at the top surface of substrate 1002 in the drain end diffusion chain 1016. A drain contact 1026 is formed over the substrate 1002 to electrically connect to the drain end diffusion chain 1016 through the drain diffusion region 1024 (if formed). During operation of integrated circuit 1000, the channel end diffusion chain 1018 provides an electrical connection from the buried drift region 1004 to a channel below the gate dielectric layer 1020. The lateral dimension and spacing of the segmented channel end opening regions 1010 of FIG. 1B can be adjusted to provide a desired breakdown voltage and series resistance of the MOS transistor.
It will be appreciated that a p-channel version of the MOS transistor described with reference to FIGS. 1A through 1D can be formed by appropriately reversing the doping polarities.
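The example densities quoted above can be checked against the stated limits with simple arithmetic. In the sketch below, the final consistency check uses a diffusion depth of roughly 3 microns, which is an assumed, illustrative value rather than one taken from the text:

```python
# Check the example doping densities quoted above against the stated limits.
drain_chain = (2.5e16, 3.5e16)    # cm^-3, drain end diffusion chain 1016
channel_chain = (5e15, 1e16)      # cm^-3, channel end diffusion chain 1018

# Extreme ratios across the quoted ranges:
max_ratio = channel_chain[1] / drain_chain[0]   # densest channel / lightest drain
min_ratio = channel_chain[0] / drain_chain[1]   # lightest channel / densest drain
print(f"ratio range: {min_ratio:.2f} to {max_ratio:.2f}")  # → 0.14 to 0.40
assert max_ratio < 2 / 3   # always below the two-thirds limit

# Rough consistency with the implant dose (8e12 to 1.5e13 cm^-2): a dose
# of 1e13 cm^-2 diffused through an assumed ~3 microns (3e-4 cm) of
# silicon gives an average density near 3.3e16 cm^-3, in line with the
# quoted drain end chain values. The 3 micron depth is illustrative only.
assert 2.5e16 < 1e13 / 3e-4 < 3.5e16
```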
It will be appreciated that a second planar extended drain MOS transistor having a buried drift region can be formed in the integrated circuit 1000 such that the channel end chain of the second MOS transistor is formed simultaneously with the channel end chain 1018 of the first MOS transistor, with a different configuration of segmented channel end opening regions, to provide breakdown voltage and resistance values different from those of the first MOS transistor without additional process operations.
FIG. 2 is a top plan view of an integrated circuit including a planar extended drain MOS transistor formed in accordance with a second example, depicting a state after formation of a chain ion implantation mask. A chain ion implantation mask 2002 is formed over the substrate 2004 on which the integrated circuit 2000 is formed. The chain ion implantation mask 2002 has a linear drain end opening region 2006 centered between two linear arrays of segmented channel end opening regions 2008. After ion implantation and annealing using the chain ion implantation mask 2002 as described above with reference to FIGS. 1B and 1C, a channel end diffusion chain is formed in the substrate under the segmented channel end opening regions 2008, and a drain end diffusion chain is formed in the substrate under the linear drain end opening region 2006.
FIG. 3 is a top plan view of an integrated circuit including a planar extended drain MOS transistor formed in accordance with a third example, depicting a state after formation of a chain ion implantation mask. A chain ion implantation mask 3002 is formed over the substrate 3004 on which the integrated circuit 3000 is formed. The chain ion implantation mask 3002 has a circular drain end opening region 3006 centered in a circular array of segmented channel end opening regions 3008. After ion implantation and annealing using the chain ion implantation mask 3002 as described above with reference to FIGS.
1B and 1C, a circular channel end diffusion chain is formed in the substrate under the circular array of segmented channel end opening regions 3008; the circular channel end diffusion chain surrounds a drain end diffusion chain formed in the substrate below the circular drain end opening region 3006.
FIG. 4 is a top plan view of an integrated circuit including a planar extended drain MOS transistor formed in accordance with a fourth example, depicting a state after formation of a chain ion implantation mask. A chain ion implantation mask 4002 is formed over the substrate 4004 on which the integrated circuit 4000 is formed. The chain ion implantation mask 4002 has a linear drain end opening region 4006 with rounded ends centered in a racetrack shaped array of segmented channel end opening regions 4008. The channel end opening regions 4008 are configured to provide a desired uniformity of the electric field in the channel end chain that is subsequently formed below the channel end opening regions 4008. Forming a MOS transistor as depicted in FIG. 4 can consume less area of integrated circuit 4000 than other configurations having comparable current capacity and breakdown voltage. After ion implantation and annealing using the chain ion implantation mask 4002 as described above with reference to FIGS. 1B and 1C, a racetrack shaped channel end diffusion chain is formed in the substrate under the racetrack shaped array of segmented channel end opening regions 4008; the racetrack shaped channel end diffusion chain surrounds a drain end diffusion chain formed in the substrate below the linear drain end opening region 4006.
FIG. 5 is a top plan view of an integrated circuit including a planar extended drain MOS transistor formed in accordance with a fifth example, depicting a state after formation of a chain ion implantation mask. A chain ion implantation mask 5002 is formed over the substrate 5004 on which the integrated circuit 5000 is formed.
The chain ion implantation mask 5002 has a drain end opening region 5006 and segmented channel end opening regions 5008. The segmented channel end opening regions 5008 are disposed in a staggered configuration, for example, to obtain a desired doping density and total width of the channel end chain subsequently formed under the channel end opening regions 5008. It will be appreciated that a staggered configuration of segmented channel end opening regions can be formed in a MOS transistor having a non-linear channel end chain (e.g., a circular or racetrack shaped channel end chain).
FIG. 6 is a top plan view of an integrated circuit including a planar extended drain MOS transistor formed in accordance with a sixth example, depicting a state after formation of a chain ion implantation mask. A chain ion implantation mask 6002 is formed over the substrate 6004 on which the integrated circuit 6000 is formed. The chain ion implantation mask 6002 has a drain end opening region 6006 and segmented channel end opening regions 6008. The segmented channel end opening regions 6008 include at least two differently sized opening regions, for example, to obtain a desired doping profile adjacent to the channel of the MOS transistor and a desired resistance of the channel end chain subsequently formed under the channel end opening regions 6008. It will be appreciated that a configuration of segmented channel end opening regions having openings of different sizes can be formed in a MOS transistor having a non-linear channel end chain (e.g., a circular or racetrack shaped channel end chain).
While various examples of the invention have been described above, numerous changes may be made to the disclosed embodiments in light of the present disclosure without departing from the spirit and scope of the invention. Therefore, the breadth and scope of the present invention should not be limited to any of the examples described above.
Instead, the scope of the invention is to be defined by the appended claims and their equivalents.
Integrated circuit structures having a backside gate partial cut or a backside trench contact partial cut and/or split epitaxial structure are described. For example, an integrated circuit structure includes a first sub-fin structure over a first stack of nanowires. A second sub-fin structure is over a second stack of nanowires. A first portion of a gate electrode is around the first stack of nanowires, a second portion of the gate electrode is around the second stack of nanowires, and a third portion of the gate electrode bridges the first and second portions of the gate electrode. A dielectric structure (230) is between the first portion of the gate electrode and the second portion of the gate electrode, the dielectric structure over the third portion of the gate electrode. The dielectric structure is continuous along the first and second portions of the gate electrode and the first and second sub-fin structures.
1. An integrated circuit structure, comprising:
a first sub-fin structure over a first stack of nanowires;
a second sub-fin structure over a second stack of nanowires;
a gate electrode, wherein a first portion of the gate electrode is around the first stack of nanowires, a second portion of the gate electrode is around the second stack of nanowires, and a third portion of the gate electrode bridges the first and second portions of the gate electrode; and
a dielectric structure between the first portion of the gate electrode and the second portion of the gate electrode, the dielectric structure over the third portion of the gate electrode, wherein the dielectric structure is continuous along the first and second portions of the gate electrode and the first and second sub-fin structures.
2. The integrated circuit structure of claim 1, wherein the first, second and third portions of the gate electrode are in direct contact with the dielectric structure.
3. The integrated circuit structure of claim 1 or 2, wherein a gate dielectric layer separates the first portion of the gate electrode from the first stack of nanowires, and separates the second portion of the gate electrode from the second stack of nanowires.
4. The integrated circuit structure of claim 1, 2 or 3, wherein the first and second sub-fin structures are semiconductor sub-fin structures.
5. The integrated circuit structure of claim 1, 2 or 3, wherein the first and second sub-fin structures are insulator sub-fin structures.
6. A method of fabricating an integrated circuit structure, the method comprising:
forming a first sub-fin structure over a first stack of nanowires;
forming a second sub-fin structure over a second stack of nanowires;
forming a gate electrode, wherein a first portion of the gate electrode is around the first stack of nanowires, a second portion of the gate electrode is around the second stack of nanowires, and a third portion of the gate electrode bridges the first and second portions of the gate electrode; and
forming a dielectric structure between the first portion of the gate electrode and the second portion of the gate electrode, the dielectric structure over the third portion of the gate electrode, wherein the dielectric structure is continuous along the first and second portions of the gate electrode and the first and second sub-fin structures.
7. The method of claim 6, wherein the first, second and third portions of the gate electrode are in direct contact with the dielectric structure.
8. The method of claim 6 or 7, wherein a gate dielectric layer separates the first portion of the gate electrode from the first stack of nanowires, and separates the second portion of the gate electrode from the second stack of nanowires.
9. The method of claim 6, 7 or 8, wherein the first and second sub-fin structures are semiconductor sub-fin structures.
10. The method of claim 6, 7 or 8, wherein the first and second sub-fin structures are insulator sub-fin structures.
TECHNICAL FIELD
Embodiments of the disclosure are in the field of integrated circuit structures and processing and, in particular, integrated circuit structures having a backside gate partial cut or a backside trench contact partial cut and/or split epitaxial structure, and methods of fabricating integrated circuit structures having a backside gate partial cut or a backside trench contact partial cut and/or split epitaxial structure.
BACKGROUND
For the past several decades, the scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows for the incorporation of an increased number of memory or logic devices on a chip, lending to the fabrication of products with increased capacity. The drive for ever-more capacity, however, is not without issue. The necessity to optimize the performance of each device becomes increasingly significant.
In the manufacture of integrated circuit devices, multi-gate transistors, such as tri-gate transistors, have become more prevalent as device dimensions continue to scale down. In conventional processes, tri-gate transistors are generally fabricated on either bulk silicon substrates or silicon-on-insulator substrates. In some instances, bulk silicon substrates are preferred due to their lower cost and because they enable a less complicated tri-gate fabrication process. In another aspect, maintaining mobility improvement and short channel control as microelectronic device dimensions scale below the 10 nanometer (nm) node provides a challenge in device fabrication.
Scaling multi-gate and nanowire transistors has not been without consequence, however.
As the dimensions of these fundamental building blocks of microelectronic circuitry are reduced and as the sheer number of fundamental building blocks fabricated in a given region is increased, the constraints on the lithographic processes used to pattern these building blocks have become overwhelming. In particular, there may be a trade-off between the smallest dimension of a feature patterned in a semiconductor stack (the critical dimension) and the spacing between such features.
BRIEF DESCRIPTION OF THE DRAWINGS
Figures 1A-1D illustrate angled cross-sectional views representing various operations in a method of fabricating an integrated circuit structure having a backside trench contact partial cut and/or split epitaxial structure, in accordance with an embodiment of the present disclosure.
Figures 2A-2C illustrate angled cross-sectional views representing various operations in a method of fabricating an integrated circuit structure having a backside gate partial cut, in accordance with an embodiment of the present disclosure.
Figure 3A illustrates a cross-sectional view of a structure including a trench contact without a backside trench contact partial cut and/or split epitaxial structure, and a cross-sectional view of a structure including a trench contact with a backside trench contact partial cut and/or split epitaxial structure, in accordance with an embodiment of the present disclosure.
Figure 3B illustrates a cross-sectional view of a structure including a gate electrode without a backside gate partial cut and a cross-sectional view of a structure including a gate electrode with a backside gate partial cut, in accordance with an embodiment of the present disclosure.
Figure 3C illustrates a cross-sectional view of a non-planar integrated circuit structure as taken along a gate line, in accordance with an embodiment of the present disclosure.
Figures 4A-4H illustrate plan views of a substrate processed with double-sided device processing methods, in accordance with
some embodiments.
Figures 5A-5H illustrate cross-sectional views of a substrate processed with double-sided device processing methods, in accordance with some embodiments.
Figure 6 illustrates a cross-sectional view taken through nanowires and fins for a non-endcap architecture, in accordance with an embodiment of the present disclosure.
Figure 7 illustrates a cross-sectional view taken through nanowires and fins for a self-aligned gate endcap (SAGE) architecture, in accordance with an embodiment of the present disclosure.
Figure 8A illustrates a three-dimensional cross-sectional view of a nanowire-based integrated circuit structure, in accordance with an embodiment of the present disclosure.
Figure 8B illustrates a cross-sectional source or drain view of the nanowire-based integrated circuit structure of Figure 8A, as taken along an a-a' axis, in accordance with an embodiment of the present disclosure.
Figure 8C illustrates a cross-sectional channel view of the nanowire-based integrated circuit structure of Figure 8A, as taken along the b-b' axis, in accordance with an embodiment of the present disclosure.
Figure 9 illustrates a computing device in accordance with one implementation of an embodiment of the disclosure.
Figure 10 illustrates an interposer that includes one or more embodiments of the disclosure.
DESCRIPTION OF THE EMBODIMENTS
Integrated circuit structures having a backside gate partial cut or a backside trench contact partial cut and/or split epitaxial structure, and methods of fabricating integrated circuit structures having a backside gate partial cut or a backside trench contact partial cut and/or split epitaxial structure, are described. In the following description, numerous specific details are set forth, such as specific integration and material regimes, in order to provide a thorough understanding of embodiments of the present disclosure.
It will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail in order to not unnecessarily obscure embodiments of the present disclosure. Furthermore, it is to be appreciated that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale.
Certain terminology may also be used in the following description for the purpose of reference only, and thus is not intended to be limiting. For example, terms such as "upper", "lower", "above", and "below" refer to directions in the drawings to which reference is made. Terms such as "front", "back", "rear", and "side" describe the orientation and/or location of portions of the component within a consistent but arbitrary frame of reference which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.
Embodiments described herein may be directed to front-end-of-line (FEOL) semiconductor processing and structures. FEOL is the first portion of integrated circuit (IC) fabrication where the individual devices (e.g., transistors, capacitors, resistors, etc.) are patterned in the semiconductor substrate or layer. FEOL generally covers everything up to (but not including) the deposition of metal interconnect layers. Following the last FEOL operation, the result is typically a wafer with isolated transistors (e.g., without any wires).
Embodiments described herein may be directed to back-end-of-line (BEOL) semiconductor processing and structures. BEOL is the second portion of IC fabrication where the individual devices (e.g., transistors, capacitors, resistors, etc.)
are interconnected with wiring on the wafer, e.g., the metallization layer or layers. BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connections. In the BEOL part of the fabrication stage, contacts (pads), interconnect wires, vias and dielectric structures are formed. For modern IC processes, more than 10 metal layers may be added in the BEOL.

Embodiments described below may be applicable to FEOL processing and structures, BEOL processing and structures, or both FEOL and BEOL processing and structures. In particular, although an exemplary processing scheme may be illustrated using a FEOL processing scenario, such approaches may also be applicable to BEOL processing. Likewise, although an exemplary processing scheme may be illustrated using a BEOL processing scenario, such approaches may also be applicable to FEOL processing.

One or more embodiments described herein are directed to integrated circuit structures having partially cut metal gates and/or having partially cut trench contact structures. One or more embodiments described herein are directed to FinFET structures. One or more embodiments described herein are directed to gate all around devices. It is to be appreciated that, unless indicated otherwise, reference to nanowires herein can indicate nanowires or nanoribbons. In one or more embodiments, backside sub-fin self-aligned trench contact partial cut and/or gate partial cut is described.

To provide context, in order to reduce a cell height in a future or scaled technology node, device capacitance needs to be mitigated. In an embodiment, device capacitance is reduced by removing a bulk portion of a gate metal using a sub-fin self-aligned process from the back side. In one embodiment, a sub-fin feature and guide spacer are used to remove a bulk of the gate metal.

To provide further context, source or drain epitaxial structure (epi) shorting can limit scaling of technologies.
In an embodiment, a sub-fin is used to guide an epi separation (epi-splitting) etch. In one embodiment, by using a non-selective etch, metal can be removed from the trench contact area and, possibly, the gate area to reduce device capacitance significantly.

In accordance with one or more embodiments of the present disclosure, addressing issues outlined above, a metal gate partial cut process is implemented from a backside of an integrated circuit structure. In accordance with one or more embodiments of the present disclosure, addressing issues outlined above, a trench contact partial cut and/or epi-splitting process is implemented from a backside of an integrated circuit structure.

In a first exemplary processing scheme, Figures 1A-1D illustrate angled cross-sectional views representing various operations in a method of fabricating an integrated circuit structure having a backside trench contact partial cut and/or split epitaxial structure, in accordance with an embodiment of the present disclosure.

Referring to Figure 1A, a starting structure 100 includes an integrated circuit structure supported face-down, e.g., on a carrier, and following a backside reveal process. The integrated circuit structure includes sub-fins 102 exposed through shallow trench isolation (STI) structures 104. A dielectric liner 103 may separate the sub-fins 102 from the STI structures 104, as is depicted. Each sub-fin 102 is over a corresponding epitaxial source or drain structure 112. The epitaxial source or drain structures 112 are coupled to a single continuous conductive contact structure 116. Insulator structures 114 are laterally between the epitaxial source or drain structures 112. A plurality of conductive contact structures 116 alternate with stacks of nanowires 106. Corresponding gate structures 108, such as structures including a metal gate electrode and gate dielectric layer, are around the nanowires 106.
The gate electrode is separated from the nanowires and from the sub-fins 102 by the gate dielectric layer, such as a high-k gate dielectric layer.

Referring to Figure 1B, the STI structures 104 are recessed or altogether removed, as is depicted, to reveal the sub-fins 102. In one embodiment, the STI structures 104 are recessed to an extent that exposes a high-k dielectric layer 109 of the gate structures 108, as is depicted. A spacer-forming layer 120, such as a layer including silicon nitride, is formed over the revealed sub-fins 102. The spacer-forming layer 120 may be formed by depositing a conformal layer, depositing a helmet etch stop layer 122 on the conformal layer, and then etching the conformal layer in the presence of the helmet etch stop layer 122 to form the spacer-forming layer 120.

Referring to Figure 1C, the structure of Figure 1B is then etched using the spacer-forming layer 120 as an etch mask. The etching forms patterned insulator structures 114A and etches into the conductive contact structure 116 to form patterned conductive contact structure 116A. The etching can also narrow the epitaxial source or drain structures 112 to form etched or trimmed epitaxial source or drain structures 112A. In one embodiment, in the case of merged epitaxial source or drain structures 112 which are not intended to be ultimately merged, the etch splits the merged epitaxial source or drain structures 112, e.g., to provide a backside epi-splitting process.

Referring to Figure 1D, an insulating layer 130 is formed over the structure of Figure 1C. In one embodiment, the insulating layer 130 is composed of a same material as the patterned insulator structures 114A, as is depicted. Further processing then can include planarizing from the backside (topside) to planarize the insulating layer 130 and to re-expose the sub-fins 102. The sub-fins 102 can be retained as a semiconductor material or can be replaced with insulator sub-fin structures.
Further processing on the front side (bottom side), such as interconnect metallization formation, can then be carried out.

With reference again to the above description of Figures 1A-1D, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a first sub-fin structure 102 over a first epitaxial source or drain structure 112A. A second sub-fin structure 102 is over a second epitaxial source or drain structure 112A. The integrated circuit structure also includes a conductive contact structure 116A. A first portion of the conductive contact structure is beneath the first epitaxial source or drain structure 112A, a second portion of the conductive contact structure is beneath the second epitaxial source or drain structure 112A, and a third portion of the conductive contact structure bridges the first and second portions of the conductive contact structure. The integrated circuit structure also includes a dielectric structure 130 between the first portion of the conductive contact structure 116A and the second portion of the conductive contact structure 116A. The dielectric structure 130 is over the third portion of the conductive contact structure 116A, and is continuous along the first and second portions of the conductive contact structure 116A and the first and second sub-fin structures 102.

In an embodiment, the conductive contact structure 116A is in direct contact with the dielectric structure 130. In an embodiment, the first and second epitaxial source or drain structures 112A are coupled to one or more stacks of nanowires 106. In an embodiment, the first and second sub-fin structures 102 are semiconductor sub-fin structures.
In another embodiment, the first and second sub-fin structures are insulator sub-fin structures.

In a second exemplary processing scheme, Figures 2A-2C illustrate angled cross-sectional views representing various operations in a method of fabricating an integrated circuit structure having a backside gate partial cut, in accordance with an embodiment of the present disclosure.

Referring to Figure 2A, a starting structure 200 includes an integrated circuit structure supported face-down, e.g., on a carrier, and following a backside reveal process. The integrated circuit structure includes sub-fins 202 over corresponding stacks of nanowires 206. A metal gate electrode 208 is around the stacks of nanowires 206. The gate electrode 208 is separated from the nanowires 206 and from the sub-fins 202 by a gate dielectric layer 209, such as a high-k gate dielectric layer. A plurality of conductive contact structures 216 and isolator structures 214 alternate with a plurality of such gate structures. The structure of Figure 2A can exemplify a process operation after STI structures are recessed or altogether removed and the sub-fins 202 are revealed. A spacer-forming layer 220, such as a layer including silicon nitride, is formed over the revealed sub-fins 202. The spacer-forming layer 220 may be formed by depositing a conformal layer, depositing a helmet etch stop layer 222 on the conformal layer, and then etching the conformal layer in the presence of the helmet etch stop layer 222 to form the spacer-forming layer 220.

Referring to Figure 2B, the structure of Figure 2A is then etched using the spacer-forming layer 220 as an etch mask. The etching forms patterned gate electrode 208A and patterned gate dielectric 209A, and can reveal dielectric spacers 201. The etching can remove a bulk portion of the gate electrode 208, e.g., for capacitance reduction.
The etching can also erode the helmet etch stop layer 222 and the spacer-forming layer 220.

Referring to Figure 2C, an insulating layer 230 is formed over the structure of Figure 2B. In one embodiment, the insulating layer 230 is composed of a same material as the isolator structures 214. Further processing then can include planarizing from the backside (topside) to planarize the insulating layer 230 and to re-expose the sub-fins 202. The sub-fins 202 can be retained as a semiconductor material or can be replaced with insulator sub-fin structures. Further processing on the front side (bottom side), such as interconnect metallization formation, can then be carried out.

With reference again to the above description of Figures 2A-2C, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a first sub-fin structure 202 over a first stack of nanowires 206. A second sub-fin structure 202 is over a second stack of nanowires 206. The integrated circuit structure also includes a gate electrode 208A. A first portion of the gate electrode 208A is around the first stack of nanowires 206, a second portion of the gate electrode 208A is around the second stack of nanowires 206, and a third portion of the gate electrode 208A bridges the first and second portions of the gate electrode 208A. The integrated circuit structure also includes a dielectric structure 230 between the first portion of the gate electrode 208A and the second portion of the gate electrode 208A. The dielectric structure 230 is over the third portion of the gate electrode 208A. The dielectric structure 230 is continuous along the first and second portions of the gate electrode 208A and the first and second sub-fin structures 202.

In an embodiment, the first, second and third portions of the gate electrode 208A are in direct contact with the dielectric structure 230.
In an embodiment, a gate dielectric layer 209A separates the first portion of the gate electrode 208A from the first stack of nanowires 206, and separates the second portion of the gate electrode 208A from the second stack of nanowires 206. In an embodiment, the first and second sub-fin structures 202 are semiconductor sub-fin structures. In another embodiment, the first and second sub-fin structures 202 are insulator sub-fin structures.

As a first comparative example, Figure 3A illustrates a cross-sectional view of a structure including a trench contact without a backside trench contact partial cut and/or split epitaxial structure, and a cross-sectional view of a structure including a trench contact with a backside trench contact partial cut and/or split epitaxial structure, in accordance with an embodiment of the present disclosure.

Referring to structure 300 of Figure 3A, an integrated circuit structure 300 includes epitaxial source or drain structures 312. A single continuous conductive contact structure 316 is over the epitaxial source or drain structures 312. A conductive barrier layer 315 can be included as part of the conductive contact structure 316, as is depicted. A trench contact via 317 can be coupled to the conductive contact structure 316, as is depicted. A dielectric spacer 303 is along lower surfaces of the epitaxial source or drain structures 312. The dielectric spacer 303 is on STI structures 304. In some examples, the epitaxial source or drain structures 312 can be inadvertently merged, possibly leading to device malfunction. Additionally, the volume of the conductive contact structure 316 can lead to excessive capacitance.

Referring to structure 350 of Figure 3A, by contrast to structure 300, an integrated circuit structure 350 includes epitaxial source or drain structures 362. In one embodiment, the epitaxial source or drain structures 362 have trimmed sidewalls 363, as is depicted.
A single continuous conductive contact structure 366 is over the epitaxial source or drain structures 362. A conductive barrier layer 365 can be included as part of the conductive contact structure 366, as is depicted. A trench contact via 367 can be coupled to the conductive contact structure 366, as is depicted. A dielectric structure 368 is between the epitaxial source or drain structures 362. The conductive contact structure 366 (which can include the conductive barrier layer 365) is directly on the dielectric structure 368.

As a second comparative example, Figure 3B illustrates a cross-sectional view of a structure including a gate electrode without a backside gate partial cut and a cross-sectional view of a structure including a gate electrode with a backside gate partial cut, in accordance with an embodiment of the present disclosure.

Referring to structure 320 of Figure 3B, an integrated circuit structure 320 includes stacks of nanowires 326 which may be over an underlying sub-fin 322 and under a corresponding insulator cap 327. A single metal gate electrode 328 is around the stacks of nanowires 326. A gate dielectric 329 is between the metal gate electrode 328 and the stacks of nanowires 326. An insulating cap and/or spacer 330 can be over the gate electrode 328, as is depicted. A trench contact via 332 can be coupled to the gate electrode 328, as is depicted. The sub-fins 322 are between STI structures 338. In some examples, the volume of the metal gate electrode 328 can lead to excessive capacitance.

Referring to structure 370 of Figure 3B, by contrast to structure 320, an integrated circuit structure 370 includes stacks of nanowires 376 which may be over an underlying sub-fin 372 and under a corresponding insulator cap 377. A single metal gate electrode 378 is around the stacks of nanowires 376. A gate dielectric 379 is between the metal gate electrode 378 and the stacks of nanowires 376.
An insulating cap and/or spacer 380 can be over the gate electrode 378, as is depicted. A trench contact via 382 can be coupled to the gate electrode 378, as is depicted. A dielectric structure 388 is between the stacks of nanowires 376. The gate electrode 378 is directly on the dielectric structure 388. In some examples, the volume of the metal gate electrode 378 is effectively reduced versus metal gate electrode 328, which can mitigate capacitance issues.

It is to be appreciated that, as used throughout the disclosure, a sub-fin, a nanowire, a nanoribbon, or a fin described herein may be a silicon sub-fin, a silicon nanowire, a silicon nanoribbon, or a silicon fin. As used throughout, a silicon layer or structure may be used to describe a silicon material composed of a very substantial amount of, if not all, silicon. However, it is to be appreciated that, practically, 100% pure Si may be difficult to form and, hence, could include a tiny percentage of carbon, germanium or tin. Such impurities may be included as an unavoidable impurity or component during deposition of Si or may "contaminate" the Si upon diffusion during post deposition processing. As such, embodiments described herein directed to a silicon layer or structure may include a silicon layer or structure that contains a relatively small amount, e.g., an "impurity" level, of non-Si atoms or species, such as Ge, C or Sn. It is to be appreciated that a silicon layer or structure as described herein may be undoped or may be doped with dopant atoms such as boron, phosphorus or arsenic.

It is to be appreciated that, as used throughout the disclosure, a sub-fin, a nanowire, a nanoribbon, or a fin described herein may be a silicon germanium sub-fin, a silicon germanium nanowire, a silicon germanium nanoribbon, or a silicon germanium fin.
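To provide a rough illustration of why reducing the volume of a gate electrode or trench contact mitigates capacitance, as for structure 370 versus structure 320, the parasitic coupling between two adjacent conductors can be approximated with a first-order parallel-plate model (an illustrative estimate only, not a parameter of the processes described herein):

\[
C \approx \frac{\varepsilon A}{d}
\]

where \(\varepsilon\) is the permittivity of the intervening dielectric, \(A\) is the facing area between the two conductors, and \(d\) is their separation. A backside partial cut that replaces a bulk portion of the gate electrode or trench contact with dielectric reduces the facing area \(A\) presented to neighboring conductors, and therefore reduces the coupling capacitance roughly in proportion.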
As used throughout, a silicon germanium layer or structure may be used to describe a silicon germanium material composed of substantial portions of both silicon and germanium, such as at least 5% of both. In some embodiments, the amount of germanium is greater than the amount of silicon. In particular embodiments, a silicon germanium layer or structure includes approximately 60% germanium and approximately 40% silicon (Si40Ge60). In other embodiments, the amount of silicon is greater than the amount of germanium. In particular embodiments, a silicon germanium layer or structure includes approximately 30% germanium and approximately 70% silicon (Si70Ge30). It is to be appreciated that, practically, 100% pure silicon germanium (referred to generally as SiGe) may be difficult to form and, hence, could include a tiny percentage of carbon or tin. Such impurities may be included as an unavoidable impurity or component during deposition of SiGe or may "contaminate" the SiGe upon diffusion during post deposition processing. As such, embodiments described herein directed to a silicon germanium layer or structure may include a silicon germanium layer or structure that contains a relatively small amount, e.g., an "impurity" level, of non-Ge and non-Si atoms or species, such as carbon or tin. It is to be appreciated that a silicon germanium layer or structure as described herein may be undoped or may be doped with dopant atoms such as boron, phosphorus or arsenic.

It is to be appreciated that the integrated circuit structures described above in association with Figures 1D and/or 2C and/or structure 350 of Figure 3A and/or structure 370 of Figure 3B can be co-integrated with other backside revealed integrated circuit structures. Additionally or alternatively, other integrated circuit structures can be fabricated using processes described in association with Figures 1A-1D and/or 2A-2C.
As an example of a backside revealed device, Figure 3C illustrates a cross-sectional view of a non-planar integrated circuit structure as taken along a gate line, in accordance with an embodiment of the present disclosure.

Referring to Figure 3C, a semiconductor structure or device 390 includes a non-planar active region (e.g., a solid fin structure including protruding fin portion 391 and sub-fin region 392) within a trench isolation region 393. In another embodiment, instead of a solid fin, the non-planar active region is separated into nanowires (such as nanowires 391A and 391B) above sub-fin region 392, as is represented by the dashed lines. In either case, for ease of description for non-planar integrated circuit structure 390, a non-planar active region 391 is referenced below as a protruding fin portion. It is to be appreciated that, in one embodiment, there is no bulk substrate coupled to the sub-fin region 392.

A gate line 394 is disposed over the protruding portions 391 of the non-planar active region (including, if applicable, surrounding nanowires 391A and 391B), as well as over a portion of the trench isolation region 393. As shown, gate line 394 includes a gate electrode 397 and a gate dielectric layer 398. In one embodiment, gate line 394 may also include a dielectric cap layer 399. A gate contact 395 and overlying gate contact via 396 are also seen from this perspective, along with an overlying metal interconnect 385, all of which are disposed in inter-layer dielectric stacks or layers 389. Also seen from the perspective of Figure 3C, the gate contact 395 is, in one embodiment, disposed over trench isolation region 393, but not over the non-planar active regions.
In accordance with an embodiment of the present disclosure, a portion of the gate electrode 397 can be removed in locations 387 from the bottom side of device 390, e.g., for capacitance reduction, according to a process described above in association with Figures 2A-2C and 3B.

In an embodiment, the semiconductor structure or device 390 is a non-planar device such as, but not limited to, a fin-FET device, a tri-gate device, a nano-ribbon device, or a nano-wire device. In such an embodiment, a corresponding semiconducting channel region is composed of or is formed in a three-dimensional body. In one such embodiment, the gate electrode stacks of gate lines 394 surround at least a top surface and a pair of sidewalls of the three-dimensional body.

As is also depicted in Figure 3C, in an embodiment, an interface 383 exists between a protruding fin portion 391 and sub-fin region 392. The interface 383 can be a transition region between a doped sub-fin region 392 and a lightly or undoped upper fin portion 391. In one such embodiment, each fin is approximately 10 nanometers wide or less, and sub-fin dopants are supplied from an adjacent solid state doping layer at the sub-fin location. In a particular such embodiment, each fin is less than 10 nanometers wide. In another embodiment, the sub-fin region is a dielectric material, formed by recessing the fin through a wet or dry etch, and filling the recessed cavity with a conformal or flowable dielectric.

Although not depicted in Figure 3C, it is to be appreciated that source or drain regions of or adjacent to the protruding fin portions 391 are on either side of the gate line 394, i.e., into and out of the page. In one embodiment, the source or drain regions are doped portions of original material of the protruding fin portions 391.
In another embodiment, the material of the protruding fin portions 391 is removed and replaced with another semiconductor material, e.g., by epitaxial deposition to form discrete epitaxial nubs or non-discrete epitaxial structures. In either embodiment, the source or drain regions may extend below the height of the dielectric layer of trench isolation region 393, i.e., into the sub-fin region 392. In accordance with an embodiment of the present disclosure, the more heavily doped sub-fin regions, i.e., the doped portions of the fins below interface 383, inhibit source to drain leakage through this portion of the bulk semiconductor fins.

With reference again to Figure 3C, in an embodiment, fins 391/392 (and, possibly, nanowires 391A and 391B) are composed of a crystalline silicon, silicon/germanium or germanium layer doped with a charge carrier, such as but not limited to phosphorus, arsenic, boron or a combination thereof. In one embodiment, the concentration of silicon atoms is greater than 93%. In another embodiment, fins 391/392 are composed of a group III-V material, such as, but not limited to, gallium nitride, gallium phosphide, gallium arsenide, indium phosphide, indium antimonide, indium gallium arsenide, aluminum gallium arsenide, indium gallium phosphide, or a combination thereof. Trench isolation region 393 may be composed of a dielectric material such as, but not limited to, silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride.

Gate line 394 may be composed of a gate electrode stack which includes a gate dielectric layer 398 and a gate electrode layer 397. In an embodiment, the gate electrode of the gate electrode stack is composed of a metal gate and the gate dielectric layer is composed of a high-k material.
For example, in one embodiment, the gate dielectric layer is composed of a material such as, but not limited to, hafnium oxide, hafnium oxy-nitride, hafnium silicate, lanthanum oxide, zirconium oxide, zirconium silicate, tantalum oxide, barium strontium titanate, barium titanate, strontium titanate, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, lead zinc niobate, or a combination thereof. Furthermore, a portion of the gate dielectric layer may include a layer of native oxide formed from the top few layers of the substrate fin 391. In an embodiment, the gate dielectric layer is composed of a top high-k portion and a lower portion composed of an oxide of a semiconductor material. In one embodiment, the gate dielectric layer is composed of a top portion of hafnium oxide and a bottom portion of silicon dioxide or silicon oxy-nitride. In some implementations, a portion of the gate dielectric is a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate.

In one embodiment, the gate electrode is composed of a metal layer such as, but not limited to, metal nitrides, metal carbides, metal silicides, metal aluminides, hafnium, zirconium, titanium, tantalum, aluminum, ruthenium, palladium, platinum, cobalt, nickel or conductive metal oxides. In a specific embodiment, the gate electrode is composed of a non-workfunction-setting fill material formed above a metal workfunction-setting layer. The gate electrode layer may consist of a P-type workfunction metal or an N-type workfunction metal, depending on whether the transistor is to be a PMOS or an NMOS transistor. In some implementations, the gate electrode layer may consist of a stack of two or more metal layers, where one or more metal layers are workfunction metal layers and at least one metal layer is a conductive fill layer.
For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a workfunction that is between about 4.9 eV and about 5.2 eV. For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a workfunction that is between about 3.9 eV and about 4.2 eV. In some implementations, the gate electrode may consist of a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In another implementation, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In further implementations of the disclosure, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.

Spacers associated with the gate electrode stacks may be composed of a material suitable to ultimately electrically isolate, or contribute to the isolation of, a permanent gate structure from adjacent conductive contacts, such as self-aligned contacts.
For example, in one embodiment, the spacers are composed of a dielectric material such as, but not limited to, silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride.

Gate contact 395 and overlying gate contact via 396 may be composed of a conductive material. In an embodiment, one or more of the contacts or vias are composed of a metal species. The metal species may be a pure metal, such as tungsten, nickel, or cobalt, or may be an alloy such as a metal-metal alloy or a metal-semiconductor alloy (e.g., such as a silicide material).

In an embodiment (although not shown), a contact pattern which is essentially perfectly aligned to an existing gate pattern 394 is formed while eliminating the use of a lithographic step with an exceedingly tight registration budget. In one such embodiment, the self-aligned approach enables the use of intrinsically highly selective wet etching (e.g., versus conventionally implemented dry or plasma etching) to generate contact openings. In an embodiment, a contact pattern is formed by utilizing an existing gate pattern in combination with a contact plug lithography operation. In one such embodiment, the approach enables elimination of the need for an otherwise critical lithography operation to generate a contact pattern, as used in conventional approaches. In an embodiment, a trench contact grid is not separately patterned, but is rather formed between poly (gate) lines. For example, in one such embodiment, a trench contact grid is formed subsequent to gate grating patterning but prior to gate grating cuts.

In an embodiment, providing structure 390 involves fabrication of the gate stack structure 394 by a replacement gate process. In such a scheme, dummy gate material, such as polysilicon or silicon nitride pillar material, may be removed and replaced with permanent gate electrode material.
In one such embodiment, a permanent gate dielectric layer is also formed in this process, as opposed to being carried through from earlier processing. In an embodiment, dummy gates are removed by a dry etch or wet etch process. In one embodiment, dummy gates are composed of polycrystalline silicon or amorphous silicon and are removed with a dry etch process including use of SF6. In another embodiment, dummy gates are composed of polycrystalline silicon or amorphous silicon and are removed with a wet etch process including use of aqueous NH4OH or tetramethylammonium hydroxide. In one embodiment, dummy gates are composed of silicon nitride and are removed with a wet etch including aqueous phosphoric acid.

Referring again to Figure 3C, the arrangement of semiconductor structure or device 390 places the gate contact over isolation regions. Such an arrangement may be viewed as inefficient use of layout space. In another embodiment, however, a semiconductor device has contact structures that contact portions of a gate electrode formed over an active region, e.g., over a sub-fin 392, and in a same layer as a trench contact via.

It is to be appreciated that not all aspects of the processes described above need be practiced to fall within the spirit and scope of embodiments of the present disclosure. For example, in one embodiment, dummy gates need not ever be formed prior to fabricating gate contacts over active portions of the gate stacks. The gate stacks described above may actually be permanent gate stacks as initially formed. Also, the processes described herein may be used to fabricate one or a plurality of semiconductor devices. The semiconductor devices may be transistors or like devices. For example, in an embodiment, the semiconductor devices are metal-oxide semiconductor (MOS) transistors for logic or memory, or are bipolar transistors.
Also, in an embodiment, the semiconductor devices have a three-dimensional architecture, such as a trigate device, an independently accessed double gate device, a gate all around (GAA) device, a nanowire device, a nanoribbon device, or a FIN-FET. One or more embodiments may be particularly useful for fabricating semiconductor devices at a sub-10 nanometer (10 nm) technology node.

In an embodiment, as used throughout the present description, interlayer dielectric (ILD) material is composed of or includes a layer of a dielectric or insulating material. Examples of suitable dielectric materials include, but are not limited to, oxides of silicon (e.g., silicon dioxide (SiO2)), doped oxides of silicon, fluorinated oxides of silicon, carbon doped oxides of silicon, various low-k dielectric materials known in the arts, and combinations thereof. The interlayer dielectric material may be formed by conventional techniques, such as, for example, chemical vapor deposition (CVD), physical vapor deposition (PVD), or by other deposition methods.

In an embodiment, as is also used throughout the present description, metal lines or interconnect line material (and via material) is composed of one or more metal or other conductive structures. A common example is the use of copper lines and structures that may or may not include barrier layers between the copper and surrounding ILD material. As used herein, the term metal includes alloys, stacks, and other combinations of multiple metals. For example, the metal interconnect lines may include barrier layers (e.g., layers including one or more of Ta, TaN, Ti or TiN), stacks of different metals or alloys, etc. Thus, the interconnect lines may be a single material layer, or may be formed from several layers, including conductive liner layers and fill layers. Any suitable deposition process, such as electroplating, chemical vapor deposition or physical vapor deposition, may be used to form interconnect lines.
In an embodiment, the interconnect lines are composed of a conductive material such as, but not limited to, Cu, Al, Ti, Zr, Hf, V, Ru, Co, Ni, Pd, Pt, W, Ag, Au or alloys thereof. The interconnect lines are also sometimes referred to in the art as traces, wires, lines, metal, or simply interconnect.

In an embodiment, as is also used throughout the present description, hardmask materials, capping layers, or plugs are composed of dielectric materials different from the interlayer dielectric material. In one embodiment, different hardmask, capping or plug materials may be used in different regions so as to provide different growth or etch selectivity to each other and to the underlying dielectric and metal layers. In some embodiments, a hardmask layer, capping or plug layer includes a layer of a nitride of silicon (e.g., silicon nitride) or a layer of an oxide of silicon, or both, or a combination thereof. Other suitable materials may include carbon-based materials. Other hardmask, capping or plug layers known in the arts may be used depending upon the particular implementation. The hardmask, capping or plug layers may be formed by CVD, PVD, or by other deposition methods.

In an embodiment, as is also used throughout the present description, lithographic operations are performed using 193nm immersion litho (i193), EUV and/or EBDW lithography, or the like. A positive tone or a negative tone resist may be used. In one embodiment, a lithographic mask is a trilayer mask composed of a topographic masking portion, an anti-reflective coating (ARC) layer, and a photoresist layer. In a particular such embodiment, the topographic masking portion is a carbon hardmask (CHM) layer and the anti-reflective coating layer is a silicon ARC layer.

In another aspect, integrated circuit structures described herein may be fabricated using a back-side reveal of front-side structures fabrication approach.
In some exemplary embodiments, reveal of the back-side of a transistor or other device structure entails wafer-level back-side processing. In contrast to a conventional TSV-type technology, a reveal of the back-side of a transistor as described herein may be performed at the density of the device cells, and even within sub-regions of a device. Furthermore, such a reveal of the back-side of a transistor may be performed to remove substantially all of a donor substrate upon which a device layer was disposed during front-side device processing. As such, a microns-deep TSV becomes unnecessary with the thickness of semiconductor in the device cells following a reveal of the back-side of a transistor potentially being only tens or hundreds of nanometers.

Reveal techniques described herein may enable a paradigm shift from "bottom-up" device fabrication to "center-out" fabrication, where the "center" is any layer that is employed in front-side fabrication, revealed from the backside, and again employed in back-side fabrication. Processing of both a front side and revealed backside of a device structure may address many of the challenges associated with fabricating 3D ICs when primarily relying on front-side processing.

A reveal of the back-side of a transistor approach may be employed for example to remove at least a portion of a carrier layer and intervening layer of a donor-host substrate assembly, for example as illustrated in Figures 4A-4H and 5A-5H, described below. The process flow begins with an input of a donor-host substrate assembly. A thickness of a carrier layer in the donor-host substrate is polished (e.g., CMP) and/or etched with a wet or dry (e.g., plasma) etch process. Any grind, polish, and/or wet/dry etch process known to be suitable for the composition of the carrier layer may be employed. For example, where the carrier layer is a group IV semiconductor (e.g., silicon) a CMP slurry known to be suitable for thinning the semiconductor may be employed.
Likewise, any wet etchant or plasma etch process known to be suitable for thinning the group IV semiconductor may also be employed.

In some embodiments, the above is preceded by cleaving the carrier layer along a fracture plane substantially parallel to the intervening layer. The cleaving or fracture process may be utilized to remove a substantial portion of the carrier layer as a bulk mass, reducing the polish or etch time needed to remove the carrier layer. For example, where a carrier layer is 400-900 µm in thickness, 100-700 µm may be cleaved off by practicing any blanket implant known to promote a wafer-level fracture. In some exemplary embodiments, a light element (e.g., H, He, or Li) is implanted to a uniform target depth within the carrier layer where the fracture plane is desired. Following such a cleaving process, the thickness of the carrier layer remaining in the donor-host substrate assembly may then be polished or etched to complete removal. Alternatively, where the carrier layer is not fractured, the grind, polish and/or etch operation may be employed to remove a greater thickness of the carrier layer.

Next, exposure of an intervening layer is detected. Detection is used to identify a point when the back-side surface of the donor substrate has advanced to nearly the device layer. Any endpoint detection technique known to be suitable for detecting a transition between the materials employed for the carrier layer and the intervening layer may be practiced. In some embodiments, one or more endpoint criteria are based on detecting a change in optical absorbance or emission of the back-side surface of the donor substrate during the polishing or etching performed. In some other embodiments, the endpoint criteria are associated with a change in optical absorbance or emission of byproducts during the polishing or etching of the donor substrate back-side surface.
For example, absorbance or emission wavelengths associated with the carrier layer etch byproducts may change as a function of the different compositions of the carrier layer and intervening layer. In other embodiments, the endpoint criteria are associated with a change in mass of species in byproducts of polishing or etching the back-side surface of the donor substrate. For example, the byproducts of processing may be sampled through a quadrupole mass analyzer and a change in the species mass may be correlated to the different compositions of the carrier layer and intervening layer. In another exemplary embodiment, the endpoint criterion is associated with a change in friction between a back-side surface of the donor substrate and a polishing surface in contact with the back-side surface of the donor substrate.

Detection of the intervening layer may be enhanced where the removal process is selective to the carrier layer relative to the intervening layer, as non-uniformity in the carrier removal process may be mitigated by an etch rate delta between the carrier layer and intervening layer. Detection may even be skipped if the grind, polish and/or etch operation removes the intervening layer at a rate sufficiently below the rate at which the carrier layer is removed. If an endpoint criterion is not employed, a grind, polish and/or etch operation of a predetermined fixed duration may stop on the intervening layer material if the thickness of the intervening layer is sufficient for the selectivity of the etch. In some examples, the ratio of carrier etch rate to intervening layer etch rate is 3:1-10:1, or more.

Upon exposing the intervening layer, at least a portion of the intervening layer may be removed. For example, one or more component layers of the intervening layer may be removed. A thickness of the intervening layer may be removed uniformly by a polish, for example. Alternatively, a thickness of the intervening layer may be removed with a masked or blanket etch process.
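The endpoint criteria described above (optical emission, byproduct mass, pad friction) all reduce to the same logic: monitor a signal during removal and stop when it departs from the carrier-etch baseline. The sketch below is only an illustration of that criterion, not a tool recipe; the signal trace, baseline window, and threshold fraction are hypothetical values chosen for the example.

```python
# Sketch of an endpoint criterion: stop when the monitored byproduct
# signal deviates from the carrier-etch baseline by a chosen fraction.
# The trace, baseline window, and threshold are hypothetical.

def find_endpoint(samples, baseline_n=5, threshold=0.2):
    """Return the index where the signal departs from the baseline mean
    by more than `threshold` (fractional change), else None."""
    if len(samples) < baseline_n:
        return None
    baseline = sum(samples[:baseline_n]) / baseline_n
    for i, s in enumerate(samples[baseline_n:], start=baseline_n):
        if abs(s - baseline) / baseline > threshold:
            return i
    return None

# Emission while etching the carrier is steady near 1.0; exposing the
# compositionally distinct intervening layer shifts the byproduct signal.
trace = [1.00, 1.02, 0.99, 1.01, 0.98, 1.00, 1.01, 0.55, 0.50]
print("endpoint at sample", find_endpoint(trace))  # detects the drop at index 7
```

The same comparison could equally run against byproduct species mass or pad friction; only the monitored quantity changes, not the threshold-crossing logic.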
The process may employ the same polish or etch process as that employed to thin the carrier, or may be a distinct process with distinct process parameters. For example, where the intervening layer provides an etch stop for the carrier removal process, the latter operation may employ a different polish or etch process that favors removal of the intervening layer over removal of the device layer. Where less than a few hundred nanometers of intervening layer thickness is to be removed, the removal process may be relatively slow, optimized for across-wafer uniformity, and more precisely controlled than that employed for removal of the carrier layer. A CMP process employed may, for example, employ a slurry that offers very high selectivity (e.g., 100:1-300:1, or more) between semiconductor (e.g., silicon) and dielectric material (e.g., SiO) surrounding the device layer and embedded within the intervening layer, for example, as electrical isolation between adjacent device regions.

For embodiments where the device layer is revealed through complete removal of the intervening layer, backside processing may commence on an exposed backside of the device layer or specific device regions therein. In some embodiments, the backside device layer processing includes a further polish or wet/dry etch through a thickness of the device layer disposed between the intervening layer and a device region previously fabricated in the device layer, such as a source or drain region.

In some embodiments where the carrier layer, intervening layer, or device layer backside is recessed with a wet and/or plasma etch, such an etch may be a patterned etch or a materially selective etch that imparts significant non-planarity or topography into the device layer back-side surface. As described further below, the patterning may be within a device cell (i.e., "intra-cell" patterning) or may be across device cells (i.e., "inter-cell" patterning).
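The selectivity figures cited above imply a simple removal budget: when a process must over-polish by some margin to clear a layer everywhere across the wafer, the material it stops on loses that margin divided by the selectivity. The arithmetic sketch below uses the 100:1-300:1 selectivity range from the text; the over-polish margin is a hypothetical number for illustration.

```python
# Removal-budget arithmetic implied by etch/CMP selectivity: the layer
# the process stops on loses (over-removal margin) / selectivity.
# The over-polish margin is hypothetical; the 100:1 and 300:1
# selectivities are the range cited in the text.

def stop_layer_loss_nm(overpolish_nm: float, selectivity: float) -> float:
    """Thickness (nm) consumed from the stop material during over-polish."""
    return overpolish_nm / selectivity

margin = 300.0  # nm of hypothetical across-wafer over-polish to clear everywhere
for s in (100.0, 300.0):
    print(f"selectivity {s:.0f}:1 -> {stop_layer_loss_nm(margin, s):.1f} nm lost")
# 300 nm of over-polish costs 3 nm of stop material at 100:1, 1 nm at 300:1.
```

This is why a highly selective slurry permits precise control over the thin intervening-layer removal even when the carrier removal before it was comparatively non-uniform.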
In some patterned etch embodiments, at least a partial thickness of the intervening layer is employed as a hard mask for back-side device layer patterning. Hence, a masked etch process may preface a correspondingly masked device layer etch.

The above-described processing scheme may result in a donor-host substrate assembly that includes IC devices that have a backside of an intervening layer, a backside of the device layer, and/or backside of one or more semiconductor regions within the device layer, and/or front-side metallization revealed. Additional backside processing of any of these revealed regions may then be performed during downstream processing.

In accordance with one or more embodiments of the present disclosure, in order to enable backside access to a partitioned source or drain contact structure, a double-sided device processing scheme may be practiced at the wafer-level. In some exemplary embodiments, a large format substrate (e.g., 300 or 450 mm diameter) wafer may be processed. In an exemplary processing scheme, a donor substrate including a device layer is provided. In some embodiments, the device layer is a semiconductor material that is employed by an IC device. As one example, in a transistor device, such as a field effect transistor (FET), the channel semiconductor is formed from the semiconductor device layer. As another example, for an optical device, such as a photodiode, the drift and/or gain semiconductor is formed from the device layer. The device layer may also be employed in a passive structure with an IC device. For example, an optical waveguide may employ semiconductor patterned from the device layer.

In some embodiments, the donor substrate includes a stack of material layers. Such a material stack may facilitate subsequent formation of an IC device stratum that includes the device layer but lacks other layers of the donor substrate.
In an exemplary embodiment, the donor substrate includes a carrier layer separated from the device layer by one or more intervening material layers. The carrier layer is to provide mechanical support during front-side processing of the device layer. The carrier may also provide the basis for crystallinity in the semiconductor device layer. The intervening layer(s) may facilitate removal of the carrier layer and/or the reveal of the device layer backside.

Front-side fabrication operations are then performed to form a device structure that includes one or more regions in the device layer. Any known front-side processing techniques may be employed to form any known IC device and exemplary embodiments are further described elsewhere herein. A front-side of the donor substrate is then joined to a host substrate to form a device-host assembly. The host substrate is to provide front-side mechanical support during back-side processing of the device layer. The host substrate may also entail integrated circuitry with which the IC devices fabricated on the donor substrate are interconnected. For such embodiments, joining of the host and donor substrate may further entail formation of 3D interconnect structures through hybrid (dielectric/metal) bonding. Any known host substrate and wafer-level joining techniques may be employed.

The process flow continues where the back-side of the device stratum is revealed by removing at least a portion of the carrier layer. In some further embodiments, portions of any intervening layer and/or front-side materials deposited over the device layer may also be removed during the reveal operation. As described elsewhere herein in the context of some exemplary embodiments, an intervening layer(s) may facilitate a highly-uniform exposure of the device stratum back-side, for example serving as one or more of an etch marker or etch stop employed in the wafer-level backside reveal process.
Device stratum surfaces exposed from the backside are processed to form a double-side device stratum. Native materials, such as any of those of the donor substrate, which interfaced with the device regions may then be replaced with one or more non-native materials. For example, a portion of a semiconductor device layer or intervening layer may be replaced with one or more other semiconductor, metal, or dielectric materials. In some further embodiments, portions of the front-side materials removed during the reveal operation may also be replaced. For example, a portion of a dielectric spacer, gate stack, or contact metallization formed during front-side device fabrication may be replaced with one or more other semiconductor, metal, or dielectric materials during backside deprocessing/reprocessing of the front-side device. In still other embodiments, a second device stratum or metal interposer is bonded to the revealed backside.

The above process flow provides a device stratum-host substrate assembly. The device stratum-host assembly may then be further processed. For example, any known technique may be employed to singulate and package the device stratum-host substrate assembly. Where the host substrate is entirely sacrificial, packaging of the device stratum-host substrate may entail separation of the host substrate from the device stratum. Where the host substrate is not entirely sacrificial (e.g., where the host substrate also includes a device stratum), the device stratum-host assembly output may be fed back as a host substrate input during a subsequent iteration of the above process flow. Iteration of the above approach may thus form a wafer-level assembly of any number of double-side device strata, each only tens or hundreds of nanometers in thickness, for example.
In some embodiments, and as further described elsewhere herein, one or more device cells within a device stratum are electrically tested, for example as a yield control point in the fabrication of a wafer-level assembly of double-side device strata. In some embodiments, the electrical test entails back-side device probing.

Figures 4A-4H illustrate plan views of a substrate processed with double-sided device processing methods, in accordance with some embodiments. Figures 5A-5H illustrate cross-sectional views of a substrate processed with double-sided device processing methods, in accordance with some embodiments.

As shown in Figures 4A and 5A, donor substrate 401 includes a plurality of IC die 411 in an arbitrary spatial layout over a front-side wafer surface. Front-side processing of IC die 411 may have been performed following any techniques to form any device structures. In exemplary embodiments, die 411 include one or more semiconductor regions within device layer 415. An intervening layer 410 separates device layer 415 from carrier layer 405. In the exemplary embodiment, intervening layer 410 is in direct contact with both carrier layer 405 and device layer 415. Alternatively, one or more spacer layers may be disposed between intervening layer 410 and device layer 415 and/or carrier layer 405. Donor substrate 401 may further include other layers, for example disposed over device layer 415 and/or below carrier layer 405.

Device layer 415 may include one or more layers of any device material composition known to be suitable for a particular IC device, such as, but not limited to, transistors, diodes, and resistors. In some exemplary embodiments, device layer 415 includes one or more group IV (i.e., IUPAC group 14) semiconductor material layers (e.g., Si, Ge, SiGe), group III-V semiconductor material layers (e.g., GaAs, InGaAs, InAs, InP), or group III-N semiconductor material layers (e.g., GaN, AlGaN, InGaN).
Device layer 415 may also include one or more semiconductor transition metal dichalcogenide (TMD or TMDC) layers. In other embodiments, device layer 415 includes one or more graphene layers, or a graphenic material layer having semiconductor properties. In still other embodiments, device layer 415 includes one or more oxide semiconductor layers. Exemplary oxide semiconductors include oxides of a transition metal (e.g., IUPAC group 4-10) or post-transition metal (e.g., IUPAC groups 11-14). In advantageous embodiments, the oxide semiconductor includes at least one of Cu, Zn, Sn, Ti, Ni, Ga, In, Sr, Cr, Co, V, or Mo. The metal oxides may be suboxides (A2O), monoxides (AO), binary oxides (AO2), ternary oxides (ABO3), and mixtures thereof. In other embodiments, device layer 415 includes one or more magnetic, ferromagnetic, or ferroelectric material layers. For example, device layer 415 may include one or more layers of any material known to be suitable for a tunneling junction device, such as, but not limited to, a magnetic tunneling junction (MTJ) device.

In some embodiments, device layer 415 is substantially monocrystalline. Although monocrystalline, a significant number of crystalline defects may nonetheless be present. In other embodiments, device layer 415 is amorphous or nanocrystalline. Device layer 415 may be any thickness (e.g., z-dimension in Figure 5A). In some exemplary embodiments, device layer 415 has a thickness greater than a z-thickness of at least some of the semiconductor regions employed by die 411, as functional semiconductor regions of die 411 built on and/or embedded within device layer 415 need not extend through the entire thickness of device layer 415. In some embodiments, semiconductor regions of die 411 are disposed only within a top-side thickness of device layer 415 demarked in Figure 5A by dashed line 412.
For example, semiconductor regions of die 411 may have a z-thickness of 200-300 nm, or less, while device layer 415 may have a z-thickness of 700-1000 nm, or more. As such, around 600 nm of device layer thickness may separate semiconductor regions of die 411 from intervening layer 410.

Carrier layer 405 may have the same material composition as device layer 415, or may have a material composition different than device layer 415. For embodiments where carrier layer 405 and device layer 415 have the same composition, the two layers may be identified by their position relative to intervening layer 410. In some embodiments where device layer 415 is a crystalline group IV, group III-V or group III-N semiconductor, carrier layer 405 is the same crystalline group IV, group III-V or group III-N semiconductor as device layer 415. In alternative embodiments, where device layer 415 is a crystalline group IV, group III-V or group III-N semiconductor, carrier layer 405 is a different crystalline group IV, group III-V or group III-N semiconductor than device layer 415. In still other embodiments, carrier layer 405 may include, or be, a material onto which device layer 415 was transferred, or upon which it was grown. For example, carrier layer 405 may include one or more amorphous oxide layers (e.g., glass) or crystalline oxide layers (e.g., sapphire), polymer sheets, or any material(s) built up or laminated into a structural support known to be suitable as a carrier during IC device processing. Carrier layer 405 may be any thickness (e.g., z-dimension in Figure 5A) as a function of the carrier material properties and the substrate diameter. For example, where the carrier layer 405 is a large format (e.g., 300-450 mm) semiconductor substrate, the carrier layer thickness may be 700-1000 µm, or more.

In some embodiments, one or more intervening layers 410 are disposed between carrier layer 405 and device layer 415.
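The roughly 600 nm separation quoted above is simple subtraction: the device-layer thickness minus the depth of the top-side region holding the devices. A trivial check, using mid-range values from the ranges cited in the text (the specific inputs are illustrative, not claimed dimensions):

```python
# Arithmetic behind the ~600 nm separation cited in the text: the
# device layer is thicker than the top-side region holding the devices,
# and the remainder separates those regions from the intervening layer.
# Input values are illustrative mid-range picks from the cited ranges.

def backside_separation_nm(layer_nm: float, region_nm: float) -> float:
    """Device-layer thickness left between the device regions and the
    intervening layer (the back-side thinning allowance)."""
    if region_nm > layer_nm:
        raise ValueError("device regions cannot exceed the layer thickness")
    return layer_nm - region_nm

# Mid-range values: 700-1000 nm device layer, 200-300 nm device regions.
print(backside_separation_nm(850.0, 250.0), "nm of separation")  # 600.0 nm
```

That remainder is the margin the back-side thinning steps later in the flow can consume before reaching functional device regions.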
In some exemplary embodiments, an intervening layer 410 is compositionally distinct from carrier layer 405 such that it may serve as a marker detectable during subsequent removal of carrier layer 405. In some such embodiments, an intervening layer 410 has a composition that, when exposed to an etchant of carrier layer 405, will etch at a significantly slower rate than carrier layer 405 (i.e., intervening layer 410 functions as an etch stop for a carrier layer etch process). In further embodiments, intervening layer 410 has a composition distinct from that of device layer 415. Intervening layer 410 may be a metal, semiconductor, or dielectric material, for example.

In some exemplary embodiments where at least one of carrier layer 405 and device layer 415 are crystalline semiconductors, intervening layer 410 is also a crystalline semiconductor layer. Intervening layer 410 may further have the same crystallinity and crystallographic orientation as carrier layer 405 and/or device layer 415. Such embodiments may have the advantage of reduced donor substrate cost relative to alternative embodiments where intervening layer 410 is a material that necessitates bonding (e.g., thermal-compression bonding) of intervening layer 410 to device layer 415 and/or to carrier layer 405.

For embodiments where intervening layer 410 is a semiconductor, one or more of the primary semiconductor lattice elements, alloy constituents, or impurity concentrations may vary between at least carrier layer 405 and intervening layer 410. In some embodiments where at least carrier layer 405 is a group IV semiconductor, intervening layer 410 may also be a group IV semiconductor, but of a different group IV element or alloy and/or doped with an impurity species to an impurity level different than that of carrier layer 405. For example, intervening layer 410 may be a silicon-germanium alloy epitaxially grown on a silicon carrier.
For such embodiments, a pseudomorphic intervening layer may be grown heteroepitaxially to any thickness below the critical thickness. Alternatively, the intervening layer 410 may be a relaxed buffer layer having a thickness greater than the critical thickness.

In other embodiments, where at least carrier layer 405 is a group III-V semiconductor, intervening layer 410 may also be a group III-V semiconductor, but of a different group III-V alloy and/or doped with an impurity species to an impurity level different than that of carrier layer 405. For example, intervening layer 410 may be an AlGaAs alloy epitaxially grown on a GaAs carrier. In some other embodiments where both carrier layer 405 and device layer 415 are crystalline semiconductors, intervening layer 410 is also a crystalline semiconductor layer, which may further have the same crystallinity and crystallographic orientation as carrier layer 405 and/or device layer 415.

In embodiments where both carrier layer 405 and intervening layer 410 are of the same or different primary semiconductor lattice elements, impurity dopants may differentiate the carrier and intervening layer. For example, intervening layer 410 and carrier layer 405 may both be silicon crystals with intervening layer 410 lacking an impurity present in carrier layer 405, or doped with an impurity absent from carrier layer 405, or doped to a different level with an impurity present in carrier layer 405. The impurity differentiation may impart etch selectivity between the carrier and intervening layer, or merely introduce a detectable species.

Intervening layer 410 may be doped with impurities that are electrically active (i.e., rendering it an n-type or p-type semiconductor), or not, as the impurity may provide any basis for detection of the intervening layer 410 during subsequent carrier removal. Exemplary electrically active impurities for some semiconductor materials include group III elements (e.g., B) and group V elements (e.g., P).
Any other element may be employed as a non-electrically active species. Impurity dopant concentration within intervening layer 410 need only vary from that of carrier layer 405 by an amount sufficient for detection, which may be predetermined as a function of the detection technique and detector sensitivity.

As described further elsewhere herein, intervening layer 410 may have a composition distinct from device layer 415. In some such embodiments, intervening layer 410 may have a different band gap than that of device layer 415. For example, intervening layer 410 may have a wider band-gap than device layer 415.

In embodiments where intervening layer 410 includes a dielectric material, the dielectric material may be an inorganic material (e.g., SiO, SiN, SiON, SiOC, hydrogen silsesquioxane, methyl silsesquioxane) or organic material (polyimide, polynorbornenes, benzocyclobutene). For some dielectric embodiments, intervening layer 410 may be formed as an embedded layer (e.g., SiOx through implantation of oxygen into a silicon device and/or carrier layer). Other embodiments of a dielectric intervening layer may necessitate bonding (e.g., thermal-compression bonding) of carrier layer 405 to device layer 415. For example, where donor substrate 401 is a semiconductor-on-oxide (SOI) substrate, either or both of carrier layer 405 and device layer 415 may be oxidized and bonded together to form a SiO intervening layer 410. Similar bonding techniques may be employed for other inorganic or organic dielectric materials.

In some other embodiments, intervening layer 410 includes two or more materials laterally spaced apart within the layer. The two or more materials may include a dielectric and a semiconductor, a dielectric and a metal, a semiconductor and a metal, two different dielectrics, two different semiconductors, or two different metals.
Within such an intervening layer, a first material may surround islands of the second material that extend through the thickness of the intervening layer. For example, an intervening layer may include a field isolation dielectric that surrounds islands of semiconductor, which extend through the thickness of the intervening layer. The semiconductor may be epitaxially grown within openings of a patterned dielectric or the dielectric material may be deposited within openings of a patterned semiconductor.

In some exemplary embodiments, semiconductor features, such as fins or mesas, are etched into a front-side surface of a semiconductor device layer. Trenches surrounding these features may be subsequently backfilled with an isolation dielectric, for example following any known shallow trench isolation (STI) process. One or more of the semiconductor feature or isolation dielectric may be employed for terminating a back-side carrier removal process, for example as a back-side reveal etch stop. In some embodiments, a reveal of trench isolation dielectric may stop, significantly retard, or induce a detectable signal for terminating a back-side carrier polish. For example, a CMP polish of carrier semiconductor employing a slurry that has high selectivity favoring removal of carrier semiconductor (e.g., Si) over removal of isolation dielectric (e.g., SiO) may be significantly slowed upon exposure of a (bottom) surface of the trench isolation dielectric surrounding semiconductor features including the device layer. Because the device layer is disposed on a front side of the intervening layer, the device layer need not be directly exposed to the back-side reveal process.

Notably, for embodiments where the intervening layer includes both semiconductor and dielectric, the intervening layer thickness may be considerably greater than the critical thickness associated with the lattice mismatch of the intervening layer and carrier.
Whereas an intervening layer below the critical thickness may be of insufficient thickness to accommodate non-uniformity of a wafer-level back-side reveal process, embodiments with greater thickness may advantageously increase the back-side reveal process window. Embodiments with pin-holed dielectric may otherwise facilitate subsequent separation of carrier and device layers as well as improve crystal quality within the device layer.

Semiconductor material within intervening layers that include both semiconductor and dielectric may also be homoepitaxial. In some exemplary embodiments, a silicon epitaxial device layer is grown through a pin-holed dielectric disposed over a silicon carrier layer.

Continuing with the description of Figures 4A and 5A, intervening layer 410 may also be a metal. For such embodiments, the metal may be of any composition known to be suitable for bonding to carrier layer 405 or device layer 415. For example, either or both of carrier layer 405 and device layer 415 may be finished with a metal, such as, but not limited to, Au or Pt, and subsequently bonded together, for example to form an Au or Pt intervening layer 410. Such a metal may also be part of an intervening layer that further includes a patterned dielectric surrounding metal features.

Intervening layer 410 may be of any thickness (e.g., z-height in Figure 5A). The intervening layer should be sufficiently thick to ensure the carrier removal operation can be reliably terminated before exposing device regions and/or device layer 415. Exemplary thicknesses for intervening layer 410 range from a few hundred nanometers to a few micrometers and may vary as a function of the amount of carrier material that is to be removed, the uniformity of the carrier removal process, and the selectivity of the carrier removal process, for example.
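The three sizing factors just listed (amount of carrier removed, removal uniformity, and selectivity) can be combined into a rough lower bound on intervening-layer thickness: fast regions of the wafer over-etch by roughly the removed thickness times the non-uniformity, and the intervening layer absorbs that divided by the selectivity. The sketch below is only a back-of-envelope sizing under that assumption; the removed thickness, uniformity, and safety factor are hypothetical, while the 5:1 selectivity sits inside the 3:1-10:1 range cited earlier in the text.

```python
# Rough sizing of the intervening-layer thickness from the factors the
# text lists: carrier material removed, removal non-uniformity, and
# carrier:intervening selectivity. All inputs are hypothetical except
# the selectivity, taken from the 3:1-10:1 range cited earlier.

def min_intervening_thickness_nm(carrier_removed_um: float,
                                 nonuniformity_frac: float,
                                 selectivity: float,
                                 safety: float = 2.0) -> float:
    """Fast regions over-etch by (removed thickness x non-uniformity);
    the intervening layer absorbs that divided by the selectivity."""
    overetch_nm = carrier_removed_um * 1000.0 * nonuniformity_frac
    return safety * overetch_nm / selectivity

# 75 um of carrier removed at +/-2% uniformity with 5:1 selectivity:
t = min_intervening_thickness_nm(75.0, 0.02, 5.0)
print(f"~{t:.0f} nm minimum intervening layer")  # ~600 nm
```

With these illustrative inputs the estimate lands within the few-hundred-nanometers-to-few-micrometers range the text gives, which is consistent with thickness scaling up as more carrier is removed, uniformity worsens, or selectivity drops.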
For embodiments where the intervening layer has the same crystallinity and crystallographic orientation as carrier layer 405, the carrier layer thickness may be reduced by the thickness of intervening layer 410. In other words, intervening layer 410 may be a top portion of a 700-1000 µm thick group IV crystalline semiconductor substrate also employed as the carrier layer. In pseudomorphic heteroepitaxial embodiments, intervening layer thickness may be limited to the critical thickness. For heteroepitaxial intervening layer embodiments employing aspect ratio trapping (ART) or another fully relaxed buffer architecture, the intervening layer may have any thickness.

As further illustrated in Figures 4B and 5B, donor substrate 401 may be joined to a host substrate 402 to form a donor-host substrate assembly 403. In some exemplary embodiments, a front-side surface of donor substrate 401 is joined to a surface of host substrate 402 such that device layer 415 is proximal host substrate 402 and carrier layer 405 is distal from host substrate 402. Host substrate 402 may be any substrate known to be suitable for joining to device layer 415 and/or a front-side stack fabricated over device layer 415. In some embodiments, host substrate 402 includes one or more additional device strata. For example, host substrate 402 may further include one or more device layers (not depicted). Host substrate 402 may include integrated circuitry with which the IC devices fabricated in a device layer of host substrate 402 are interconnected, in which case joining of device layer 415 to host substrate 402 may further entail formation of 3D interconnect structures through the wafer-level bond.

Although not depicted in detail by Figure 5B, any number of front-side layers, such as interconnect metallization levels and interlayer dielectric (ILD) layers, may be present between device layer 415 and host substrate 402. Any technique may be employed to join host substrate 402 and donor substrate 401.
In some exemplary embodiments further described elsewhere herein, the joining of donor substrate 401 to host substrate 402 is through metal-metal, oxide-oxide, or hybrid (metal/oxide-metal/oxide) thermal compression bonding.

With host substrate 402 facing device layer 415 on a side opposite carrier layer 405, at least a portion of carrier layer 405 may be removed, as further illustrated in Figures 4C and 5C. Where the entire carrier layer 405 is removed, donor-host substrate assembly 403 maintains a highly uniform thickness with planar back-side and front-side surfaces. Alternatively, carrier layer 405 may be masked and intervening layer 410 exposed only in unmasked sub-regions to form a non-planar back-side surface. In the exemplary embodiments illustrated by Figures 4C and 5C, carrier layer 405 is removed from the entire back-side surface of donor-host substrate assembly 403. Carrier layer 405 may be removed, for example, by cleaving, grinding, and/or polishing (e.g., chemical-mechanical polishing), and/or wet chemical etching, and/or plasma etching through a thickness of the carrier layer to expose intervening layer 410. One or more operations may be employed to remove carrier layer 405. Advantageously, the removal operation(s) may be terminated based on duration or an endpoint signal sensitive to exposure of intervening layer 410.

In further embodiments, for example as illustrated by Figures 4D and 5D, intervening layer 410 is also at least partially etched to expose a backside of device layer 415. At least a portion of intervening layer 410 may be removed subsequent to its use as a carrier layer etch stop and/or carrier layer etch endpoint trigger. Where the entire intervening layer 410 is removed, donor-host substrate assembly 403 maintains a highly uniform device layer thickness with planar back-side and front-side surfaces afforded by the intervening layer being much thinner than the carrier layer.
Alternatively, intervening layer 410 may be masked and device layer 415 exposed only in unmasked sub-regions, thereby forming a non-planar back-side surface. In the exemplary embodiments illustrated by Figures 4D and 5D, intervening layer 410 is removed from the entire back-side surface of donor-host substrate assembly 403. Intervening layer 410 may be so removed, for example, by polishing (e.g., chemical-mechanical polishing), and/or blanket wet chemical etching, and/or blanket plasma etching through a thickness of the intervening layer to expose device layer 415. One or more operations may be employed to remove intervening layer 410. Advantageously, the removal operation(s) may be terminated based on duration or an endpoint signal sensitive to exposure of device layer 415.

In some further embodiments, for example as illustrated by Figures 4E and 5E, device layer 415 is partially etched to expose a backside of a device structure previously formed during front-side processing. At least a portion of device layer 415 may be removed subsequent to its use in fabricating one or more of the device semiconductor regions, and/or its use as an intervening layer etch stop or endpoint trigger. Where device layer 415 is thinned over the entire substrate area, donor-host substrate assembly 403 maintains a highly uniform reduced thickness with planar back and front surfaces. Alternatively, device layer 415 may be masked and device structures (e.g., device semiconductor regions) selectively revealed only in unmasked sub-regions, thereby forming a non-planar backside surface. In the exemplary embodiments illustrated by Figures 4E and 5E, device layer 415 is thinned over the entire back-side surface of donor-host substrate assembly 403.
Device layer 415 may be thinned, for example, by polishing (e.g., chemical-mechanical polishing), and/or wet chemical etching, and/or plasma etching through a thickness of the device layer to expose one or more device semiconductor regions, and/or one or more other device structures (e.g., front-side device terminal contact metallization, spacer dielectric, etc.) previously formed during front-side processing. One or more operations may be employed to thin device layer 415. Advantageously, the device layer thinning may be terminated based on duration or an endpoint signal sensitive to exposure of patterned features within device layer 415. For example, where front-side processing forms device isolation features (e.g., shallow trench isolation), backside thinning of device layer 415 may be terminated upon exposing the isolation dielectric material.

A non-native material layer may be deposited over a back-side surface of an intervening layer, device layer, and/or specific device regions within device layer 415, and/or over one or more other device structures (e.g., front-side device terminal contact metallization, spacer dielectric, etc.). One or more materials exposed (revealed) from the backside may be covered with the non-native material layer or replaced with such a material. In some embodiments, illustrated by Figures 4F and 5F, non-native material layer 420 is deposited on device layer 415. Non-native material layer 420 may be any material having a composition and/or microstructure distinct from that of the material removed to reveal the backside of the device stratum. For example, where intervening layer 410 is removed to expose device layer 415, non-native material layer 420 may be another semiconductor of different composition or microstructure than that of intervening layer 410.
In some such embodiments where device layer 415 is a group III-N semiconductor, non-native material layer 420 may also be a group III-N semiconductor of the same or different composition that is regrown upon a revealed backside surface of a group III-N device region. This material may be epitaxially regrown from the revealed group III-N device region, for example, to have better crystal quality than that of the material removed, and/or to induce strain within the device layer and/or device regions within the device layer, and/or to form a vertical (e.g., z-dimension) stack of device semiconductor regions suitable for a stacked device.

In some other embodiments where device layer 415 is a group III-V semiconductor, non-native material layer 420 may also be a group III-V semiconductor of the same or different composition that is regrown upon a revealed backside surface of a group III-V device region. This material may be epitaxially regrown from the revealed group III-V device region, for example, to have relatively better crystal quality than that of the material removed, and/or to induce strain within the device layer or a specific device region within the device layer, and/or to form a vertical stack of device semiconductor regions suitable for a stacked device.

In some other embodiments where device layer 415 is a group IV semiconductor, non-native material layer 420 may also be a group IV semiconductor of the same or different composition that is regrown upon a revealed backside surface of a group IV device region.
This material may be epitaxially regrown from the revealed group IV device region, for example, to have relatively better crystal quality than that of the material removed, and/or to induce strain within the device region, and/or to form a stack of device semiconductor regions suitable for a stacked device.

In some other embodiments, non-native material layer 420 is a dielectric material, such as, but not limited to, SiO, SiON, SiOC, hydrogen silsesquioxane, methyl silsesquioxane, polyimide, polynorbornenes, benzocyclobutene, or the like. Deposition of such a dielectric may serve to electrically isolate various device structures, such as semiconductor device regions, that may have been previously formed during front-side processing of donor substrate 401.

In some other embodiments, non-native material layer 420 is a conductive material, such as any elemental metal or metal alloy known to be suitable for contacting one or more surfaces of device regions revealed from the backside. In some embodiments, non-native material layer 420 is a metallization suitable for contacting a device region revealed from the backside, such as a transistor source or drain region. In embodiments, intermetallic contacts such as NixSiy, TixSiy, Ni:Si:Pt, TiSi, CoSi, etc. may be formed. Additionally, implants may be used to enable robust contacts (e.g., P, Ge, B, etc.).

In some embodiments, non-native material layer 420 is a stack of materials, such as a FET gate stack that includes both a gate dielectric layer and a gate electrode layer. As one example, non-native material layer 420 may be a gate dielectric stack suitable for contacting a semiconductor device region revealed from the backside, such as a transistor channel region. Any of the other materials described as options for device layer 415 may also be deposited over a backside of device layer 415 and/or over device regions formed within device layer 415.
For example, non-native material layer 420 may be any of the oxide semiconductors, TMDC, or tunneling materials described above, which may be deposited on the back-side, for example, to incrementally fabricate vertically-stacked device strata.

Back-side wafer-level processing may continue in any manner known to be suitable for front-side processing. For example, non-native material layer 420 may be patterned into active device regions, device isolation regions, device contact metallization, or device interconnects using any known lithographic and etch techniques. Back-side wafer-level processing may further fabricate one or more interconnect metallization levels coupling terminals of different devices into an IC. In some embodiments further described elsewhere herein, back-side processing may be employed to interconnect a power bus to various device terminals within an IC.

In some embodiments, back-side processing includes bonding to a secondary host substrate. Such bonding may employ any layer transfer process to join the back-side (e.g., non-native) material layer to another substrate. Following such joining, the former host substrate may be removed as a sacrificial donor to re-expose the front-side stack and/or the front side of the device layer. Such embodiments may enable iterative side-to-side lamination of device strata with a first device layer serving as the core of the assembly. In some embodiments illustrated in Figures 4G and 5G, secondary host substrate 440 joined to non-native material layer 420 provides at least mechanical support while host substrate 402 is removed.

Any bonding, such as, but not limited to, thermal-compression bonding may be employed to join secondary host substrate 440 to non-native material layer 420. In some embodiments, both a surface layer of secondary host substrate 440 and non-native material layer 420 are continuous dielectric layers (e.g., SiO), which are thermal-compression bonded.
In some other embodiments, both a surface layer of secondary host substrate 440 and non-native material layer 420 include a metal layer (e.g., Au, Pt, etc.), which are thermal-compression bonded. In other embodiments, at least one of the surface layer of secondary host substrate 440 and non-native material layer 420 is patterned, including both a patterned metal surface (i.e., traces) and surrounding dielectric (e.g., isolation), which are thermal-compression bonded to form a hybrid (e.g., metal/oxide) joint. For such embodiments, structural features in the secondary host substrate 440 and the patterned non-native material layer 420 are aligned (e.g., optically) during the bonding process. In some embodiments, non-native material layer 420 includes one or more conductive back-side traces coupled to a terminal of a transistor fabricated in device layer 415. The conductive back-side trace may, for example, be bonded to metallization on secondary host substrate 440.

Bonding of device strata may proceed from the front side and/or back side of a device layer before or after front-side processing of the device layer has been completed. A back-side bonding process may be performed after front-side fabrication of a device (e.g., transistor) is substantially complete. Alternatively, the back-side bonding process may be performed prior to completing front-side fabrication of a device (e.g., transistor), in which case the front side of the device layer may receive additional processing following the back-side bonding process. As further illustrated in Figures 4H and 5H, for example, front-side processing includes removal of host substrate 402 (as a second donor substrate) to re-expose the front side of device layer 415.
At this point, donor-host substrate assembly 403 includes secondary host 440 joined to device layer 415 through non-native material layer 420.

In another aspect, the integrated circuit structures described above in association with Figures 1D and/or 2C and/or structure 350 of Figure 3A and/or structure 370 of Figure 3B can be co-integrated with other backside revealed integrated circuit structures, such as neighboring semiconductor structures or devices separated by self-aligned gate endcap (SAGE) structures. Particular embodiments may be directed to integration of multiple width (multi-Wsi) nanowires and nanoribbons in a SAGE architecture and separated by a SAGE wall. In an embodiment, nanowires/nanoribbons are integrated with multiple Wsi in a SAGE architecture portion of a front-end process flow. Such a process flow may involve integration of nanowires and nanoribbons of different Wsi to provide robust functionality of next generation transistors with low power and high performance. Associated epitaxial source or drain regions may be embedded (e.g., portions of nanowires removed and then source or drain (S/D) growth is performed).

To provide further context, advantages of a self-aligned gate endcap (SAGE) architecture may include the enabling of higher layout density and, in particular, scaling of diffusion to diffusion spacing. To provide illustrative comparison, Figure 6 illustrates a cross-sectional view taken through nanowires and fins for a non-endcap architecture, in accordance with an embodiment of the present disclosure. Figure 7 illustrates a cross-sectional view taken through nanowires and fins for a self-aligned gate endcap (SAGE) architecture, in accordance with an embodiment of the present disclosure.

Referring to Figure 6, an integrated circuit structure 600 includes a substrate 602 having fins 604 protruding therefrom by an amount 606 above an isolation structure 608 laterally surrounding lower portions of the fins 604.
Upper portions of the fins may include a local isolation structure 622 and a growth enhancement layer 620, as is depicted. Corresponding nanowires 605 are over the fins 604. A gate structure may be formed over the integrated circuit structure 600 to fabricate a device. However, breaks in such a gate structure may be accommodated by increasing the spacing between fin 604/nanowire 605 pairs.

Referring to Figure 6, in an embodiment, following gate formation, the lower portions of the structure 600 can be planarized and/or etched to level 634 in order to leave a backside surface including exposed bottom surfaces of gate structures and epitaxial source or drain structures. It is to be appreciated that backside (bottom) contacts may be formed on the exposed bottom surfaces of the epitaxial source or drain structures. It is also to be appreciated that planarization and/or etching could be to other levels, such as 630 or 632.

By contrast, referring to Figure 7, an integrated circuit structure 750 includes a substrate 752 having fins 754 protruding therefrom by an amount 756 above an isolation structure 758 laterally surrounding lower portions of the fins 754. Upper portions of the fins may include a local isolation structure 772 and a growth enhancement layer 770, as is depicted. Corresponding nanowires 755 are over the fins 754. Isolating SAGE walls 760 (which may include a hardmask thereon, as depicted) are included within the isolation structure 758 and between adjacent fin 754/nanowire 755 pairs. The distance between an isolating SAGE wall 760 and a nearest fin 754/nanowire 755 pair defines the gate endcap spacing 762. A gate structure may be formed over the integrated circuit structure 750, between isolating SAGE walls, to fabricate a device. Breaks in such a gate structure are imposed by the isolating SAGE walls.
Since the isolating SAGE walls 760 are self-aligned, restrictions from conventional approaches can be minimized to enable more aggressive diffusion to diffusion spacing. Furthermore, since gate structures include breaks at all locations, individual gate structure portions may be layer connected by local interconnects formed over the isolating SAGE walls 760. In an embodiment, as depicted, the isolating SAGE walls 760 each include a lower dielectric portion and a dielectric cap on the lower dielectric portion.

Referring to Figure 7, in an embodiment, following gate formation, the lower portions of the structure 750 can be planarized and/or etched to level 784 in order to leave a backside surface including exposed bottom surfaces of gate structures and epitaxial source or drain structures. It is to be appreciated that backside (bottom) contacts may be formed on the exposed bottom surfaces of the epitaxial source or drain structures. It is also to be appreciated that planarization and/or etching could be to other levels, such as 780 or 782.

A self-aligned gate endcap (SAGE) processing scheme involves the formation of gate/trench contact endcaps self-aligned to fins without requiring an extra length to account for mask mis-registration. Thus, embodiments may be implemented to enable shrinking of transistor layout area. Embodiments described herein may involve the fabrication of gate endcap isolation structures, which may also be referred to as gate walls, isolation gate walls, or self-aligned gate endcap (SAGE) walls.

In an embodiment, as described throughout, self-aligned gate endcap (SAGE) isolation structures may be composed of a material or materials suitable to ultimately electrically isolate, or contribute to the isolation of, portions of permanent gate structures from one another. Exemplary materials or material combinations include a single material structure such as silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride.
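The area benefit of eliminating the mask mis-registration margin can be illustrated with a simple spacing comparison. The function and all dimensions below are hypothetical illustrations, not values from the source.

```python
# Hypothetical diffusion-to-diffusion spacing comparison motivating SAGE
# (all dimensions illustrative, not from the source).

def diffusion_to_diffusion_spacing_nm(endcap_nm, misregistration_nm, wall_nm,
                                      self_aligned):
    """Minimum spacing between neighboring diffusion regions, in nm."""
    if self_aligned:
        # SAGE: two gate endcap spacings plus the isolating wall width,
        # with no extra length for mask mis-registration.
        return 2 * endcap_nm + wall_nm
    # Conventional: each endcap must add a mis-registration margin.
    return 2 * (endcap_nm + misregistration_nm) + wall_nm

# With a 15 nm endcap, 10 nm mis-registration margin, and 20 nm wall,
# the self-aligned case saves 2 * 10 nm of spacing per diffusion gap.
conventional = diffusion_to_diffusion_spacing_nm(15, 10, 20, False)
sage = diffusion_to_diffusion_spacing_nm(15, 10, 20, True)
print(conventional, sage)
```

The saving scales with the number of diffusion gaps in a layout, which is why the description above ties SAGE to higher layout density.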
Other exemplary materials or material combinations include a multi-layer stack having a lower portion of silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride and an upper portion of a higher dielectric constant material such as hafnium oxide.

It is to be appreciated that the integrated circuit structures described above in association with Figures 1D and/or 2C and/or structure 350 of Figure 3A and/or structure 370 of Figure 3B can be co-integrated with other backside revealed integrated circuit structures, such as other nanowire or nanoribbon based devices. Additionally or alternatively, other integrated circuit structures can be fabricated using processes described in association with Figures 1A-1D and/or 2A-2C. To highlight an exemplary integrated circuit structure having three vertically arranged nanowires, Figure 8A illustrates a three-dimensional cross-sectional view of a nanowire-based integrated circuit structure, in accordance with an embodiment of the present disclosure. Figure 8B illustrates a cross-sectional source or drain view of the nanowire-based integrated circuit structure of Figure 8A, as taken along the a-a' axis. Figure 8C illustrates a cross-sectional channel view of the nanowire-based integrated circuit structure of Figure 8A, as taken along the b-b' axis.

Referring to Figure 8A, an integrated circuit structure 800 includes one or more vertically stacked nanowires (804 set) above a substrate 802. In an embodiment, as depicted, a local isolation structure 802C, a growth enhancement layer 802B, and a lower substrate portion 802A are included in substrate 802. An optional fin below the bottommost nanowire and formed from the substrate 802 is not depicted for the sake of emphasizing the nanowire portion for illustrative purposes. Embodiments herein are targeted at both single wire devices and multiple wire devices.
As an example, a three-nanowire device having nanowires 804A, 804B and 804C is shown for illustrative purposes. For convenience of description, nanowire 804A is used as an example where description is focused on one of the nanowires. It is to be appreciated that where attributes of one nanowire are described, embodiments based on a plurality of nanowires may have the same or essentially the same attributes for each of the nanowires.

Each of the nanowires 804 includes a channel region 806 in the nanowire. The channel region 806 has a length (L). Referring to Figure 8C, the channel region also has a perimeter (Pc) orthogonal to the length (L). Referring to both Figures 8A and 8C, a gate electrode stack 808 surrounds the entire perimeter (Pc) of each of the channel regions 806. The gate electrode stack 808 includes a gate electrode along with a gate dielectric layer between the channel region 806 and the gate electrode (not shown). In an embodiment, the channel region is discrete in that it is completely surrounded by the gate electrode stack 808 without any intervening material such as underlying substrate material or overlying channel fabrication materials. Accordingly, in embodiments having a plurality of nanowires 804, the channel regions 806 of the nanowires are also discrete relative to one another. In accordance with an embodiment of the present disclosure, a portion of the gate electrode stack 808 can be removed from the bottom side of device 800, e.g., for capacitance reduction, according to a process described above in association with Figures 2A-2C and 3B.

Referring to both Figures 8A and 8B, integrated circuit structure 800 includes a pair of non-discrete source or drain regions 810/812. The pair of non-discrete source or drain regions 810/812 is on either side of the channel regions 806 of the plurality of vertically stacked nanowires 804.
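Because the gate electrode stack surrounds the entire perimeter Pc of each discrete channel region, a stack of N nanowires contributes roughly N times Pc of effective gate width. The sketch below illustrates that geometry with hypothetical dimensions not taken from the source.

```python
# Geometry sketch for the gate-all-around channel regions described above
# (dimensions are illustrative, not from the source).

def channel_perimeter_nm(wc_nm, hc_nm):
    """Perimeter Pc of a channel cross-section of width Wc and height Hc,
    using a rectangular approximation (corner rounding ignored)."""
    return 2 * (wc_nm + hc_nm)

def effective_gate_width_nm(n_wires, wc_nm, hc_nm):
    """Approximate effective gate width of n_wires vertically stacked
    nanowires, each fully surrounded by the gate electrode stack."""
    return n_wires * channel_perimeter_nm(wc_nm, hc_nm)

# Three stacked square-like wires with Wc ~ Hc ~ 10 nm (below the ~20 nm
# smallest-dimension bound mentioned later in the description):
print(effective_gate_width_nm(3, 10, 10))
```

This is why a vertically stacked arrangement can deliver substantial drive width within a small layout footprint.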
Furthermore, the pair of non-discrete source or drain regions 810/812 is adjoining for the channel regions 806 of the plurality of vertically stacked nanowires 804. In one such embodiment, not depicted, the pair of non-discrete source or drain regions 810/812 is directly vertically adjoining for the channel regions 806 in that epitaxial growth is on and between nanowire portions extending beyond the channel regions 806, where nanowire ends are shown within the source or drain structures. In another embodiment, as depicted in Figure 8A, the pair of non-discrete source or drain regions 810/812 is indirectly vertically adjoining for the channel regions 806 in that they are formed at the ends of the nanowires and not between the nanowires.

In an embodiment, as depicted, the source or drain regions 810/812 are non-discrete in that there are not individual and discrete source or drain regions for each channel region 806 of a nanowire 804. Accordingly, in embodiments having a plurality of nanowires 804, the source or drain regions 810/812 of the nanowires are global or unified source or drain regions as opposed to discrete for each nanowire. That is, the non-discrete source or drain regions 810/812 are global in the sense that a single unified feature is used as a source or drain region for a plurality (in this case, 3) of nanowires 804 and, more particularly, for more than one discrete channel region 806. In one embodiment, from a cross-sectional perspective orthogonal to the length of the discrete channel regions 806, each of the pair of non-discrete source or drain regions 810/812 is approximately rectangular in shape with a bottom tapered portion and a top vertex portion, as depicted in Figure 8B.
In other embodiments, however, the source or drain regions 810/812 of the nanowires are relatively larger yet discrete non-vertically merged epitaxial structures such as nubs.

In accordance with an embodiment of the present disclosure, and as depicted in Figures 8A and 8B, integrated circuit structure 800 further includes a pair of contacts 814, each contact 814 on one of the pair of non-discrete source or drain regions 810/812. In one such embodiment, in a vertical sense, each contact 814 completely surrounds the respective non-discrete source or drain region 810/812. In another aspect, the entire perimeter of the non-discrete source or drain regions 810/812 may not be accessible for contact with contacts 814, and the contact 814 thus only partially surrounds the non-discrete source or drain regions 810/812, as depicted in Figure 8B. In a contrasting embodiment, not depicted, the entire perimeter of the non-discrete source or drain regions 810/812, as taken along the a-a' axis, is surrounded by the contacts 814. In accordance with an embodiment of the present disclosure, a portion of the contacts 814 can be removed from the bottom side of device 800, e.g., for capacitance reduction and/or for epi-splitting, according to a process described above in association with Figures 1A-1D and 3A.

Referring again to Figure 8A, in an embodiment, integrated circuit structure 800 further includes a pair of spacers 816. As is depicted, outer portions of the pair of spacers 816 may overlap portions of the non-discrete source or drain regions 810/812, providing for "embedded" portions of the non-discrete source or drain regions 810/812 beneath the pair of spacers 816. As is also depicted, the embedded portions of the non-discrete source or drain regions 810/812 may not extend beneath the entirety of the pair of spacers 816.

Substrate 802 may be composed of a material suitable for integrated circuit structure fabrication.
In one embodiment, substrate 802 includes a lower bulk substrate composed of a single crystal of a material which may include, but is not limited to, silicon, germanium, silicon-germanium, germanium-tin, silicon-germanium-tin, or a group III-V compound semiconductor material. An upper insulator layer composed of a material which may include, but is not limited to, silicon dioxide, silicon nitride or silicon oxy-nitride is on the lower bulk substrate. Thus, the structure 800 may be fabricated from a starting semiconductor-on-insulator substrate. Alternatively, the structure 800 is formed directly from a bulk substrate and local oxidation is used to form electrically insulative portions in place of the above described upper insulator layer. In another alternative embodiment, the structure 800 is formed directly from a bulk substrate and doping is used to form electrically isolated active regions, such as nanowires, thereon. In one such embodiment, the first nanowire (i.e., proximate the substrate) is in the form of an omega-FET type structure.

In an embodiment, the nanowires 804 may be sized as wires or ribbons, as described below, and may have squared-off or rounder corners. In an embodiment, the nanowires 804 are composed of a material such as, but not limited to, silicon, germanium, or a combination thereof. In one such embodiment, the nanowires are single-crystalline. For example, for a silicon nanowire 804, a single-crystalline nanowire may be based from a (100) global orientation, e.g., with a <100> plane in the z-direction. As described below, other orientations may also be considered. In an embodiment, the dimensions of the nanowires 804, from a cross-sectional perspective, are on the nano-scale. For example, in a specific embodiment, the smallest dimension of the nanowires 804 is less than approximately 20 nanometers.
In an embodiment, the nanowires 804 are composed of a strained material, particularly in the channel regions 806.

Referring to Figure 8C, in an embodiment, each of the channel regions 806 has a width (Wc) and a height (Hc), the width (Wc) approximately the same as the height (Hc). That is, in both cases, the channel regions 806 are square-like or, if corner-rounded, circle-like in cross-section profile. In another aspect, the width and height of the channel region need not be the same, such as the case for nanoribbons as described throughout.

Referring again to Figures 8A, 8B and 8C, in an embodiment, the lower portions of the structure 800 can be planarized and/or etched to level 899 in order to leave a backside surface including exposed bottom surfaces of gate structures and epitaxial source or drain structures. It is to be appreciated that backside (bottom) contacts may be formed on the exposed bottom surfaces of the epitaxial source or drain structures.

In an embodiment, as described throughout, an integrated circuit structure includes non-planar devices such as, but not limited to, a finFET or a tri-gate structure with corresponding one or more overlying nanowire structures, and an isolation structure between the finFET or tri-gate structure and the corresponding one or more overlying nanowire structures. In some embodiments, the finFET or tri-gate structure is retained. In other embodiments, the finFET or tri-gate structure may ultimately be removed in a substrate removal process.

Embodiments disclosed herein may be used to manufacture a wide variety of different types of integrated circuits and/or microelectronic devices. Examples of such integrated circuits include, but are not limited to, processors, chipset components, graphics processors, digital signal processors, micro-controllers, and the like. In other embodiments, semiconductor memory may be manufactured.
Moreover, the integrated circuits or other microelectronic devices may be used in a wide variety of electronic devices known in the arts, for example, in computer systems (e.g., desktop, laptop, server), cellular phones, personal electronics, etc. The integrated circuits may be coupled with a bus and other components in the systems. For example, a processor may be coupled by one or more buses to a memory, a chipset, etc. Each of the processor, the memory, and the chipset may potentially be manufactured using the approaches disclosed herein.

Figure 9 illustrates a computing device 900 in accordance with one implementation of an embodiment of the present disclosure. The computing device 900 houses a board 902. The board 902 may include a number of components, including but not limited to a processor 904 and at least one communication chip 906. The processor 904 is physically and electrically coupled to the board 902. In some implementations the at least one communication chip 906 is also physically and electrically coupled to the board 902. In further implementations, the communication chip 906 is part of the processor 904.

Depending on its applications, computing device 900 may include other components that may or may not be physically and electrically coupled to the board 902. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The communication chip 906 enables wireless communications for the transfer of data to and from the computing device 900.
The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 906 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 900 may include a plurality of communication chips 906. For instance, a first communication chip 906 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 906 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 904 of the computing device 900 includes an integrated circuit die packaged within the processor 904. The integrated circuit die of the processor 904 may include one or more structures, such as integrated circuit structures built in accordance with implementations of embodiments of the present disclosure. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

The communication chip 906 also includes an integrated circuit die packaged within the communication chip 906.
The integrated circuit die of the communication chip 906 may include one or more structures, such as integrated circuit structures built in accordance with implementations of embodiments of the present disclosure.

In further implementations, another component housed within the computing device 900 may contain an integrated circuit die that includes one or more structures, such as integrated circuit structures built in accordance with implementations of embodiments of the present disclosure.

In various implementations, the computing device 900 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 900 may be any other electronic device that processes data.

Figure 10 illustrates an interposer 1000 that includes one or more embodiments of the present disclosure. The interposer 1000 is an intervening substrate used to bridge a first substrate 1002 to a second substrate 1004. The first substrate 1002 may be, for instance, an integrated circuit die. The second substrate 1004 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of an interposer 1000 is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, an interposer 1000 may couple an integrated circuit die to a ball grid array (BGA) 1006 that can subsequently be coupled to the second substrate 1004. In some embodiments, the first and second substrates 1002/1004 are attached to opposing sides of the interposer 1000. In other embodiments, the first and second substrates 1002/1004 are attached to the same side of the interposer 1000.
And in further embodiments, three or more substrates are interconnected by way of the interposer 1000.

The interposer 1000 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In further implementations, the interposer 1000 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials.

The interposer 1000 may include metal interconnects 1008 and vias 1010, including but not limited to through-silicon vias (TSVs) 1012. The interposer 1000 may further include embedded devices 1014, including both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the interposer 1000. In accordance with embodiments of the disclosure, apparatuses or processes disclosed herein may be used in the fabrication of the interposer 1000 or in the fabrication of components included in the interposer 1000.

Thus, embodiments of the present disclosure include integrated circuit structures having a backside gate partial cut or backside trench contact partial cut and/or split epitaxial structure, and methods of fabricating integrated circuit structures having a backside gate partial cut or backside trench contact partial cut and/or split epitaxial structure.

The above description of illustrated implementations of embodiments of the disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed.
While specific implementations of, and examples for, the disclosure are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize. These modifications may be made to the disclosure in light of the above detailed description. The terms used in the following claims should not be construed to limit the disclosure to the specific implementations disclosed in the specification and the claims. Rather, the scope of the disclosure is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Example embodiment 1: An integrated circuit structure includes a first sub-fin structure over a first stack of nanowires. A second sub-fin structure is over a second stack of nanowires. The integrated circuit structure also includes a gate electrode. A first portion of the gate electrode is around the first stack of nanowires, a second portion of the gate electrode is around the second stack of nanowires, and a third portion of the gate electrode bridges the first and second portions of the gate electrode. The integrated circuit structure also includes a dielectric structure between the first portion of the gate electrode and the second portion of the gate electrode, the dielectric structure over the third portion of the gate electrode.
The dielectric structure is continuous along the first and second portions of the gate electrode and the first and second sub-fin structures.

Example embodiment 2: The integrated circuit structure of example embodiment 1, wherein the first, second and third portions of the gate electrode are in direct contact with the dielectric structure.

Example embodiment 3: The integrated circuit structure of example embodiment 1 or 2, wherein a gate dielectric layer separates the first portion of the gate electrode from the first stack of nanowires, and separates the second portion of the gate electrode from the second stack of nanowires.

Example embodiment 4: The integrated circuit structure of example embodiment 1, 2 or 3, wherein the first and second sub-fin structures are semiconductor sub-fin structures.

Example embodiment 5: The integrated circuit structure of example embodiment 1, 2 or 3, wherein the first and second sub-fin structures are insulator sub-fin structures.

Example embodiment 6: A computing device includes a board, and a component coupled to the board. The component includes an integrated circuit structure including a first sub-fin structure over a first stack of nanowires. A second sub-fin structure is over a second stack of nanowires. The integrated circuit structure also includes a gate electrode. A first portion of the gate electrode is around the first stack of nanowires, a second portion of the gate electrode is around the second stack of nanowires, and a third portion of the gate electrode bridges the first and second portions of the gate electrode. The integrated circuit structure also includes a dielectric structure between the first portion of the gate electrode and the second portion of the gate electrode, the dielectric structure over the third portion of the gate electrode.
The dielectric structure is continuous along the first and second portions of the gate electrode and the first and second sub-fin structures.

Example embodiment 7: The computing device of example embodiment 6, further including a memory coupled to the board.

Example embodiment 8: The computing device of example embodiment 6 or 7, further including a communication chip coupled to the board.

Example embodiment 9: The computing device of example embodiment 6, 7 or 8, wherein the component is a packaged integrated circuit die.

Example embodiment 10: The computing device of example embodiment 6, 7, 8 or 9, wherein the component is selected from the group consisting of a processor, a communications chip, and a digital signal processor.

Example embodiment 11: An integrated circuit structure includes a first sub-fin structure over a first epitaxial source or drain structure. A second sub-fin structure is over a second epitaxial source or drain structure. The integrated circuit structure also includes a conductive contact structure. A first portion of the conductive contact structure is beneath the first epitaxial source or drain structure, a second portion of the conductive contact structure is beneath the second epitaxial source or drain structure, and a third portion of the conductive contact structure bridges the first and second portions of the conductive contact structure. The integrated circuit structure also includes a dielectric structure between the first portion of the conductive contact structure and the second portion of the conductive contact structure.
The dielectric structure is over the third portion of the conductive contact structure, and is continuous along the first and second portions of the conductive contact structure and the first and second sub-fin structures.

Example embodiment 12: The integrated circuit structure of example embodiment 11, wherein the conductive contact structure is in direct contact with the dielectric structure.

Example embodiment 13: The integrated circuit structure of example embodiment 11 or 12, wherein the first and second epitaxial source or drain structures are coupled to one or more stacks of nanowires.

Example embodiment 14: The integrated circuit structure of example embodiment 11, 12 or 13, wherein the first and second sub-fin structures are semiconductor sub-fin structures.

Example embodiment 15: The integrated circuit structure of example embodiment 11, 12 or 13, wherein the first and second sub-fin structures are insulator sub-fin structures.

Example embodiment 16: A computing device includes a board, and a component coupled to the board. The component includes an integrated circuit structure including a first sub-fin structure over a first epitaxial source or drain structure. A second sub-fin structure is over a second epitaxial source or drain structure. The integrated circuit structure also includes a conductive contact structure. A first portion of the conductive contact structure is beneath the first epitaxial source or drain structure, a second portion of the conductive contact structure is beneath the second epitaxial source or drain structure, and a third portion of the conductive contact structure bridges the first and second portions of the conductive contact structure. The integrated circuit structure also includes a dielectric structure between the first portion of the conductive contact structure and the second portion of the conductive contact structure.
The dielectric structure is over the third portion of the conductive contact structure, and is continuous along the first and second portions of the conductive contact structure and the first and second sub-fin structures.

Example embodiment 17: The computing device of example embodiment 16, further including a memory coupled to the board.

Example embodiment 18: The computing device of example embodiment 16 or 17, further including a communication chip coupled to the board.

Example embodiment 19: The computing device of example embodiment 16, 17 or 18, wherein the component is a packaged integrated circuit die.

Example embodiment 20: The computing device of example embodiment 16, 17, 18 or 19, wherein the component is selected from the group consisting of a processor, a communications chip, and a digital signal processor.
Some examples described herein provide for a heterogeneous integration module (HIM) that includes a thermal management apparatus. In an example, an apparatus (e.g., a HIM) includes a wiring substrate, a first component, a second component, and a thermal management apparatus. The first component and the second component are communicatively coupled together via the wiring substrate. The thermal management apparatus is in thermal communication with the first component and the second component. The thermal management apparatus has a first thermal energy flow path for dissipating thermal energy generated by the first component and has a second thermal energy flow path for dissipating thermal energy generated by the second component. The first thermal energy flow path has a lower thermal resistivity than the second thermal energy flow path.
CLAIMS

What is claimed is:

1. An apparatus comprising: a wiring substrate; a first component; a second component, the first component and the second component being communicatively coupled together via the wiring substrate; and a thermal management apparatus in thermal communication with the first component and the second component, the thermal management apparatus having a first thermal energy flow path for dissipating thermal energy generated by the first component and having a second thermal energy flow path for dissipating thermal energy generated by the second component, the first thermal energy flow path having a lower thermal resistivity than the second thermal energy flow path.

2. The apparatus of claim 1, wherein the first component is an optical device, photonic device, or a combination thereof, and the second component is an electrical device.

3. The apparatus of claim 1, wherein: a first thermal interface material is disposed on the first component; a second thermal interface material is disposed on the second component; and the thermal management apparatus is disposed on and contacting the first thermal interface material and the second thermal interface material.

4. The apparatus of claim 3, wherein: the thermal management apparatus comprises: a main portion; an integral island portion integrally formed with the main portion; a separate island attached to the main portion; and a third thermal interface material disposed between the separate island and the main portion; the integral island portion contacts the first thermal interface material; the first thermal energy flow path is through the integral island portion and the main portion; the separate island contacts the second thermal interface material; and the second thermal energy flow path is through the separate island, the third thermal interface material, and the main portion.

5.
The apparatus of claim 3, wherein: the thermal management apparatus comprises: a main portion; a first separate island attached to the main portion; a third thermal interface material disposed between the first separate island and the main portion; a second separate island attached to the main portion; and a fourth thermal interface material disposed between the second separate island and the main portion; the first separate island contacts the first thermal interface material; the first thermal energy flow path is through the first separate island, the third thermal interface material, and the main portion; the second separate island contacts the second thermal interface material; and the second thermal energy flow path is through the second separate island, the fourth thermal interface material, and the main portion.

6. The apparatus of claim 1 further comprising a heat exchanger attached to the thermal management apparatus, the heat exchanger including a fluid pump, a compressor, or a combination thereof.

7. The apparatus of claim 1 further comprising: a package substrate; and a stiffener mechanically attached to the package substrate; and wherein: the wiring substrate is an interposer; the first component and the second component are each attached to the interposer; the interposer is attached to the package substrate; the stiffener is laterally around the interposer; the thermal management apparatus includes a main portion, a support portion extending perpendicularly from the main portion, and a flange portion extending perpendicularly from the support portion and away from the main portion; the main portion is in thermal communication with the first component and the second component; and the flange portion is attached to the stiffener.

8.
A system comprising: a heterogeneous integration module comprising: a wiring substrate; a first component attached to the wiring substrate; a second component attached to the wiring substrate, the first component and the second component being communicatively coupled together through the wiring substrate; a first thermal interface material disposed on the first component; a second thermal interface material disposed on the second component; and a thermal management apparatus contacting the first thermal interface material and the second thermal interface material, the thermal management apparatus having a first thermal energy flow path from where the thermal management apparatus contacts the first thermal interface material and having a second thermal energy flow path from where the thermal management apparatus contacts the second thermal interface material, the first thermal energy flow path having a lower thermal resistivity than the second thermal energy flow path.

9. The system of claim 8, wherein the first component is an optical device, a photonic device, or a combination thereof, and the second component is an electrical device.

10. The system of claim 8, wherein: the thermal management apparatus comprises: a main portion; an integral island portion integrally formed with the main portion; a separate island attached to the main portion; and a third thermal interface material disposed between the separate island and the main portion; the integral island portion contacts the first thermal interface material; the first thermal energy flow path is through the integral island portion and the main portion; the separate island contacts the second thermal interface material; and the second thermal energy flow path is through the separate island, the third thermal interface material, and the main portion.

11.
The system of claim 8, wherein: the thermal management apparatus comprises: a main portion; a first separate island attached to the main portion; a third thermal interface material disposed between the first separate island and the main portion; a second separate island attached to the main portion; and a fourth thermal interface material disposed between the second separate island and the main portion; the first separate island contacts the first thermal interface material; the first thermal energy flow path is through the first separate island, the third thermal interface material, and the main portion; the second separate island contacts the second thermal interface material; and the second thermal energy flow path is through the second separate island, the fourth thermal interface material, and the main portion.

12. The system of claim 8 further comprising a printed circuit board, the heterogeneous integration module being attached to the printed circuit board.

13. The system of claim 12 further comprising: a first heat exchanger attached to the thermal management apparatus, the first heat exchanger comprising a fluid pump, a compressor, or a combination thereof; and a second heat exchanger disposed on the printed circuit board, the second heat exchanger comprising a serpentine pipe and fins, the serpentine pipe being attached to and extending through the fins, the serpentine pipe being fluidly coupled with the first heat exchanger.

14. The system of claim 13, wherein the first heat exchanger includes an internal volume through which fluid is to flow, the internal volume permitting pooling of the fluid in operation.

15.
The system of claim 8, wherein the heterogeneous integration module further comprises: a package substrate; and a stiffener mechanically attached to the package substrate; and wherein: the wiring substrate is an interposer; the first component and the second component are each attached to the interposer; the interposer is attached to the package substrate; the stiffener is laterally around the interposer; the thermal management apparatus includes a main portion, a support portion extending perpendicularly from the main portion, and a flange portion extending perpendicularly from the support portion and away from the main portion; the main portion is in thermal communication with the first component and the second component; and the flange portion is attached to the stiffener.

16. A method for forming a heterogeneous integration module, the method comprising: assembling a first component and a second component on a wiring substrate; and securing a thermal management apparatus in thermal communication with the first component and the second component, the thermal management apparatus having a first thermal energy flow path for thermal energy generated by the first component and having a second thermal energy flow path for thermal energy generated by the second component, the first thermal energy flow path having a lower thermal resistivity than the second thermal energy flow path.
HETEROGENEOUS INTEGRATION MODULE COMPRISING THERMAL MANAGEMENT APPARATUS

TECHNICAL FIELD

This invention was made with U.S. Government support under Agreement No. HR0011-19-3-0004, awarded by the Defense Advanced Research Projects Agency. The U.S. Government has certain rights in the invention.

Examples of the present disclosure generally relate to a heterogeneous integration module comprising a thermal management apparatus.

BACKGROUND

Electronic devices, such as those included in tablets, computers, copiers, digital cameras, smart phones, control systems, and automated teller machines, among others, often include integrated circuit die(s) for some desired functionality. Dies can consume various amounts of electrical power. By consuming electrical power, dies can generate thermal energy. The thermal energy can accumulate in the die if it is not dissipated. If thermal energy accumulates to too great a level and the die becomes too hot, deleterious effects may occur. For example, physical characteristics of devices in the die may be altered by excessive temperatures. As an example, threshold voltages of transistors in the die can vary as temperature changes. Further, migration of metal in the die can be increased by increased thermal energy. Accordingly, thermal management of electronic devices that include a die is a concern.

SUMMARY

Some examples described herein provide for a heterogeneous integration module (HIM) that includes a thermal management apparatus. In such a HIM, components having different specifications for operating temperatures can be incorporated in close proximity to avoid significant delay of signal propagation between the components and degradation of signals propagated between the components. Additionally, the components can be operated at their respective rated temperatures. An example of the present disclosure is an apparatus.
The apparatus includes a wiring substrate, a first component, a second component, and a thermal management apparatus. The first component and the second component are communicatively coupled together via the wiring substrate. The thermal management apparatus is in thermal communication with the first component and the second component. The thermal management apparatus has a first thermal energy flow path for dissipating thermal energy generated by the first component and has a second thermal energy flow path for dissipating thermal energy generated by the second component. The first thermal energy flow path has a lower thermal resistivity than the second thermal energy flow path.

Another example of the present disclosure is a system. The system includes a heterogeneous integration module. The heterogeneous integration module includes a wiring substrate, a first component attached to the wiring substrate, a second component attached to the wiring substrate, a first thermal interface material disposed on the first component, a second thermal interface material disposed on the second component, and a thermal management apparatus. The first component and the second component are communicatively coupled together through the wiring substrate. The thermal management apparatus contacts the first thermal interface material and the second thermal interface material. The thermal management apparatus has a first thermal energy flow path from where the thermal management apparatus contacts the first thermal interface material and has a second thermal energy flow path from where the thermal management apparatus contacts the second thermal interface material. The first thermal energy flow path has a lower thermal resistivity than the second thermal energy flow path.

A further example of the present disclosure is a method for forming a heterogeneous integration module. A first component and a second component are assembled on a wiring substrate.
A thermal management apparatus is secured in thermal communication with the first component and the second component. The thermal management apparatus has a first thermal energy flow path for thermal energy generated by the first component and has a second thermal energy flow path for thermal energy generated by the second component. The first thermal energy flow path has a lower thermal resistivity than the second thermal energy flow path. These and other aspects may be understood with reference to the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

So that the manner in which the above recited features can be understood in detail, a more particular description, briefly summarized above, may be had by reference to example implementations, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical example implementations and are therefore not to be considered limiting of its scope.

FIG. 1 depicts a simplified cross-sectional view of a first heterogeneous integration module (HIM) comprising a thermal management apparatus according to some examples.

FIG. 2 depicts a simplified cross-sectional view of a second HIM comprising a thermal management apparatus according to some examples.

FIG. 3 depicts a channel pattern of a contact region according to some examples.

FIG. 4 depicts a layout view of a system comprising the first or second HIM according to some examples.

FIG. 5 is a flow diagram of a method for forming a HIM according to some examples.

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one example may be beneficially incorporated in other examples.

DETAILED DESCRIPTION

Some examples described herein provide for a heterogeneous integration module (HIM) that includes a thermal management apparatus.
The HIM includes a first component and a second component attached to a wiring substrate. The first component and the second component are communicatively coupled together through the wiring substrate. The first component and the second component can have respective specifications for operating temperatures that differ. For example, the first component can have a target operating temperature that is lower than the target operating temperature of the second component. In some examples, the first component can be an active optical and/or photonic device, and the second component can be an active electrical device (e.g., a die having a processor, a programmable logic integrated circuit (IC), an application specific IC (ASIC), the like, or a combination thereof). The thermal management apparatus is in thermal communication with the first component and the second component to dissipate thermal energy generated by the first component and the second component. A thermal energy flow path through the thermal management apparatus for dissipating thermal energy generated by the first component can have a thermal resistance that is less than that of a thermal energy flow path through the thermal management apparatus for dissipating thermal energy generated by the second component. In such a HIM, components having different specifications for operating temperatures can be incorporated in close proximity to avoid significant delay of signal propagation between the components and to avoid degradation of signals propagated between the components. Additionally, the components can be operated at their respective rated temperatures. A HIM, as described herein, may be particularly useful for computing and/or networking devices (e.g., fiber optic devices).

Various features are described hereinafter with reference to the figures.
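The relation between the two flow paths can be illustrated with a back-of-the-envelope series thermal resistance calculation. This is only a sketch: the layer thicknesses, conductivities, and contact area below are hypothetical illustration values, and the disclosure does not specify any of them.

```python
# Back-of-the-envelope series thermal resistance for two flow paths through
# a heat-spreading lid. All dimensions, conductivities, and areas are
# hypothetical illustration values; the disclosure does not specify them.

def layer_resistance(thickness_m, conductivity_w_mk, area_m2):
    """Conductive thermal resistance of a planar layer: R = t / (k * A)."""
    return thickness_m / (conductivity_w_mk * area_m2)

AREA = 1e-4        # 1 cm^2 contact area (hypothetical)
K_METAL = 400.0    # W/m-K, on the order of copper
K_TIM = 5.0        # W/m-K, on the order of a thermal interface material

# First path (analogous to an integral island portion): one TIM layer
# between the component and a monolithic piece of metal.
r_path1 = (layer_resistance(100e-6, K_TIM, AREA)       # TIM on component
           + layer_resistance(3e-3, K_METAL, AREA))    # island + main portion

# Second path (analogous to a separate island): two TIM layers, because a
# second TIM sits between the separate island and the main portion.
r_path2 = (layer_resistance(100e-6, K_TIM, AREA)       # TIM on component
           + layer_resistance(2e-3, K_METAL, AREA)     # separate island
           + layer_resistance(100e-6, K_TIM, AREA)     # island-to-main TIM
           + layer_resistance(1e-3, K_METAL, AREA))    # main portion

print(f"R(path 1) = {r_path1:.3f} K/W, R(path 2) = {r_path2:.3f} K/W")
assert r_path1 < r_path2  # the extra TIM interface raises the second path's resistance
```

With these illustrative numbers, the extra TIM interface dominates: the low-conductivity TIM layers contribute far more resistance than the metal, so a path with one TIM layer ends up with a markedly lower total resistance than a path with two.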
It should be noted that the figures may or may not be drawn to scale and that the elements of similar structures or functions are represented by like reference numerals throughout the figures. It should be noted that the figures are only intended to facilitate the description of the features. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. In addition, an illustrated example need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular example is not necessarily limited to that example and can be practiced in any other examples even if not so illustrated or if not so explicitly described.

FIG. 1 illustrates a simplified cross-sectional view of a first HIM 100 comprising a thermal management apparatus according to some examples. The first HIM 100 includes a first component 102 and a second component 106. The first component 102 generally has a specification for an operating temperature that is lower than a specification for an operating temperature of the second component 106. For example, the first component 102 can be or include an active optical and/or photonic device (e.g., for generating an optical signal for a fiber optic port), and the second component 106 can be or include an electrical device (e.g., a die comprising a processor, a programmable logic IC, an ASIC, the like, or a combination thereof), where the optical and/or photonic device has a lower operating temperature than the electrical device. The thermal management apparatus of the first HIM 100, as described below, assists in controlling the operating temperatures of the first component 102 and the second component 106 to be within the respective specifications.

The first HIM 100 includes an interposer 110 and a package substrate 112.
The first component 102 is attached to a first side of the interposer 110 by external connectors 114, and the second component 106 is attached to the first side of the interposer 110 by external connectors 116. The external connectors 114, 116 can be, for example, microbumps or the like, and can form an electrical connection and physical attachment between the first component 102 and the interposer 110 and between the second component 106 and the interposer 110, respectively. A second side of the interposer 110 (opposite from the first side of the interposer 110) is attached to a first side of the package substrate 112 by external connectors 118. The external connectors 118 can be, for example, controlled collapse chip connections (C4) or the like, and can form an electrical connection and physical attachment between the interposer 110 and the package substrate 112. External connectors 120 are attached to a second side of the package substrate 112 (opposite from the first side of the package substrate 112). The external connectors 120 can be, for example, ball grid array (BGA) balls or the like, and may be used to attach the package substrate 112 to a printed circuit board (PCB) (not shown).

The arrangement of the first component 102, second component 106, interposer 110, and package substrate 112 is for illustration purposes. The HIM can have different configurations with more or fewer components. For example, the first component 102 and second component 106 can be attached to the package substrate 112 by external connectors without an interposer intervening therebetween. In other examples, the first component 102 and second component 106 can be integrated in an integrated fan-out package. The first component 102 and second component 106 are electrically and/or communicatively coupled together via the interposer 110 in the illustrated example, or via an interposer, package substrate, and/or metallization of an integrated fan-out package.
Generically, the first component 102 and second component 106 are electrically and/or communicatively coupled together via a wiring substrate. In some examples, the first component 102 and second component 106 are in closer proximity compared with prior techniques that did not integrate similar components in a HIM, and hence, delay of signal propagation and signal degradation due to wiring lengths connecting the first component 102 and second component 106 can be reduced.The thermal management apparatus of the first HIM 100 includes a main portion 140, a vertical support portion 142, and a flange portion 144. The main portion 140 is generally horizontal and overlies, and is in thermal communication with, the first component 102 and the second component 106. The vertical support portion 142 extends vertically downward (e.g., toward the package substrate 112) perpendicular to and from a periphery of, and around, the main portion 140. The flange portion 144 extends horizontally away from and perpendicular to a lower portion of the vertical support portion 142 proximate the package substrate 112.The main portion 140 of the thermal management apparatus has an integral island portion 146. The integral island portion extends vertically downward (e.g., in a same direction that the vertical support portion 142 extends) from a bottom side of the main portion 140 and at a location corresponding to the first component 102. A separate island 148 is mechanically coupled to the bottom side of the main portion 140 and at a location corresponding to the second component 106. A first thermal interface material (TIM) 150 is disposed between and contacting the main portion 140 and the separate island 148. Screws 152 are inserted through the separate island 148 at respective periphery locations, are inserted through respective springs 154, and are threadedly engaged with (e.g., screwed into) the bottom side of the main portion 140. 
The separate island 148 may float along the length of the screws 152. The springs 154 apply a downward force on the separate island 148 (e.g., in a direction away from the bottom side of the main portion 140). A counter force may be applied to the separate island 148 (e.g., in part by the first component 102, as will be described subsequently). Depending on the magnitude of these forces, the separate island 148 may be at any of various positions along the lengths of the screws 152. A second TIM 156 is on a backside of the first component 102, and a third TIM 158 is on a backside of the second component 106. The integral island portion 146 contacts the second TIM 156, and hence, the thermal management apparatus is in thermal communication with the first component 102. The separate island 148 contacts the third TIM 158, and hence, the thermal management apparatus is in thermal communication with the second component 106.The thermal management apparatus is mechanically coupled to the package substrate 112. The thermal management apparatus can be mechanically coupled to the package substrate in numerous ways. In the illustrated example, a stiffener 160 (e.g., a ring stiffener) is adhered, e.g., by an epoxy, to the package substrate 112. The stiffener 160 has blind holes 162. The thermal management apparatus has guide pins 164 extending vertically downward from the vertical support portion 142. The guide pins 164 align with and are inserted into the blind holes 162. The insertion of the guide pins 164 into the blind holes 162 can align the thermal management apparatus with the stiffener 160, and further, can align the thermal management apparatus to the first component 102 and second component 106. Screws 166 are inserted through respective springs 168, through the flange portion 144, and are threadedly engaged with (e.g., screwed into) the stiffener 160. The flange portion 144 (and hence, the thermal management apparatus) may float along the length of the screws 166. 
The springs 168 apply a downward force on the flange portion 144. A counter force may be applied to the main portion 140 (e.g., in part by the first component 102). Depending on the magnitude of these forces, the flange portion 144 may be at any of various positions along the lengths of the screws 166.

A person having ordinary skill in the art will readily understand that the components of the first HIM 100 may be manufactured with various tolerances and/or warpage (e.g., due to thermal cycling). The configuration of screws and springs as described above permits the thermal management apparatus to be securely affixed in the first HIM 100 without generating additional stresses in the first HIM 100. For example, the thermal management apparatus can be held securely in a position where the integral island portion 146 contacts the second TIM 156 by the force of the springs 168 forcing the flange portion 144 (and hence, the main portion 140 and integral island portion 146) downward. Without the springs 168, there may be a risk that the screws 166 would be over-torqued, which could cause additional and deleterious stress in the first HIM 100, or under-torqued, which could prevent the integral island portion 146 from contacting the second TIM 156. Additionally, in this example with the position of the thermal management apparatus largely being determined by the first component 102, the screws 152 and springs 154 can accommodate any further tolerances such that the separate island 148 can still contact the third TIM 158.
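The clamping behavior of the screw-and-spring mounts described above can be sketched with a simple Hooke's-law calculation. This is an illustrative model only: the spring counts, spring rates, and compressions below are hypothetical values chosen for the example, not figures taken from the disclosure.

```python
# Sketch: net clamping force from compressed mounting springs (Hooke's law).
# Spring counts, rates, and compressions are hypothetical illustrative values.

def clamp_force(n_springs: int, k_n_per_mm: float, compression_mm: float) -> float:
    """Total force (N) from n identical springs, each compressed by compression_mm."""
    return n_springs * k_n_per_mm * compression_mm

# Hypothetical: four springs 168 preloading the flange portion 144 ...
flange_force = clamp_force(n_springs=4, k_n_per_mm=2.0, compression_mm=1.5)
# ... and two springs 154 biasing the separate island 148 downward.
island_force = clamp_force(n_springs=2, k_n_per_mm=1.0, compression_mm=1.0)

print(flange_force, island_force)
```

Because the springs, not the screw torque, set the contact force, the force stays bounded by the spring compression even if the screws are run fully home, which is the over-torque protection described above.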
Before the thermal management apparatus is placed on and secured to the stiffener 160, the first TIM 150 may fill a maximum space between the bottom of the main portion 140 and the separate island 148, and when the separate island 148 contacts the third TIM 158, the first TIM 150 may be compressed and extrude out from between the main portion 140 and the separate island 148.

In the illustrated example, the thermal management apparatus is mechanically coupled to the package substrate 112 (e.g., via the stiffener 160). In other examples, the thermal management apparatus can be mechanically coupled to another component instead of the package substrate. For example, the vertical support portion 142 of the thermal management apparatus can be around a package substrate, and the thermal management apparatus may be mechanically coupled to a PCB. The stiffener 160 can be adhered or soldered on the PCB, and the thermal management apparatus can be secured to the stiffener 160 as described with respect to FIG. 1.

A heat exchanger and fluid pump and/or compressor (HEFP/C) 180 is attached to a top side of the main portion 140 of the thermal management apparatus. The HEFP/C 180 can be attached to the main portion 140 by a TIM and/or by screws. The HEFP/C 180 can receive thermal energy from the main portion 140, transfer that thermal energy to a fluid, and circulate the fluid from an outlet 182, through another heat exchanger, and back to an inlet 184. The HEFP/C 180 can also include a fluid compressor that compresses, e.g., a vapor received at the inlet 184 to a liquid. The flow of the fluid through the HEFP/C 180 and the other heat exchanger can be in a single phase (e.g., a liquid) or in two phases (e.g., liquid, vapor, or a mix thereof). In two phases, the thermal management apparatus may permit refrigerant functionality to cool the first component 102 and second component 106.
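The stacks of TIMs and islands between each component and the HEFP/C 180 behave as thermal resistances in series. A minimal lumped model is sketched below; the resistance values and the 40 °C coolant reference temperature are hypothetical (the disclosure gives only the qualitative ordering of the path resistivities), while the 40 W and 205 W component powers are the approximate figures stated elsewhere in the disclosure for the first component 102 and second component 106.

```python
# Sketch: lumped series-resistance model of the two thermal energy flow paths.
# Resistance values (K/W) and the 40 C reference are hypothetical examples.

def junction_temp(t_ref_c: float, power_w: float, resistances_k_per_w: list) -> float:
    """Component temperature (deg C) for a series chain of thermal resistances."""
    return t_ref_c + power_w * sum(resistances_k_per_w)

# First path: second TIM 156 -> integral island portion 146 -> main portion 140.
t_first = junction_temp(40.0, 40.0, [0.10, 0.05])
# Second path additionally crosses the high-resistivity first TIM 150
# (the "thermal brake"), so it has a higher total resistance.
t_second = junction_temp(40.0, 205.0, [0.10, 0.05, 0.15])

print(t_first, t_second)
```

With these illustrative numbers, the higher-power electrical component settles at a higher steady-state temperature than the optical component, consistent with the two components having different operating-temperature specifications.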
In some examples, the HEFP/C 180 can provide for self-filtration to clean the fluid flowing through the HEFP/C 180. For example, the HEFP/C 180 can include an internal volume through which the fluid flows. The internal volume can be sufficiently large to permit the fluid to pool therein. With the fluid pooling in the internal volume of the HEFP/C 180, the flow of the fluid may be sufficiently low to permit any particulates in the fluid to settle out of the fluid, which can provide self-filtration.The thermal management apparatus permits forming different thermal energy flow paths with different thermal resistances. A first thermal energy flow path can be between the first component 102 and the HEFP/C 180 and can be through the second TIM 156, the integral island portion 146, and the main portion 140. As is apparent, there is no change of material or interface between the integral island portion 146 and the main portion 140. A second thermal energy flow path can be between the second component 106 and the HEFP/C 180 and can be through the third TIM 158, the separate island 148, the first TIM 150, and the main portion 140. The first TIM 150 is disposed between the separate island 148 and the main portion 140 in this thermal energy flow path. The combination of materials used for the main portion 140 (and hence, the integral island portion 146, also), the TIMs 150, 156, 158, and the separate island 148 can be selected such that the thermal energy flow paths have different thermal resistances. For example, assuming that the second TIM 156 and third TIM 158 are a same material and that the main portion 140 and the separate island 148 are a same material, the first TIM 150 can be a high thermal resistivity TIM. This can permit the second thermal energy flow path to have a higher thermal resistivity than the first thermal energy flow path. 
In this example, the first TIM 150 can act as a thermal brake.In operation, the first component 102 and the second component 106 both generate thermal energy, e.g., due to the consumption of electrical energy, which can be, in part, converted to thermal energy. Thermal energy generated by the first component 102 can flow in the first thermal energy flow path to the HEFP/C 180, which can then transfer the thermal energy for dissipation. Thermal energy generated by the second component 106 can flow in the second thermal energy flow path to the HEFP/C 180, which can then transfer the thermal energy for dissipation. Since the first thermal energy flow path has a lower thermal resistivity than the second thermal energy flow path, the first component 102 can dissipate thermal energy at a greater rate than the second component 106, and the second component 106 can be maintained at a higher operating temperature than the first component 102. This can permit both the first component 102 and the second component 106 to operate within more desirable, yet different, temperature ranges.FIG. 2 illustrates a simplified cross-sectional view of a second HIM 200 comprising a thermal management apparatus according to some examples. The second HIM 200 of FIG. 2 includes many of the same or similar components illustrated in and described with respect to the first HIM 100 of FIG. 1. Accordingly, further description of such components is omitted here for brevity.In the second HIM 200, a separate island 202 and fourth TIM 204 are in the place of the integral island portion 146 in the first HIM 100. The separate island 202 is mechanically coupled to the bottom side of the main portion 140 and at a location corresponding to the first component 102. The fourth TIM 204 is disposed between and contacting the main portion 140 and the separate island 202. 
Screws 206 are inserted through the separate island 202 at respective periphery locations, are inserted through respective springs 208, and are threadedly engaged with (e.g., screwed into) the bottom side of the main portion 140. The separate island 202 may float along the length of the screws 206. The springs 208 apply a downward force on the separate island 202 (e.g., in a direction away from the bottom side of the main portion 140). A counter force may be applied to the separate island 202 (e.g., in part by the first component 102). Depending on the magnitude of these forces, the separate island 202 may be at any of various positions along the lengths of the screws 206. The separate island 202 contacts the second TIM 156, and hence, the thermal management apparatus is in thermal communication with the first component 102.The thermal management apparatus permits forming different thermal energy flow paths with different thermal resistances. In this example, a first thermal energy flow path can be between the first component 102 and the HEFP/C 180 and can be through the second TIM 156, the separate island 202, the fourth TIM 204, and the main portion 140. The second thermal energy flow path is as described above with respect to FIG. 1. The combination of materials used for the main portion 140, the TIMs 150, 156, 158, 204, and the separate islands 148, 202 can be selected such that the thermal energy flow paths have different thermal resistances. For example, assuming that the second TIM 156 and third TIM 158 are a same material and that the separate islands 148, 202 are a same material, the first TIM 150 can be a high thermal resistivity TIM, and the fourth TIM 204 can be a low thermal resistivity TIM. This can permit the second thermal energy flow path to have a higher thermal resistivity than the first thermal energy flow path. 
In this example, the first TIM 150 can act as a thermal brake.In operation, the first component 102 and the second component 106 both generate thermal energy. Thermal energy generated by the first component 102 can flow in the first thermal energy flow path, and thermal energy generated by the second component 106 can flow in the second thermal energy flow path. Since the first thermal energy flow path has a lower thermal resistivity than the second thermal energy flow path, the first component 102 can dissipate thermal energy at a greater rate than the second component 106, and the second component 106 can be maintained at a higher operating temperature than the first component 102. This can permit both the first component 102 and the second component 106 to operate within more desirable, yet different, temperature ranges.FIG. 3 illustrates a channel pattern of a contact region 300 according to some examples. The contact region 300 is on an island surface 302. The island surface 302 can be any of the surfaces of the integral island portion 146, the separate island 148, and the separate island 202 where respective TIMs 156, 158 contact those surfaces. The contact region 300 has channels 304 micro-machined or etched into the island surface 302. When the thermal management apparatus of FIG. 1 or 2 is used with a HIM, air in a TIM can settle in the channels 304 in the contact region 300, which can permit the respective integral island portion 146, separate island 148, or separate island 202 to be closer to a component on which the TIM is disposed. The closer the integral island portion 146, separate island 148, or separate island 202 is to the component, the less thermal resistance may be present between the thermal management apparatus and the component, which can permit increased conductivity of thermal energy from the component to the thermal management apparatus for dissipation. 
The contact region 300, in which the channels are formed, may protrude from a main portion of the island surface 302 in some examples.

The channels 304 cross at a number of intersections. A first subset of channels 304 extend in a first direction (e.g., vertically in the illustration), and a second subset of channels 304 extend perpendicularly (e.g., horizontally in the illustration) to the first direction and intersect the first subset of channels 304 at a number of locations. A third subset of channels 304 extend in a direction at a forty-five degree angle from the first direction, and a fourth subset of channels 304 extend in a direction at a one hundred thirty-five degree angle from the first direction and perpendicularly to the direction that the third subset of channels 304 extend. The fourth subset of channels 304 intersect the third subset of channels 304 where the first subset and second subset of channels 304 intersect and at centers of the quadrilateral shapes formed by the first and second subsets of channels 304. Neighboring, parallel pairs of the first and second subsets of channels 304 have a first pitch, and neighboring, parallel pairs of the third and fourth subsets of channels 304 have a second pitch that is approximately half of the first pitch.

FIG. 4 illustrates a layout view of a system comprising the first HIM 100 or second HIM 200 (designated “100/200”) according to some examples. The system includes a PCB 402. A power supply 404, a first load package 406, and a second load package 408 are disposed on and attached to the PCB 402. A number of optical ports 410 are disposed on and attached to the PCB 402. The HIM 100/200 is disposed on and attached to the PCB 402 (e.g., via the external connectors 120 (not illustrated)).
Various components of the HIM 100/200 are depicted in the layout view but not described here, except to note that the first component 102 and second component 106 are shown by dashed lines due to those components underlying the main portion 140 of the thermal management apparatus.

A heat exchanger is disposed on the second load package 408 and is fluidly coupled to the HEFP/C 180. The heat exchanger includes fins 420 and a serpentine pipe 422. The serpentine pipe 422 intersects each of the fins 420 at a number of different locations and is mechanically attached to each of the fins 420 at those locations. The serpentine pipe 422 is further fluidly coupled to the outlet 182 and the inlet 184 of the HEFP/C 180.

In operation, thermal energy received at the HEFP/C 180 from the first component 102 and/or second component 106 is transferred to a fluid (e.g., water that is in a liquid and/or vapor phase) in the HEFP/C 180. The HEFP/C 180 then pumps the fluid out the outlet 182 through the serpentine pipe 422. The thermal energy carried by the fluid can be transferred to the serpentine pipe 422 and then to the fins 420 by thermal conduction. The fluid flowing through the serpentine pipe 422 can be in a liquid phase, a vapor phase, or a mixture of liquid and vapor phases. The thermal energy can be dissipated from the serpentine pipe 422 and fins 420 by gas (e.g., air) flow 424. The fluid, e.g., with the thermal energy dissipated, is circulated through the serpentine pipe 422 to the inlet 184 of the HEFP/C 180. The HEFP/C 180 can also compress the fluid (e.g., from a vapor phase to a liquid phase) received at the inlet 184, which may provide refrigerant functionality. The HEFP/C 180 can continuously recirculate the fluid, such as at a rate of about 1.1 L/min and at a pressure of about 4 PSI.
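The stated recirculation rate of about 1.1 L/min can be sanity-checked with a single-phase energy balance. The sketch below assumes liquid water as the working fluid (water is one example mentioned above; its density and specific heat are standard values, not figures from the disclosure) and uses the approximate component powers stated elsewhere in the disclosure (about 40 W for the first component 102 and 205 W for the second component 106).

```python
# Sketch: single-phase coolant energy balance at the stated ~1.1 L/min.
# Liquid-water properties are assumed; the disclosure does not specify the
# working fluid's thermophysical properties.

RHO_WATER = 1000.0  # kg/m^3, density of liquid water
CP_WATER = 4186.0   # J/(kg*K), specific heat of liquid water

def coolant_delta_t(power_w: float, flow_l_per_min: float) -> float:
    """Temperature rise (K) of a liquid water loop carrying power_w."""
    m_dot = flow_l_per_min / 1000.0 / 60.0 * RHO_WATER  # L/min -> kg/s
    return power_w / (m_dot * CP_WATER)

# Heat from the first component 102 (~40 W) plus second component 106 (~205 W):
dt = coolant_delta_t(40.0 + 205.0, 1.1)
print(round(dt, 1))
```

With these assumptions the loop needs only a few kelvin of liquid-phase temperature rise to carry the component heat, which suggests the modest 1.1 L/min flow is plausible for this load; two-phase (refrigerant) operation would carry still more heat per unit flow via latent heat.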
In this example, the heat exchanger comprising the serpentine pipe 422 and fins 420 can act as a primary heat sink to thermally manage active devices on the PCB 402.

In some examples, the system of FIG. 4 has a form factor of a height of 15 inches, a width of 17 inches, and a thickness of 1.7 inches. It is contemplated that in some of these examples, the system is capable of dissipating 1 kW of thermal energy. For example, the power supply 404 can generate approximately 100 W to 150 W; the first load package 406 and second load package 408 can each generate approximately 200 W; the PCB 402 can generate approximately 50 W; the first component 102 can generate approximately 40 W; and the second component 106 can generate approximately 205 W. This thermal energy can be dissipated by the system.

FIG. 5 is a flow diagram of a method 500 for forming a HIM according to some examples. Various operations of the method 500 can be performed sequentially or in parallel. At block 502, a first component 102 and a second component 106 are assembled on a wiring substrate to be electrically and communicatively coupled together. The first component 102 and second component 106 can be assembled on a wiring substrate by any acceptable technique, which would be readily apparent to a person having ordinary skill in the art. In the foregoing illustrated examples, the first component 102 and the second component 106 are attached to the interposer 110, such as by reflowing the external connectors 114, 116. The interposer 110 is attached to the package substrate 112, such as by reflowing the external connectors 118.

At block 504, a thermal management apparatus is formed. The main portion 140, vertical support portion 142, guide pins 164, and, if applicable, integral island portion 146 of the thermal management apparatus can be machined from any heat-conducting material, such as a metal material, like copper, aluminum, titanium, or the like.
Similarly, the separate island 148 and, if applicable, separate island 202 of the thermal management apparatus can be machined from any heat-conducting material, such as a metal material, like copper, aluminum, titanium, or the like. The material of the main portion 140 and the respective materials of the separate islands 148, 202 can be the same or different. The first TIM 150 can be applied to the separate island 148, and then, the separate island 148 can be secured to the bottom of the main portion 140 using the screws 152 and springs 154. If applicable, the fourth TIM 204 can be applied to the separate island 202, and then, the separate island 202 can be secured to the bottom of the main portion 140 using the screws 206 and springs 208. The first TIM 150 and the fourth TIM 204 can each be, for example, a thermal grease that includes a liquid matrix and a thermally conductive filler. When both the first TIM 150 and fourth TIM 204 are implemented, the first TIM 150 can have a lower ratio of filler to matrix and/or a filler with lower thermal conductivity than the fourth TIM 204.

At block 506, the thermal management apparatus is secured on the first component 102 and the second component 106. The thermal management apparatus is secured to be in thermal communication with the first component 102 and the second component 106. Depending on how the first component 102 and the second component 106 were assembled, the thermal management apparatus can be secured on the first component 102 and the second component 106 by mechanically coupling the thermal management apparatus to, for example, a package substrate or a PCB. In the foregoing examples, the thermal management apparatus is mechanically coupled to the package substrate 112. According to the foregoing examples, a stiffener 160 having blind holes 162 (corresponding to the guide pins 164) can be manufactured, such as by machining a rigid material, such as a metal.
The stiffener 160 can be attached to the package substrate 112 by an adhesive. In some examples where the stiffener 160 is attached to a PCB, the stiffener 160 can be attached by an adhesive or by soldering the stiffener 160 to a metal on the surface of the PCB. The second TIM 156 and third TIM 158 are applied to the first component 102 and the second component 106, respectively. The second TIM 156 and third TIM 158 can each be, for example, a thermal grease that includes a liquid matrix and a thermally conductive filler. The second TIM 156 and third TIM 158 can have a same material composition, or the third TIM 158 can have a lower ratio of filler to matrix and/or a filler with lower thermal conductivity than the second TIM 156. The thermal management apparatus can then be placed on the first component 102 and the second component 106 and can be attached to the stiffener 160 in that position by the guide pins 164 being inserted in the blind holes 162 and by the screws 166 and springs 168.

Thereafter, the HIM, if not assembled on a PCB, can be attached to a PCB, such as illustrated in and described with respect to FIG. 4. A HEFP/C 180 can be attached to the thermal management apparatus, such as by a TIM and/or by screws, and can be fluidly coupled to a heat exchanger, such as one comprising a serpentine pipe 422 and fins 420 as illustrated in and described with respect to FIG. 4. The HIM, in the system included on the PCB, can be operated as described above.

The disclosed technology may also be expressed in the following non-limiting examples.

Example 1.
An apparatus comprising: a wiring substrate; a first component; a second component, the first component and the second component being communicatively coupled together via the wiring substrate; and a thermal management apparatus in thermal communication with the first component and the second component, the thermal management apparatus having a first thermal energy flow path for dissipating thermal energy generated by the first component and having a second thermal energy flow path for dissipating thermal energy generated by the second component, the first thermal energy flow path having a lower thermal resistivity than the second thermal energy flow path.Example 2. The apparatus of example 1 , wherein the first component is an optical device, photonic device, or a combination thereof, and the second component is an electrical device.Example 3. The apparatus of example 1 , wherein: a first thermal interface material is disposed on the first component; a second thermal interface material is disposed on the second component; and the thermal management apparatus is disposed on and contacting the first thermal interface material and the second thermal interface material.Example 4. The apparatus of example 3, wherein: the thermal management apparatus comprises: a main portion; an integral island portion integrally formed with the main portion; a separate island attached to the main portion; and a third thermal interface material disposed between the separate island and the main portion; the integral island portion contacts the first thermal interface material; the first thermal energy flow path is through the integral island portion and the main portion; the separate island contacts the second thermal interface material; and the second thermal energy flow path is through the separate island, the third thermal interface material, and the main portion.Example 5. 
The apparatus of example 3, wherein: the thermal management apparatus comprises: a main portion; a first separate island attached to the main portion; a third thermal interface material disposed between the first separate island and the main portion; a second separate island attached to the main portion; and a fourth thermal interface material disposed between the second separate island and the main portion; the first separate island contacts the first thermal interface material; the first thermal energy flow path is through the first separate island, the third thermal interface material, and the main portion; the second separate island contacts the second thermal interface material; and the second thermal energy flow path is through the second separate island, the fourth thermal interface material, and the main portion.Example 6. The apparatus of example 1 further comprising a heat exchanger attached to the thermal management apparatus, the heat exchanger including a fluid pump, a compressor, or a combination thereof.Example 7. The apparatus of example 1 further comprising: a package substrate; and a stiffener mechanically attached to the package substrate; and wherein: the wiring substrate is an interposer; the first component and the second component are each attached to the interposer; the interposer is attached to the package substrate; the stiffener is laterally around the interposer; the thermal management apparatus includes a main portion, a support portion extending perpendicularly from the main portion, and a flange portion extending perpendicularly from the support portion and away from the main portion; the main portion is in thermal communication with the first component and the second component; and the flange portion is attached to the stiffener. Example 8.
A system comprising: a heterogeneous integration module comprising: a wiring substrate; a first component attached to the wiring substrate; a second component attached to the wiring substrate, the first component and the second component being communicatively coupled together through the wiring substrate; a first thermal interface material disposed on the first component; a second thermal interface material disposed on the second component; and a thermal management apparatus contacting the first thermal interface material and the second thermal interface material, the thermal management apparatus having a first thermal energy flow path from where the thermal management apparatus contacts the first thermal interface material and having a second thermal energy flow path from where the thermal management apparatus contacts the second thermal interface material, the first thermal energy flow path having a lower thermal resistivity than the second thermal energy flow path.Example 9. The system of example 8, wherein the first component is an optical device, a photonic device, or a combination thereof, and the second component is an electrical device.Example 10. The system of example 8, wherein: the thermal management apparatus comprises: a main portion; an integral island portion integrally formed with the main portion; a separate island attached to the main portion; and a third thermal interface material disposed between the separate island and the main portion; the integral island portion contacts the first thermal interface material; the first thermal energy flow path is through the integral island portion and the main portion; the separate island contacts the second thermal interface material; and the second thermal energy flow path is through the separate island, the third thermal interface material, and the main portion.Example 11. 
The system of example 8, wherein: the thermal management apparatus comprises: a main portion; a first separate island attached to the main portion; a third thermal interface material disposed between the first separate island and the main portion; a second separate island attached to the main portion; and a fourth thermal interface material disposed between the second separate island and the main portion; the first separate island contacts the first thermal interface material; the first thermal energy flow path is through the first separate island, the third thermal interface material, and the main portion; the second separate island contacts the second thermal interface material; and the second thermal energy flow path is through the second separate island, the fourth thermal interface material, and the main portion.Example 12. The system of example 8 further comprising a printed circuit board, the heterogeneous integration module being attached to the printed circuit board.Example 13. The system of example 12 further comprising: a first heat exchanger attached to the thermal management apparatus, the first heat exchanger comprising a fluid pump, a compressor, or a combination thereof; and a second heat exchanger disposed on the printed circuit board, the second heat exchanger comprising a serpentine pipe and fins, the serpentine pipe being attached to and extending through the fins, the serpentine pipe being fluidly coupled with the first heat exchanger.Example 14. The system of example 13, wherein the first heat exchanger includes an internal volume through which fluid is to flow, the internal volume permitting pooling of the fluid in operation.Example 15. 
The system of example 8, wherein the heterogeneous integration module further comprises: a package substrate; and a stiffener mechanically attached to the package substrate; and wherein: the wiring substrate is an interposer; the first component and the second component are each attached to the interposer; the interposer is attached to the package substrate; the stiffener is laterally around the interposer; the thermal management apparatus includes a main portion, a support portion extending perpendicularly from the main portion, and a flange portion extending perpendicularly from the support portion and away from the main portion; the main portion is in thermal communication with the first component and the second component; and the flange portion is attached to the stiffener. Example 16. A method for forming a heterogeneous integration module, the method comprising: assembling a first component and a second component on a wiring substrate; and securing a thermal management apparatus in thermal communication with the first component and the second component, the thermal management apparatus having a first thermal energy flow path for thermal energy generated by the first component and having a second thermal energy flow path for thermal energy generated by the second component, the first thermal energy flow path having a lower thermal resistivity than the second thermal energy flow path.Example 17. The method of example 16, wherein the first component is an optical device, a photonic device, or a combination thereof, and the second component is an electrical device.Example 18. 
The method of example 16, wherein: the thermal management apparatus comprises: a main portion; an integral island portion integrally formed with the main portion; a separate island attached to the main portion; and a thermal interface material disposed between the separate island and the main portion; the first thermal energy flow path is through the integral island portion and the main portion; and the second thermal energy flow path is through the separate island, the thermal interface material, and the main portion.

Example 19. The method of example 16, wherein: the thermal management apparatus comprises: a main portion; a first separate island attached to the main portion; a first thermal interface material disposed between the first separate island and the main portion; a second separate island attached to the main portion; and a second thermal interface material disposed between the second separate island and the main portion; the first thermal energy flow path is through the first separate island, the first thermal interface material, and the main portion; and the second thermal energy flow path is through the second separate island, the second thermal interface material, and the main portion.

Example 20. The method of example 16, wherein: assembling the first component and the second component on the wiring substrate comprises: attaching the first component and the second component to an interposer, the interposer being the wiring substrate; and attaching the interposer to a package substrate; and securing the thermal management apparatus comprises: attaching a stiffener to the package substrate; and attaching the thermal management apparatus to the stiffener.

While the foregoing is directed to specific examples, other and further examples may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
A memory system includes a memory hub controller coupled to a plurality of memory modules, each of which includes a memory hub. The memory hubs each include a transmit interface having a data organization system that organizes a command header and data for each of a plurality of memory transactions into lane groups, each of which contains a predetermined number of lanes. Each of the lanes contains either parallel command header bits or parallel data bits. The lane groups are then converted to a serial stream of lanes and transmitted from the memory hubs through a high-speed bus. The lane groups are organized so that they are always filled with lanes containing either a command header or data. As a result, the high-speed bus is never idle during transmission of memory transactions from the memory hubs, thereby maximizing the memory bandwidth of the memory system.
CLAIMS

What is claimed is:

1. A memory module, comprising: a plurality of memory devices; and a memory hub, comprising: a memory controller coupled to the memory devices; at least one receive interface coupled to the memory controller; and at least one transmit interface coupled to the memory controller to transmit memory transactions from the memory module, each transmit interface receiving memory transactions each of which comprises a command header and data having a variable number of data bits, each transmit interface including a data organization system organizing the command header and data into lane groups each of which includes a plurality of lanes each of which contains a plurality of parallel command header bits or parallel data bits, the data organization system organizing the lane groups so that all of the lanes in each lane group are filled with either command header bits or data bits, the data organization system being operable to convert each of the lane groups into a serial stream of the lanes for transmission by the transmit interface, each of the transmitted lanes containing a plurality of parallel command header bits or parallel data bits.

2. The memory module of claim 1 wherein each of the lane groups comprises eight lanes.

3. The memory module of claim 1 wherein each of the lanes comprises 32 parallel bits of command header or data.

4. The memory module of claim 1 wherein the at least one transmit interface comprises an upstream transmit interface and a downstream transmit interface each of which comprises the data organization system.

5. The memory module of claim 1 wherein the memory devices comprise dynamic random access memory devices.

6.
The memory module of claim 1 wherein the data organization system comprises: a data organization unit organizing the command header and data into lane groups each of which includes a plurality of lanes containing either a command header or data, the data organization unit organizing the lane groups so that all of the lanes in each lane group are filled with either command header bits or data bits; and a parallel-to-serial converter converting each of the lane groups into a serial stream of the lanes for transmission by the transmit interface.

7. The memory module of claim 6 wherein the data organization unit comprises: a data buffer storing respective data for a plurality of the transactions, the data for each of the transactions being selectively passed from the data buffer; and a command queue storing respective command headers for a plurality of the transactions, the command header for each of the transactions being selectively passed from the command queue with the data for the corresponding transaction being passed from the data buffer.

8.
The memory module of claim 7, wherein the data organization unit further comprises: a multiplexer coupled to receive the data stored in the data buffer for each of the transactions and the command headers stored in the command queue for each of the transactions, the multiplexer being operable to couple the data for each of the transactions and the command header for each of the transactions to an output port responsive to multiplexer control signals; an arbitration unit coupled to at least one of the data buffer and the command queue to receive information indicative of the data and command headers for the transactions stored in the data buffer and command queue, respectively, the arbitration unit being operable to generate the control signals responsive to the information indicative of the data and command headers to cause the multiplexer to couple a lane group of either data or a command header and data for at least one of the transactions to the output port of the multiplexer.

9. The memory module of claim 8 further comprising a parallel-to-serial converter coupled to the output port of the multiplexer, the parallel-to-serial converter being operative to convert the lane group at the output port of the multiplexer into a serial stream of the lanes.

10. The memory module of claim 1 wherein the data organization unit is configurable to vary the number of lanes in each lane group that are coupled from the data organization unit during each cycle of a clock signal.

11. The memory module of claim 1 wherein the command header and data for each of the transactions comprise a memory packet.

12.
A memory module, comprising: a plurality of memory devices; and a memory hub, comprising: a memory controller coupled to the memory devices; at least one receive interface coupled to the memory controller; and at least one transmit interface coupled to the memory controller to transmit memory transactions from the memory module, each transmit interface receiving memory transactions each of which comprises a command header and data having a variable number of data bits, each transmit interface including a data organization system that is operable to organize the command header and data into groups each of which contains a predetermined number of sub-groups of a predetermined size, each of the sub-groups containing a plurality of parallel command header bits or data bits, each sub-group containing data for a first transaction being immediately followed by a sub-group containing either additional data for the first transaction or the command header for a second transaction so that each group is filled with sub-groups containing either command header bits or data bits, the data organization system further being operable to output each group of data as a serial stream of the sub-groups.

13. The memory module of claim 12 wherein each of the groups comprises eight sub-groups.

14. The memory module of claim 12 wherein each of the sub-groups comprises 32 parallel bits of command header or data.

15. The memory module of claim 12 wherein the at least one transmit interface comprises an upstream transmit interface and a downstream transmit interface each of which comprises the data organization system.

16. The memory module of claim 12 wherein the memory devices comprise dynamic random access memory devices.

17.
The memory module of claim 12 wherein the data organization system comprises: a data organization unit organizing the command header and data into groups each of which includes a plurality of the sub-groups containing either a command header or data, the data organization unit organizing the groups so that all of the sub-groups in each group are filled with either command header bits or data bits; and a parallel-to-serial converter converting each of the groups into a serial stream of the sub-groups for transmission by the transmit interface.

18. The memory module of claim 17 wherein the data organization unit comprises: a data buffer storing respective data for a plurality of the transactions, the data for each of the transactions being selectively passed from the data buffer; and a command queue storing respective command headers for a plurality of the transactions, the command header for each of the transactions being selectively passed from the command queue with the data for the corresponding transaction being passed from the data buffer.

19.
The memory module of claim 18, wherein the data organization unit further comprises: a multiplexer coupled to receive the data stored in the data buffer for each of the transactions and the command headers stored in the command queue for each of the transactions, the multiplexer being operable to couple the data for each of the transactions and the command header for each of the transactions to an output port responsive to multiplexer control signals; an arbitration unit coupled to at least one of the data buffer and the command queue to receive information indicative of the data and command headers for the transactions stored in the data buffer and command queue, respectively, the arbitration unit being operable to generate the control signals responsive to the information indicative of the data and command headers to cause the multiplexer to couple a group of sub-groups containing either data or a command header and data for at least one of the transactions to the output port of the multiplexer.

20. The memory module of claim 19 further comprising a parallel-to-serial converter coupled to the output port of the multiplexer, the parallel-to-serial converter being operative to convert the group at the output port of the multiplexer into a serial stream of the sub-groups.

21. The memory module of claim 17 wherein the data organization unit is configurable to vary the number of lanes in each lane group that are coupled from the data organization unit during each cycle of a clock signal.

22. The memory module of claim 12 wherein the command header and data for each of the transactions comprise a memory packet.

23.
A data organization system, comprising: a data organization unit organizing a command header and data for each of a plurality of memory transactions into lane groups each of which includes a plurality of lanes each of which contains a plurality of parallel command header bits or parallel data bits, the data organization unit organizing the lane groups so that all of the lanes in each lane group are filled with either command header bits or data bits; and a parallel-to-serial converter converting each of the lane groups into a serial stream of the lanes each of which contains a plurality of parallel command header bits or parallel data bits.

24. The data organization system of claim 23 wherein each of the lane groups comprises eight lanes.

25. The data organization system of claim 23 wherein each of the lanes comprises 32 parallel bits of command header or data.

26. The data organization system of claim 23, further comprising: a data buffer storing respective data for a plurality of the transactions, the data for each of the transactions being selectively passed from the data buffer; and a command queue storing respective command headers for a plurality of the transactions, the command header for each of the transactions being selectively passed from the command queue with the data for the corresponding transaction being passed from the data buffer.

27.
The data organization system of claim 26, wherein the data organization unit further comprises: a multiplexer coupled to receive the data stored in the data buffer for each of the transactions and the command headers stored in the command queue for each of the transactions, the multiplexer being operable to couple the data for each of the transactions and the command header for each of the transactions to an output port responsive to multiplexer control signals; an arbitration unit coupled to at least one of the data buffer and the command queue to receive information indicative of the data and command headers for the transactions stored in the data buffer and command queue, respectively, the arbitration unit being operable to generate the control signals responsive to the information indicative of the data and command headers to cause the multiplexer to couple a lane group of either data or a command header and data for at least one of the transactions to the output port of the multiplexer.

28. The data organization system of claim 23 wherein the data organization unit is configurable to vary the number of lanes in each lane group that are coupled from the data organization unit during each cycle of a clock signal.

29.
A processor-based system, comprising: a processor having a processor bus; a system controller coupled to the processor bus, the system controller having a peripheral device port; at least one input device coupled to the peripheral device port of the system controller; at least one output device coupled to the peripheral device port of the system controller; at least one data storage device coupled to the peripheral device port of the system controller; and a memory hub controller coupled to the processor bus; a plurality of memory modules coupled to the memory hub controller by at least one bus, each of the memory modules comprising: a plurality of memory devices; and a memory hub, comprising: a memory controller coupled to the memory devices; a receive interface coupled to the memory controller through a bus system; and a transmit interface coupled to the memory controller through the bus system to transmit memory transactions from the memory module to the memory controller, the transmit interface receiving memory transactions each of which comprises a command header and data having a variable number of data bits, the transmit interface including a data organization system organizing the command header and data into lane groups each of which includes a plurality of lanes each of which contains a plurality of parallel command header bits or parallel data bits, the data organization system organizing the lane groups so that all of the lanes in each lane group are filled with either command header bits or data bits, the data organization system being operable to convert each of the lane groups into a serial stream of the lanes for transmission by the transmit interface, each of the transmitted lanes containing a plurality of parallel command header bits or parallel data bits.

30. The processor-based system of claim 29 wherein each of the lane groups comprises eight lanes.

31.
The processor-based system of claim 29 wherein each of the lanes comprises 32 parallel bits of command header or data.

32. The processor-based system of claim 29 wherein the bus system comprises a downstream bus for coupling memory transactions transmitted by the memory modules away from the memory controller and an upstream bus for coupling memory transactions transmitted by the memory modules toward the memory controller, and wherein the transmit interface comprises an upstream transmit interface coupled to the upstream bus and a downstream transmit interface coupled to the downstream bus, each of the upstream and downstream transmit interfaces including a respective one of the data organization systems.

33. The processor-based system of claim 29 wherein the memory devices comprise dynamic random access memory devices.

34. The processor-based system of claim 29 wherein the data organization system comprises: a data organization unit organizing the command header and data into lane groups each of which includes a plurality of lanes containing either a command header or data, the data organization unit organizing the lane groups so that all of the lanes in each lane group are filled with either command header bits or data bits; and a parallel-to-serial converter converting each of the lane groups into a serial stream of the lanes for transmission by the transmit interface.

35. The processor-based system of claim 34 wherein the data organization unit comprises: a data buffer storing respective data for a plurality of the transactions, the data for each of the transactions being selectively passed from the data buffer; and a command queue storing respective command headers for a plurality of the transactions, the command header for each of the transactions being selectively passed from the command queue with the data for the corresponding transaction being passed from the data buffer.

36.
The processor-based system of claim 35, wherein the data organization unit further comprises: a multiplexer coupled to receive the data stored in the data buffer for each of the transactions and the command headers stored in the command queue for each of the transactions, the multiplexer being operable to couple the data for each of the transactions and the command header for each of the transactions to an output port responsive to multiplexer control signals; an arbitration unit coupled to at least one of the data buffer and the command queue to receive information indicative of the data and command headers for the transactions stored in the data buffer and command queue, respectively, the arbitration unit being operable to generate the control signals responsive to the information indicative of the data and command headers to cause the multiplexer to couple a lane group of either data or a command header and data for at least one of the transactions to the output port of the multiplexer.

37. The processor-based system of claim 36 further comprising a parallel-to-serial converter coupled to the output port of the multiplexer, the parallel-to-serial converter being operative to convert the lane group at the output port of the multiplexer into a serial stream of the lanes.

38. The processor-based system of claim 34 wherein the data organization unit is configurable to vary the number of lanes in each lane group that are coupled from the data organization unit during each cycle of a clock signal.

39. The processor-based system of claim 29 wherein the command header and data for each of the transactions comprise a memory packet.

40.
A method of transmitting memory transactions each of which comprises a command header and a variable amount of data, the method comprising: organizing the command header and data into groups each of which contains a predetermined number of sub-groups of a predetermined size, each of the sub-groups containing a plurality of parallel command header bits or data bits, each sub-group containing data for a first transaction being immediately followed by a sub-group containing either additional data for the first transaction or the command header for a second transaction so that each group is filled with sub-groups containing either command header bits or data bits; and transmitting each group of data as a serial stream of the sub-groups each of which includes the plurality of parallel command header bits or data bits.

41. The method of claim 40 wherein the act of organizing the command header and data into groups comprises organizing the command header and data into groups each of which contains eight sub-groups.

42. The method of claim 40 wherein the act of organizing the command header and data into groups containing a predetermined number of sub-groups comprises organizing the command header and data so that each sub-group comprises 32 parallel bits of command header or data.

43. The method of claim 40, further comprising varying the quantity of sub-groups in each group.

44. A method of transmitting memory transactions each of which comprises a command header and a variable amount of data, the method comprising organizing the command header and data into lane groups each of which contains a plurality of lanes of a predetermined size, each of the lanes containing a plurality of parallel command header bits or data bits, the lane groups being organized so that all of the lanes in each lane group are filled with either command header bits or data bits.

45.
The method of claim 44 further comprising converting each of the lane groups into a serial stream of the lanes each of which contains a plurality of parallel command header bits or parallel data bits.

46. The method of claim 44 wherein the act of organizing the command header and data into lane groups comprises organizing the command header and data into lane groups each of which contains eight lanes.

47. The method of claim 44 wherein the act of organizing the command header and data into lane groups each of which contains a predetermined number of lanes comprises organizing the command header and data so that each lane comprises 32 parallel bits of command header or data.

48. The method of claim 44, further comprising varying the number of lanes in each lane group.
SYSTEM AND METHOD FOR ORGANIZING DATA TRANSFERS WITH MEMORY HUB MEMORY MODULES

CROSS-REFERENCE TO RELATED APPLICATIONS

[001] The present application claims the benefit of the filing date of U.S. Patent Application No. 10/804,608, entitled SYSTEM AND METHOD FOR ORGANIZING DATA TRANSFERS WITH MEMORY HUB MEMORY MODULES, filed March 18, 2004, which is incorporated herein by reference.

TECHNICAL FIELD

[002] The present invention relates to processor-based systems, and more particularly, to processor-based systems having a memory module with a memory hub coupling several memory devices to a processor or other memory access device.

BACKGROUND OF THE INVENTION

[003] Processor-based systems, such as computer systems, use memory devices, such as dynamic random access memory ("DRAM") devices, as system memory to store instructions and data that are accessed by a processor. In a typical computer system, the processor communicates with the system memory through a processor bus and a memory controller. The processor issues a memory request, which includes a memory command, such as a read command, and an address designating the location from which data or instructions are to be read or to which data or instructions are to be written. The memory controller uses the command and address to generate appropriate command signals as well as row and column addresses, which are applied to the system memory. In response to the commands and addresses, data is transferred between the system memory and the processor. The memory controller is often part of a system controller, which also includes bus bridge circuitry for coupling the processor bus to an expansion bus, such as a PCI bus.

[004] Although the operating speed of memory devices has continuously increased, this increase in operating speed has not kept pace with increases in the operating speed of processors.
Even slower has been the increase in operating speed of memory controllers coupling processors to memory devices. The relatively slow speed of memory controllers and memory devices limits the data bandwidth between the processor and the memory devices.

[005] One approach to increasing the data bandwidth to and from memory devices is to use multiple memory devices coupled to the processor through a memory hub as shown in Figure 1. A computer system 100 using a memory hub architecture includes a processor 104 for performing various computing functions, such as executing specific software to perform specific calculations or tasks. The processor 104 includes a processor bus 106 that normally includes an address bus, a control bus, and a data bus. The processor bus 106 is typically coupled to cache memory 108, which is typically static random access memory ("SRAM"). Finally, the processor bus 106 is coupled to a system controller 110, which is also sometimes referred to as a bus bridge.

[006] The system controller 110 contains a memory hub controller 128 that is coupled to the processor 104. The memory hub controller 128 is also coupled to several memory modules 130a-n through a bus system 134. Each of the memory modules 130a-n includes a memory hub 140 coupled to several memory devices 148 through command, address and data buses, collectively shown as bus 150. The memory hub 140 efficiently routes memory requests and responses between the controller 128 and the memory devices 148. Computer systems employing this architecture can have a higher bandwidth because the processor 104 can access one memory module 130a-n while another memory module 130a-n is responding to a prior memory access. For example, the processor 104 can output write data to one of the memory modules 130a-n in the system while another memory module 130a-n in the system is preparing to provide read data to the processor 104.
The operating efficiency of computer systems using a memory hub architecture can make it more practical to vastly increase the data bandwidth of a memory system. A memory hub architecture can also provide greatly increased memory capacity in computer systems.

[007] The system controller 110 also serves as a communications path to the processor 104 for a variety of other components. More specifically, the system controller 110 includes a graphics port that is typically coupled to a graphics controller 112, which is, in turn, coupled to a video terminal 114. The system controller 110 is also coupled to one or more input devices 118, such as a keyboard or a mouse, to allow an operator to interface with the computer system 100. Typically, the computer system 100 also includes one or more output devices 120, such as a printer, coupled to the processor 104 through the system controller 110. One or more data storage devices 124 are also typically coupled to the processor 104 through the system controller 110 to allow the processor 104 to store data or retrieve data from internal or external storage media (not shown). Examples of typical storage devices 124 include hard and floppy disks, tape cassettes, and compact disk read-only memories (CD-ROMs).

[008] A memory hub architecture can greatly increase the rate at which data can be stored in and retrieved from memory because it allows memory requests in each of several memory modules 130 to be simultaneously serviced. In fact, a memory system using several memory modules each containing a memory hub can collectively transmit and receive data at such a high rate that the bus system 134 can become the "bottleneck" limiting the data bandwidth of the memory system.

[009] Two techniques have been used to maximize the data bandwidth of memory systems using a memory hub architecture.
First, rather than using traditional address, data and control buses, the address, data and control bits for each memory request or "transaction" are sent together in a single packet. The packet includes a command header followed by read or write data. The command header includes bits corresponding to a memory command, such as a write or a read command, identifying bits that specify the memory module to which the request is directed, and address bits that specify the address of the memory devices 148 in the specified memory module that is being accessed with the request. The command header may also specify the quantity of read or write data that follows the command header. The use of a packetized memory system allows the memory hub controller 128 to issue a memory request by simply transmitting a packet instead of transmitting a sequence of command, address and, in the case of a write request, write data signals. As a result, the memory hub controller 128 can issue memory requests at a faster rate. Furthermore, a packetized memory system frees the memory hub controller 128 from having to keep track of the processing of each memory request. Instead, the memory hub controller 128 need only transmit the packet. The memory hub 140 in the memory module 130 to which the memory request is directed then processes the memory request without further interaction with the memory hub controller 128. In the case of a read request, the memory hub 140 transmits a packet back to the memory hub controller 128, either directly or through intervening memory modules 130, that contains the read data as well as identifying bits in a command header identifying the read data. The memory hub controller 128 uses the identifying bits to associate the read data with a specific memory request.
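The packet layout just described (a command header carrying a command, module-identifying bits, address bits, and a data count, followed by the data) can be sketched in code. This is an illustrative model only: the specific field widths below (4-bit command, 4-bit module ID, 16-bit address, 8-bit data count) and the function names are assumptions made for the sketch, not the encoding of an actual memory hub.

```python
# Illustrative sketch of a 32-bit command header followed by 32-bit data words.
# Field widths are assumed for the example, not taken from the specification.

def make_header(command: int, module_id: int, address: int, data_count: int) -> int:
    """Pack a hypothetical 32-bit command header:
    [31:28] command, [27:24] module ID, [23:8] address, [7:0] data word count."""
    assert 0 <= command < 16 and 0 <= module_id < 16
    assert 0 <= address < (1 << 16) and 0 <= data_count < 256
    return (command << 28) | (module_id << 24) | (address << 8) | data_count

def make_packet(command, module_id, address, data_words):
    """A packet is the command header followed by the transaction's data."""
    header = make_header(command, module_id, address, len(data_words))
    return [header] + list(data_words)

# A write of 7 data words (like transaction T0 in Figure 2) yields an
# 8-element packet: one header word plus seven 32-bit data words.
packet = make_packet(command=0x2, module_id=0x1, address=0x0040,
                     data_words=[0x11111111 * i for i in range(7)])
```

Because the header carries the data count, a receiver can delimit packets in a serial stream without any separate control signals, which is what frees the hub controller from tracking each request.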
[010] The second technique that has been used to maximize the data bandwidth of memory systems using a memory hub architecture is to implement the bus system 134 using separate high-speed "downstream" and "upstream" buses (not shown in Figure 1). The high-speed downstream bus couples data from the memory hub controller 128 to the memory modules 130 and from the memory modules 130 to memory modules 130 located further away from the memory hub controller 128. The high-speed upstream bus couples data from memory modules 130 to the memory hub controller 128 and from the memory modules 130 to memory modules 130 located closer to the memory hub controller 128.

[011] One approach to forming packets for a memory hub system that has been proposed will now be explained with reference to Figure 2, in which several 32-bit groups of data from each of several memory accesses or "transactions" are shown in the right hand side of Figure 2. Transaction T0 consists of 7 32-bit groups D0-D6 of data, which are coupled to a data organization unit 160 on a 96-bit bus 162. The bus 162 is therefore capable of coupling three 32-bit groups of data to the data organization unit 160 each cycle of a core clock CCLK signal, i.e., a clock signal that is used internally in the memory hubs 140. Transaction T1 also consists of 7 32-bit groups D0-D6 of data, and it is coupled to the data organization unit 160 on a 64-bit bus 164, which is capable of coupling two 32-bit groups of data to the data organization unit 160 each CCLK cycle. Transaction T2 consists of only 5 32-bit groups D0-D4 of data, and it is also coupled to the data organization unit 160 on a 64-bit bus 166, two 32-bit groups each CCLK cycle. Finally, transaction T3 consists of 12 32-bit groups D0-D11 of data, and it is coupled to the data organization unit 160 on a 128-bit bus 168, which is capable of coupling four 32-bit groups of data to the data organization unit 160 each CCLK cycle.
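The bus-width arithmetic in the paragraph above is simple to check: each internal bus delivers (width / 32) 32-bit groups per core-clock cycle, so the number of CCLK cycles needed for a transaction's data is the word count divided by that rate, rounded up. A minimal sketch (function names are mine):

```python
# Each internal bus delivers (width / 32) 32-bit groups per CCLK cycle.

def groups_per_cclk(bus_width_bits: int) -> int:
    return bus_width_bits // 32

def cclk_cycles(word_count: int, bus_width_bits: int) -> int:
    """CCLK cycles to couple word_count 32-bit groups into the data
    organization unit over a bus of the given width (ceiling division)."""
    return -(-word_count // groups_per_cclk(bus_width_bits))

# The buses from Figure 2: T0 uses 96 bits, T1 and T2 use 64, T3 uses 128.
t0_cycles = cclk_cycles(7, 96)    # 7 words at 3 per cycle
t3_cycles = cclk_cycles(12, 128)  # 12 words at 4 per cycle
```

This matches the figure: the 96-bit bus moves three groups per cycle, the 64-bit buses two, and the 128-bit bus four.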
It can therefore be seen that components in the memory hub 140 outputting data on respective buses having different widths can interface with the data organization unit 160. [012] As proposed, after the groups of data for transactions T0-T3 have been clocked into the data organization unit 160, they are re-organized into respective packets. The packets are clocked out of the data organization unit in parallel, and then coupled to a parallel-to-serial converter 174, which then outputs the packet in up to 8 32-bit groups of data D0-D7. In the embodiment shown in Figure 2, the data are clocked out of the parallel-to-serial converter 174 by a high-speed system clock SCLK signal. The quantity of data transmitted from the data organization unit 160 will depend on the relative frequency between the core clock signal and the system clock signal as well as the width of the bus 134. The system may be designed so that various internal buses of various widths may be coupled to the data organization unit 160. As a result, a memory hub 140 may be designed with a core clock frequency dictated by advances in technology or specific characteristics of a system, and the system clock frequency may be dictated by its own unique design constraints. In the embodiment shown in Figure 2, the system clock signal has a frequency of eight times the frequency of the core clock. [013] Each packet includes a 32-bit command header followed by the 32-bit groups of data in the transaction. The 32-bit groups, known as "lanes," are clocked out of the data organization unit 160 in parallel. The groups of lanes for each of the transactions T0-T3 are also shown in Figure 2. The number of lanes of data clocked out of the parallel-to-serial converter 174 for each period of the system clock signal will depend on the width of the high-speed bus system 134 (in this example, 32 bits).
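The relationship between the two clocks and the lane-group size described above can be illustrated with a small calculation. This sketch is the editor's illustration and assumes, as in the Figure 2 example, that one 32-bit lane is serialized per SCLK period on a 32-bit bus:

```python
# Sketch: how many 32-bit lanes can be serialized per core-clock cycle.
def lanes_per_cclk(sclk_hz: float, cclk_hz: float) -> int:
    # One lane leaves the parallel-to-serial converter per SCLK period,
    # so the ratio of the two clock frequencies sets the lane-group size.
    return int(sclk_hz // cclk_hz)

# Figure 2 example: the system clock runs at eight times the core clock,
# giving lane groups of 8 lanes (one command header plus up to 7 data words).
print(lanes_per_cclk(8e9, 1e9))  # 8
```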
[014] Although the use of separate downstream and upstream buses and memory packets organized as explained with reference to Figure 2 would be instrumental in increasing the data bandwidth to and from the memory modules 130, the data bandwidth would still sometimes be less than optimal because the size of a packet for a transaction may be less than the capacity of the high-speed bus system 134, particularly since the quantity of data in each packet may vary. With further reference to Figure 2, the 32-bit groups of data for each transaction are arranged in packets. As explained above, the 32-bit command header CH is inserted before the first 32-bit group of data for each transaction. Since transaction T0 consists of 7 32-bit groups of data D0-D6, the command header CH plus the data in transaction T0 occupies all 8 lanes of a first lane group 175. As a result, all 8 lanes of the high-speed bus system 134 would be used. Similarly, since transaction T1 also consists of 7 32-bit groups of data D0-D6, all 8 lanes of a second lane group 176 would be occupied. Consequently, all 8 lanes of the high-speed bus system 134 would again be filled. However, since transaction T2 consists of only 5 32-bit groups of data D0-D4, only 6 lanes (the command header plus the 5 32-bit groups of data in transaction T2) of a third lane group 177 would be occupied. The 2 vacant lanes in the third lane group 177 would result in the high-speed bus system 134 not carrying any packet data during two periods of the high-speed system clock signal. [015] Transaction T3 consists of 12 32-bit groups of data D0-D11 so that the first 7 32-bit groups of data D0-D6 in transaction T3 (plus the 32-bit command header) would fill all 8 lanes of a fourth lane group 178. As a result, the high-speed bus system 134 would be fully occupied. However, the remaining 5 32-bit groups of data D7-D11 would occupy only 5 of 8 lanes of a fifth lane group 179.
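The idle-lane behavior described for Figure 2 can be modeled as follows. This is an illustrative sketch by the editor (the function name and list representation are assumptions): it counts the vacant lanes that result when every packet must start a new 8-lane group.

```python
# Sketch of the Figure 2 packing: each packet (header + data) starts a new
# 8-lane group, so short packets leave lanes idle on the high-speed bus.
def naive_pack(transaction_sizes, group=8):
    idle = 0
    for n in transaction_sizes:
        words = n + 1  # 1 command-header lane + n data lanes
        rem = words % group
        if rem:
            idle += group - rem  # vacant lanes padding out the last group
    return idle

# Transactions T0..T3 from Figure 2: 7, 7, 5, and 12 data words.
print(naive_pack([7, 7, 5, 12]))  # 5 idle bus periods (2 after T2, 3 after T3)
```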
Therefore, data would not be coupled through the high-speed bus system 134 for 3 periods of the system clock signal. As a result, the data bandwidth of the memory system may be significantly less than the data bandwidth that could be achieved if all 8 lanes of the high-speed bus system 134 were always filled. [016] Although the data organization method has been described with respect to a computer system having specific bus widths, groups of data having specific sizes, etc., it will be understood that the same or similar problems would exist for computer systems having other design parameters. [017] There is therefore a need for a system and method that organizes the data coupled to or from memory modules in a memory hub system in a manner that allows the full capacity of a high-speed memory bus system to be utilized. SUMMARY OF THE INVENTION [018] A memory hub for a memory module includes a system for organizing memory transactions transmitted by the memory module. The organizing system organizes the memory transactions into packets, each of which includes a command header and data, which may have a variable number of data bits. The organizing system organizes the command header and data into lane groups, each of which includes a plurality of lanes. Each of the lanes contains a plurality of parallel command header bits or parallel data bits. The organizing system organizes the lane groups so that all of the lanes in each lane group are filled with either command header bits or data bits. The organizing system is further operable to convert each of the lane groups into a serial stream of the lanes for transmission from the memory hub. Each of the transmitted lanes contains either a plurality of parallel command header bits or parallel data bits.
BRIEF DESCRIPTION OF THE DRAWINGS [019] Figure 1 is a block diagram of a computer system having a memory hub controller that is coupled to several memory modules having a memory hub architecture. [020] Figure 2 is a schematic diagram illustrating one approach that has been proposed for organizing data that is coupled to and from the memory modules shown in Figure 1. [021] Figure 3 is a schematic diagram illustrating one approach for organizing data for coupling to and from the memory modules shown in Figure 1 according to one example of the present invention. [022] Figure 4 is a block diagram of a memory hub that is capable of reorganizing data as shown in Figure 3, which may be used in the memory modules of Figure 1. [023] Figure 5 is a block diagram of a data organization system that can be used in a memory hub controller, the memory hub of Figure 4 or some other memory hub design. DETAILED DESCRIPTION OF THE INVENTION [024] Embodiments of the present invention are directed to a memory hub controller coupled to several memory hub modules through a high-speed downstream bus and a high-speed upstream bus. More particularly, embodiments of the present invention are directed to a system and method in which data are organized prior to being coupled to the downstream and upstream buses so that substantially all of the capacity of the buses is utilized. Certain details are set forth below to provide a sufficient understanding of various embodiments of the invention. However, it will be clear to one skilled in the art that the invention may be practiced without these particular details. In other instances, well-known circuits, control signals, and timing protocols have not been shown in detail in order to avoid unnecessarily obscuring the invention. [025] A method of forming packets for a memory hub system according to one example of the present invention will now be explained with reference to Figure 3.
As shown in Figure 3, several 32-bit groups of data from each of several memory accesses or "transactions" are identical to those shown in Figure 2 for purposes of illustrating the differences therebetween, except that a portion of an additional transaction T4 is shown in Figure 3. As before, transaction T0 consists of 7 32-bit groups of data D0-D6, transaction T1 also consists of 7 32-bit groups of data D0-D6, transaction T2 consists of 5 32-bit groups of data D0-D4, and transaction T3 consists of 12 32-bit groups of data D0-D11. [026] According to one example of the present invention, the groups of data for the transactions T0-T4 are clocked into a data organization unit 180 (explained with reference to Figure 5) responsive to the core clock signal, where they are re-organized into respective packets. In the example of Figure 3, each packet also includes a 32-bit command header CH followed by the 32-bit groups of data in the transaction. As before, the 32-bit groups or lanes are clocked out of the data organization unit 180 in parallel and then converted to serial data by a parallel-to-serial converter 182 responsive to the system clock signal. [027] Transactions T0 and T1, each of which consists of the command header plus 7 32-bit groups of data D0-D6, occupy all 8 lanes of the first lane group 190 and the second lane group 192, respectively, in the same manner as explained above with reference to Figure 2. Similarly, transaction T2 again consists of only 5 32-bit groups of data D0-D4 so that only 6 lanes (the command header plus the 5 32-bit groups of data in transaction T2) of a third lane group 194 are filled. However, the 2 subsequent lanes of the third lane group 194 that were left vacant in the example of Figure 2 are instead filled by the command header CH and the first 32-bit group of data D0 for transaction T3.
As a result, a full lane of data is coupled through the high-speed bus system 134 during respective periods of the system clock signal. [028] With further reference to Figure 3, the next 8 groups of data D1-D8 of transaction T3 are used to fill all 8 lanes of a fourth lane group 196 so that the high-speed bus system 134 is fully utilized. The remaining 3 lanes carrying data D9-D11 for transaction T3 are placed in a fifth lane group 198. Significantly, however, the remaining 5 lanes in the fifth lane group 198 are filled with the 32-bit command header CH and the first 4 32-bit groups of data D0-D3 for the transaction T4. In like manner, the command header and data for a memory transaction always immediately follow the data from a prior transaction so that the high-speed bus system 134 is fully utilized. Therefore, assuming there is data from a memory transaction that is waiting to be coupled through the high-speed bus system 134, there are never any idle periods in the bus system 134. As a result, the data bandwidth of the memory system is maximized. [029] Another advantage of the data organization unit 180 of Figure 3 is that the number of lanes of data in each lane group 190-198 can be configured based on the frequency of the CCLK signal and the frequency of the system clock SCLK clocking data from the parallel-to-serial converter 182, as well as the width of the external bus 134 and possibly other factors. Therefore, a memory hub 140 may be designed with a CCLK frequency dictated by advances in technology or specific characteristics of a system, and the SCLK frequency may be dictated by its own design constraints, thus changing the ratio of the CCLK frequency to the SCLK frequency. Additionally, some memory hubs 140 may be designed with a wider bus 134 than others. However, the ability to vary the number of lane groups clocked out of the data organization unit 180 each CCLK cycle can accommodate these changes without changing circuitry within the memory hub 140.
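The back-to-back packing of Figure 3 can be sketched as a simple stream model. This is an illustration by the editor only; the labels `CHn` and `TnDm` are shorthand for command-header and data lanes and do not appear in the disclosure.

```python
# Sketch of the Figure 3 packing: headers and data are streamed back-to-back,
# so a new transaction's header immediately fills lanes that a short packet
# would otherwise leave vacant.
def packed_lane_groups(transaction_sizes, group=8):
    stream = []
    for t, n in enumerate(transaction_sizes):
        stream.append(f"CH{t}")                    # command-header lane
        stream += [f"T{t}D{i}" for i in range(n)]  # data lanes
    # Slice the continuous stream into full lane groups.
    return [stream[i:i + group] for i in range(0, len(stream), group)]

groups = packed_lane_groups([7, 7, 5, 12])
# T3's header and first data word fill the lanes T2 left vacant.
print(groups[2])  # ['CH2', 'T2D0', 'T2D1', 'T2D2', 'T2D3', 'T2D4', 'CH3', 'T3D0']
```

Only the final lane group can be partially filled, and only when no further transaction is waiting, which is exactly the condition under which the bus would be idle anyway.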
The data organization unit 180 can be programmed to output a specific number of lanes each CCLK cycle by suitable means, such as through an I/O port during initialization. [030] One example of a memory hub 200 that can organize data coupled to and from the memory devices 148 in the manner shown in Figure 3 is shown in Figure 4. The memory hub 200 includes a downstream receive interface 210, an upstream transmit interface 212, an upstream receive interface 214, and a downstream transmit interface 216. The downstream receive interface 210 is used to couple data into the memory module 130 from either the memory hub controller 128 (Figure 1) or an upstream memory module 130. The upstream transmit interface 212 is used to couple data from the memory module 130 to either the memory hub controller 128 or an upstream memory module 130. The upstream receive interface 214 is used to couple data into the memory module 130 from a downstream memory module 130. Finally, the downstream transmit interface 216 is used to couple data out of the memory module 130 to a downstream memory module 130. Significantly, the upstream transmit interface 212 includes a data organization system 220 that organizes a command header and data prior to their being coupled to a high-speed upstream bus 224. The structure and operation of one example of the data organization system 220 will be explained with reference to Figure 5. [031] The interfaces 210-216 are coupled to a switch 260 through a plurality of bus and signal lines, represented by buses 228. The buses 228 are conventional, and include a write data bus coupled to the receive interfaces 210, 214 and a read data bus coupled to the transmit interfaces 212, 216. [032] The switch 260 is coupled to four memory interfaces 270a-d which are, in turn, coupled to the memory devices 148 (Figure 1).
By providing a separate and independent memory interface 270a-d for each set of memory devices 148, the memory hub 200 avoids bus or memory bank conflicts that typically occur with single channel memory architectures. The switch 260 is coupled to each memory interface through a plurality of bus and signal lines, represented by buses 274. In addition to coupling the interfaces 210-216 to the memory interfaces, the switch 260 can also couple the interfaces 210-216 to each other to allow memory packets to be coupled downstream or upstream through the memory module 130 to either another memory module 130 or the memory hub controller 128. [033] In an embodiment of the present invention, each memory interface 270a-d is specially adapted to the memory devices 148 (Figure 1) to which it is coupled. More specifically, each memory interface 270a-d is specially adapted to provide and receive the specific signals received and generated, respectively, by the memory devices 148 to which it is coupled. Also, the memory interfaces 270a-d are capable of operating with memory devices 148 operating at different clock frequencies. As a result, the memory interfaces 270a-d isolate the processor 104 from changes that may occur at the interface between the memory hub 200 and memory devices 148 coupled to the memory hub 200, and they provide a more controlled environment to which the memory devices 148 may interface. [034] The switch 260 can be any of a variety of conventional or hereinafter developed switches. For example, the switch 260 may be a cross-bar switch or a set of multiplexers that do not provide the same level of connectivity as a cross-bar switch but nevertheless can couple the bus interfaces 210-216 to each of the memory interfaces 270a-d. The switch 260 may also include arbitration logic (not shown) to determine which memory accesses should receive priority over other memory accesses.
Bus arbitration performing this function is well known to one skilled in the art. [035] With further reference to Figure 4, each of the memory interfaces 270a-d includes a respective memory controller 280, a respective write buffer 282, and a respective cache memory unit 284. The memory controller 280 performs the same functions as a conventional memory controller by providing control, address and data signals to the memory devices 148 to which it is coupled and receiving data signals from the memory devices 148 to which it is coupled. However, the nature of the signals sent and received by the memory controller 280 will correspond to the nature of the signals that the memory devices 148 are adapted to send and receive. The cache memory unit 284 includes the normal components of a cache memory, including a tag memory, a data memory, a comparator, and the like, as is well known in the art. The memory devices used in the write buffer 282 and the cache memory unit 284 may be either DRAM devices, static random access memory ("SRAM") devices, other types of memory devices, or a combination of all three. Furthermore, any or all of these memory devices as well as the other components used in the cache memory unit 284 may be either embedded or stand-alone devices. [036] The write buffer 282 in each memory interface 270a-d is used to store write requests while a read request is being serviced. In such a system, the processor 104 can issue a write request to a system memory device even if the memory device 148 to which the write request is directed is busy servicing a prior write or read request. The write buffer 282 preferably accumulates several write requests received from the switch 260, which may be interspersed with read requests, and subsequently applies them to each of the memory devices 148 in sequence without any intervening read requests.
By pipelining the write requests in this manner, they can be more efficiently processed since delays inherent in read/write turnarounds are avoided. The ability to buffer write requests to allow a read request to be serviced can also greatly reduce memory read latency since read requests can be given first priority regardless of their chronological order. [037] The use of the cache memory unit 284 in each memory interface 270a-d allows the processor 104 to receive data responsive to a read command directed to respective memory devices 148 without waiting for the memory devices 148 to provide such data in the event that the data was recently read from or written to those memory devices 148. The cache memory unit 284 thus reduces the read latency of the memory devices 148 to maximize the memory bandwidth of the computer system. Similarly, the processor 104 can store write data in the cache memory unit 284 and then perform other functions while the memory controller 280 in the same memory interface 270a-d transfers the write data from the cache memory unit 284 to the memory devices 148 to which it is coupled. [038] Further included in the memory hub 200 may be a self-test module 290 coupled to the switch 260 through a test bus 292. The self-test module 290 is further coupled to a maintenance bus 296, such as a System Management Bus (SMBus) or a maintenance bus according to the Joint Test Action Group (JTAG) and IEEE 1149.1 standards. Both the SMBus and JTAG standards are well known by those ordinarily skilled in the art. Generally, the maintenance bus 296 provides a user access to the self-test module 290 in order to set memory testing parameters and receive test results. For example, the user can couple a separate PC host via the maintenance bus 296 to set the relative timing between signals that are applied to the memory devices 148.
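The write-buffering behavior described above can be sketched as follows. This is a simplified illustrative model by the editor (the class and method names are assumptions, not from the disclosure): reads are serviced immediately, while writes accumulate in a buffer and are later drained in sequence, avoiding read/write turnarounds.

```python
# Sketch: a write buffer that defers writes so reads can be serviced first,
# then drains the accumulated writes in one burst without intervening reads.
from collections import deque

class MemoryInterface:
    def __init__(self):
        self.write_buffer = deque()
        self.log = []  # order in which requests actually reach the memory devices

    def request(self, op, addr, data=None):
        if op == "write":
            self.write_buffer.append((addr, data))  # buffered, not yet applied
        else:
            self.log.append(("read", addr))         # reads get first priority

    def drain_writes(self):
        while self.write_buffer:
            addr, data = self.write_buffer.popleft()
            self.log.append(("write", addr))

mi = MemoryInterface()
mi.request("write", 0x10, 1)
mi.request("read", 0x20)
mi.request("write", 0x30, 2)
mi.drain_writes()
print(mi.log)  # the read is serviced first, then the buffered writes in sequence
```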
Similarly, data indicative of the relative timing between signals that are received from the memory devices 148 can be coupled to the PC host via the maintenance bus 296. [039] Further included in the memory hub 200 may be a DMA engine 286 coupled to the switch 260 through a bus 288. The DMA engine 286 enables the memory hub 200 to move blocks of data from one location in one of the memory devices 148 to another location in the memory device without intervention from the processor 104. The bus 288 includes a plurality of conventional bus lines and signal lines, such as address, control, data buses, and the like, for handling data transfers in the system memory. Conventional DMA operations well known by those ordinarily skilled in the art can be implemented by the DMA engine 286. [040] The memory modules 130 are shown coupled to the memory hub controller 128 in a point-to-point coupling arrangement in which each portion of the high-speed buses 132, 134 is coupled only between two points. However, it will be understood that other topologies may also be used. For example, it may be possible to use a multi-drop arrangement in which a single downstream bus (not shown) and a single upstream bus (not shown) are coupled to all of the memory modules 130. A switching topology may also be used in which the memory hub controller 128 is selectively coupled to each of the memory modules 130 through a switch (not shown). Other topologies that may be used will be apparent to one skilled in the art. [041] One embodiment of the data organization system 220 used in the memory hub 200 of Figure 4 is shown in Figure 5. The data organization system 220 can also be used in the memory hub controller 128 to couple data to the high-speed downstream bus 222.
The portions of the receive interfaces 210, 214 (Figure 4) and a receive interface in the memory hub controller 128 that capture the memory packets from the high-speed buses 132, 134 are relatively straightforward, and the design of a suitable system is well within the ability of one skilled in the art. [042] The data organization system 220 includes a data buffer 230 that receives the 32-bit groups of data that are to be coupled through the high-speed buses 132, 134. In the case of the data organization system 220 in the memory hub controller 128, the source of the data may be the processor 104 (Figure 1) or any other memory access device. In the case of the memory modules 130, the data may originate from the memory devices 148 in the memory modules 130 or from another memory module 130. In any case, the groups of data are clocked into the data buffer 230 responsive to the core clock signal, as indicated schematically in Figure 5. As also schematically shown in Figure 5, the data stored in the data buffer 230 for different transactions are of different lengths. [043] Also included in the data organization system 220 is a command queue 234, which is a small buffer that stores the command headers for the memory packets. The command queue 234, which is also clocked by the core clock signal, interfaces with a number of other components that provide the information for the command headers, but these components have been omitted from Figure 5 in the interests of brevity and clarity. [044] Data stored in the data buffer 230 and the command headers stored in the command queue 234 are coupled to a multiplexer 236, which is controlled by an arbitration unit 238. The multiplexer 236 selects the data for one of the transactions stored in the data buffer 230 and selects the corresponding command header from the command queue 234.
The arbitration unit 238 can cause the multiplexer 236 to select the data and command header for the transaction based on a variety of algorithms. For example, the arbitration unit 238 may give priority to transactions that comprise responses from downstream memory modules 130 and thereby transmit such transactions upstream on the bus 224 (Figure 4) prior to transmitting local transactions from memory devices 148 in the memory module 130. Conversely, the arbitration unit 238 may give priority to transactions comprising local responses. Alternatively, the arbitration unit 238 may alternately transmit local transactions and downstream or upstream transactions. Most simply, the arbitration unit 238 could transmit transactions in the order that they are received by the memory hub 140. Although the arbitration unit 238 in each memory hub 140 preferably operates in the same manner, in alternative embodiments the arbitration units in different memory hubs 140 may operate differently. Other variations in the operation of the arbitration unit 238 and logic circuitry for implementing the arbitration unit will be apparent to one skilled in the art. [045] Significantly, regardless of the order in which the arbitration unit 238 selects the transactions, the arbitration unit causes the multiplexer 236 to organize the command header and data for the selected transaction so that all lanes of a lane group 240 at the output of the multiplexer 236 are filled. The lane group 240 is then coupled to a parallel-to-serial converter 244, which may be, for example, a series of shift registers that are loaded in parallel. The data are then clocked out of the parallel-to-serial converter 244 by the system clock signal and passed to one of the high-speed buses 222, 224, as explained above with reference to Figure 3. By filling all of the lanes in each lane group 240, the entire data bandwidth of the high-speed buses 222, 224 is utilized.
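The arbitration alternatives listed above can be sketched as selectable policies. This is an illustrative model by the editor only (the policy names and tuple representation are assumptions): downstream responses first, local responses first, or simple arrival order.

```python
# Sketch of the arbitration policies the text mentions: prioritize downstream
# responses, prioritize local responses, or transmit in arrival order.
def arbitrate(pending, policy="downstream_first"):
    # pending: list of (source, transaction_id) tuples in arrival order
    if policy == "downstream_first":
        return sorted(pending, key=lambda t: t[0] != "downstream")
    if policy == "local_first":
        return sorted(pending, key=lambda t: t[0] != "local")
    return list(pending)  # "fifo": the order requests were received

pending = [("local", 1), ("downstream", 2), ("local", 3)]
print(arbitrate(pending)[0])  # ('downstream', 2)
```

Python's sort is stable, so within each priority class the arrival order is preserved, matching an arbitration scheme that never reorders requests from the same source.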
[046] From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
Circuitry in a portable device may be attached to an external device, such as a power supply, to receive a voltage at a desired voltage level from the external device. The circuitry may assert one of several electrical configurations on the cabling that electrically connects the portable device to the external device to indicate to the external device a desired voltage level.
CLAIMS 1. A circuit comprising: a power bus for attachment to an external device; a plurality of signal lines for attachment to the external device; a plurality of first circuits to sense an electrical configuration on the signal lines; and a plurality of second circuits to assert an electrical configuration on the signal lines, wherein one of the second circuits asserts an electrical configuration on the signal lines from among a plurality of electrical configurations when one of the first circuits senses a predetermined electrical configuration on the signal lines, whereby a voltage from an external device attached to the circuit is asserted on the power bus at a voltage level that corresponds to the electrical configuration that is asserted on the signal lines by said one of the second circuits. 2. The circuit of claim 1 wherein the plurality of electrical configurations comprises at least a first electrical configuration that is associated with a first voltage level and a second electrical configuration that is associated with a second voltage level, wherein the voltage on the power bus is at the first voltage level in response to the first electrical configuration being asserted on the signal lines and the voltage on the power bus is at the second voltage level in response to the second electrical configuration being asserted on the signal lines. 3. The circuit of claim 1 wherein the circuit is compliant with the Universal Serial Bus (USB) Specification, wherein the power bus is VBUS and the plurality of signal lines comprises a D- signal line and a D+ signal line. 4. The circuit of claim 1 wherein the circuit operates in conformance to the USB Battery Charging Specification. 5. The circuit of claim 1 further comprising charging circuitry connected to the power bus and having connectors for connection to a battery, whereby a battery connected to the charging circuitry can be charged by the voltage on the power bus. 6. 
The circuit of claim 1 further comprising a connector connected to the power bus, wherein a load connected to the connector can receive power from the power bus. 7. The circuit of claim 1 wherein the external device is an AC adapter. 8. The circuit of claim 1 wherein the external device is an electronic device. 9. The circuit of claim 1 wherein the external device has a selectable output voltage. 10. The circuit of claim 9 wherein the selectable output voltage depends on an electrical configuration asserted on the signal lines. 11. A method in a circuit comprising: detecting an attachment to an external device, wherein a voltage from the external device is asserted on a power bus of the circuit; determining if the external device is of a first kind; and establishing a voltage level on the power bus by asserting an electrical configuration on signal lines connected to the external device when the external device is of the first kind including asserting at least a first electrical configuration on the signal lines to receive a first voltage level from the external device or a second electrical configuration on the signal lines to receive a second voltage level from the external device. 12. The method of claim 11 further comprising asserting, at a time subsequent to asserting the first electrical configuration or the second electrical configuration, a third electrical configuration to receive a third voltage level from the external device. 13. The method of claim 11 further comprising operating the circuit in accordance with the USB Battery Charging specification, wherein the power bus is VBUS and the signal lines are a D- signal line and a D+ signal line. 14. The method of claim 11 wherein determining if the external device is of a first kind includes performing steps in accordance with the USB Battery Charging Specification. 15. 
The method of claim 14 wherein determining if the external device is of a first kind includes determining if the external device is a Standard Downstream Port (SDP), a Dedicated Charging Port (DCP), or a Charging Downstream Port (CDP). 16. The method of claim 11 wherein determining if the external device is of a first kind includes asserting a third electrical configuration on the signal lines and sensing a predetermined electrical configuration on the signal lines. 17. The method of claim 11 wherein asserting an electrical configuration on the signal lines includes one or more of connecting a signal line to a voltage potential, or connecting a signal line to a current source, or connecting a signal line to another signal line. 18. The method of claim 11 further comprising charging a battery connected to the circuit using the voltage asserted on the power bus. 19. The method of claim 11 further comprising providing the voltage asserted on the power bus to a load connected to the circuit. 20. A method in a circuit comprising a connection to a power bus and a connection to a plurality of signal lines, the method comprising: detecting attachment of the circuit to an external device; detecting that the device is a DCP as defined by the USB Battery Charging Specification, including sensing a first electrical configuration on the signal lines; asserting a second electrical configuration on the signal lines and detecting a third electrical configuration on the signal lines in response thereto; asserting a fourth electrical configuration on the signal lines and in response thereto receiving voltage on the power bus at a first voltage level, wherein the fourth electrical configuration is selected from among a plurality of predetermined electrical configurations, each predetermined electrical configuration having associated therewith a voltage level that can be asserted on the power bus. 21. The method of claim 20 wherein the external device is a power supply.
HIGH VOLTAGE DEDICATED CHARGING PORT CROSS REFERENCE TO RELATED APPLICATIONS [0001] The present disclosure claims priority to U.S. App. No. 13/759,865 filed February 5, 2013, which also claims priority to U.S. Provisional App. No. 61/719,822 filed October 29, 2012, the contents of both of which are incorporated herein by reference in their entireties for all purposes. BACKGROUND [0002] Unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section. [0003] Power requirements for modern portable electronics are increasing very rapidly; e.g., devices having larger displays, LTE devices (radios, modems, etc.), multi-core processors, and so on. To maintain acceptable up times, such devices utilize batteries with higher capacity. In such systems, battery charging times tend to be very long when conventional power sources are used. The reasons include: (1) limited power capability (USB 5V/1.8A max); and (2) voltage headroom issues between the input power source and the battery. Furthermore, many readily available power sources (e.g., monitors, notebooks, etc.) cannot be utilized because of their high-voltage operation vs. what the portable device can tolerate. Implementing a solution that requires the use of a secondary portable device connector significantly increases solution and consumer cost (proprietary connector, wall adapter, etc.). [0004] With battery capacities increasing, a 5V input voltage does not provide enough voltage headroom to achieve sufficiently high charge currents due to cable, connector, PCB, and charger impedances. Many batteries now have a float voltage of 4.35V, which makes this issue worse, especially since the trend is toward the use of higher voltages. For example, a 2S stack provides about 8.4V or 8.7V, thus requiring a voltage higher than 5V to charge efficiently.
SUMMARY [0005] A circuit for charging a battery from an external device may include a detection circuit to detect an electrical configuration of the signal lines that comprise a cable for connecting the circuit to the external device. A configuration circuit may assert one of several electrical configurations on the signal lines in response to the detection circuit. In response, the external device may supply a voltage on a power line at a voltage level corresponding to the electrical configuration asserted on the signal lines. [0006] In some embodiments, the circuit operates in accordance with the USB Battery Charging Specification. The power line may be VBUS and the signal lines may be the D+ and D- lines as set forth in the USB Specification. The circuit can be backward compatible with industry standards, allowing for existing standardized connectors and cabling, while at the same time allowing for a greater range of operational voltages beyond the standard 5 V operating level of the USB specification. [0007] The following detailed description and accompanying drawings provide a better understanding of the nature and advantages of the present disclosure. BRIEF DESCRIPTION OF THE DRAWINGS [0008] Fig. 1 is a high level generic block diagram of circuitry according to the present disclosure. [0009] Fig. 2 is a high level functional flow chart of processing in accordance with the present disclosure. [0010] Fig. 3 shows an illustrative embodiment based on the USB Specification. [0011] Fig. 4 illustrates an example of an external device. [0012] Fig. 5 shows a functional flow chart of the processing in the portable device shown in Fig. 3. [0013] Fig. 6 shows a functional flow chart of the processing in the external device shown in Fig. 3. [0014] Fig. 7 shows voltage levels according to the USB Battery Charging Specification. [0015] Fig. 8 is a summary of system operation according to the present disclosure.
DETAILED DESCRIPTION [0016] In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be evident, however, to one skilled in the art that the present disclosure as expressed in the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein. [0017] Fig. 1 shows a circuit 100 in accordance with embodiments of the present disclosure. The circuit 100 may be included in a portable device 10 such as a smartphone, computer tablet, and so on. The portable device 10 may include a battery 12 to power the portable device. In some embodiments, the battery 12 may be a rechargeable battery that the circuit 100 may charge. The battery 12 may be a single cell configuration, or may be a multi-cell stack. [0018] The portable device 10 may be connected to an external device 14. In some embodiments, the external device 14 may be an alternating current (AC) adapter such as a wall adapter. In other embodiments, the external device 14 may be an electronic device that can supply power to the portable device. For example, the external device 14 may be a laptop computer that supplies power from its own battery pack or by virtue of being connected to an AC supply. [0019] The portable device 10 and external device 14 may have respective connectors 22 and 24. A cable 26 may electrically connect the portable device 10 and the external device 14. [0020] In some embodiments, the circuit 100 may include charging circuitry 102, detection circuitry 104, control circuitry 106, and configuration circuitry 108. The circuit 100 may include a power bus 114 for electrical connection to a power line in the cable 26.
The circuit 100 may further include a signal bus 112 comprising a plurality of signal bus lines for electrical connection to signal lines in the cable 26. The number of signal bus lines comprising the signal bus 112 may vary from one embodiment to another. For example, a design based on the USB Specification defines two signal bus lines, D+ and D-, while another design may employ more than two signal bus lines. [0021] The charging circuitry 102 may be connected to the power bus 114 to transfer power from a voltage supplied by the external device 14 to charge the battery 12. The charging circuitry 102 may be of any known design, such as a switching charger design, for instance. [0022] The detection circuitry 104 may be connected to the signal bus 112 to detect various electrical configurations on the signal bus lines comprising the signal bus. The external device 14 may assert an electrical configuration on the signal lines of the cable 26 that the detection circuitry 104 may detect on the signal bus 112. In some embodiments, the detection circuitry 104 may comprise voltage comparators, current sensors, and the like to detect an electrical configuration on the signal bus 112. [0023] An electrical configuration asserted on the signal bus lines of the signal bus 112 may be a voltage level (including ground potential) asserted on one or more signal bus lines, or multiple voltage levels asserted on several signal bus lines. An electrical configuration may also be one or more currents flowing respectively in one or more of the signal bus lines. In some embodiments, an electrical configuration may be asserted by connecting one or more of the signal bus lines to a resistor (or other passive device such as a capacitor or inductor), or connecting together one or more of the signal bus lines. In some embodiments, an electrical configuration may be asserted using a combination of voltages, current flows, and/or resistors (or other passive devices).
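A minimal software model of the electrical configurations described in paragraph [0023] might represent each signal bus line's state as a voltage, a current, a passive termination, or a short to another line. All type and field names here are illustrative assumptions, not part of the disclosure:

```python
# Illustrative model (names assumed, not from the disclosure) of an
# "electrical configuration": each signal bus line may be driven to a voltage,
# carry a current, be terminated by a passive device, or be tied to another line.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class LineState:
    voltage: Optional[float] = None       # driven voltage level, in volts
    current_ma: Optional[float] = None    # sourced/sunk current, in mA
    resistor_ohm: Optional[float] = None  # passive pull termination
    shorted_to: Optional[str] = None      # name of another line this one is tied to

@dataclass(frozen=True)
class ElectricalConfig:
    d_plus: LineState
    d_minus: LineState

# Example: D+ driven to 0.6 V while D- is shorted to D+
cfg = ElectricalConfig(LineState(voltage=0.6), LineState(shorted_to="D+"))
```

A model like this makes the later detection steps easy to describe as comparisons against the fields of such a configuration.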
[0024] As mentioned above, an electrical configuration may be asserted on the signal bus lines of the signal bus 112 by an external device 14 electrically connected to the signal bus via cable 26. Similarly, an electrical configuration may be asserted on the signal bus lines by the configuration circuitry 108. In some embodiments, for example, the configuration circuitry 108 may include voltage sources, current sources, switches (e.g., MOS switches), passive devices (e.g., a resistor), and the like to assert some combination of voltage levels and/or current levels on one or more of the signal bus lines that comprise the signal bus 112. [0025] The control circuitry 106 may be connected to receive one or more signals 104a from the detection circuitry 104. The signals 104a may be indicative of a detected electrical configuration asserted on the signal bus 112 by the external device 14. The control circuitry 106 may be connected to provide one or more control signals 106a to the configuration circuitry 108 in order to assert a particular electrical configuration on the signal bus 112. [0026] The portable device 10 may further comprise device electronics (load) 101. For example, if the portable device 10 is a computer tablet, the device electronics 101 may comprise components such as a processor, memory, display, etc. The device electronics 101 may be connected to the power bus 114 via connector 114a to draw power received by the circuit 100. [0027] The external device 14 may include a voltage selector 122 and a power section 124, in addition to other electronic circuitry (not shown) comprising the external device. For example, the external device 14 may be a laptop computer, or the external device may be a power supply (e.g., an AC adapter), etc. The power section 124 may provide a voltage at one of several selectable voltage levels that can be delivered to the portable device 10 via cable 26.
For example, the external device 14 may include a power bus 134 that is connected to the power line in the cable 26. The voltage selector 122 may connect the voltage produced by the power section 124 to the power bus 134. In some embodiments, the voltage selector 122 may be connected to a signal bus 132 comprising a plurality of signal bus lines, which may be electrically connected to signal bus 112 via cable 26. As will be explained in more detail below, the voltage selector 122 may detect or sense an electrical configuration on the signal bus 132 and control or otherwise signal the power section 124 to output a voltage level that corresponds to the detected electrical configuration. The voltage selector 122 may comprise digital logic, analog circuitry, or a combination of digital and analog components to detect or sense the electrical configuration on the signal bus 132. [0028] Fig. 2 illustrates an operation of the circuit 100 in conjunction with an external device according to principles of the present disclosure. At block 202, the circuit 100 may detect an attachment to an external device (e.g., 14, Fig. 1). For example, the circuit 100 may include circuitry (not shown) to detect the presence of a voltage on the power bus 114 that is provided by the external device 14. [0029] At block 204, the circuit 100 may determine what kind of external device is attached to the circuit. For example, the external device 14 may be a conventional power supply that supplies a single output voltage. In accordance with the present disclosure, the circuit may be attached to an external device that is capable of supplying a voltage at any one of several selectable voltage levels. [0030] In some embodiments, the external device 14 may assert an electrical configuration on the signal bus 132 to indicate what kind of device it is. Merely to illustrate, suppose the signal bus 132 comprises two signal bus lines.
An electrical configuration on the two signal bus lines may be asserted by the external device 14 (e.g., using voltage selector 122) by connecting a resistor between two of the signal bus lines and applying a predetermined direct current (DC) voltage level on the other signal bus line. Another electrical configuration might involve applying two different DC voltage levels on each of the signal bus lines, and so on. [0031] The detection circuitry 104 may sense the particular electrical configuration asserted by the external device by sensing the signal bus lines comprising the signal bus 112. Based on the electrical configuration sensed by the detection circuitry 104, signal(s) 104a may be provided to the control circuitry 106 to indicate the kind of external device that is attached to the circuit 100. In accordance with the present disclosure, if at block 206, the electrical configuration sensed at block 204 indicates that the external device 14 is of a first kind (e.g., has selectable voltage levels), then additional processing may be performed, as described below. If the external device 14 is not of the first kind, then the circuit 100 may operate under the assumption that it is attached to an external device that is capable of outputting a single voltage level, and at block 208 receive the voltage from the external device. Accordingly, at block 208, the voltage received by the circuit 100 may then be used to charge a battery (e.g., 12, Fig. 1) or provide power to a load (e.g., 101). [0032] If, at block 206, the external device 14 is determined to be of the first kind where the external device supports multiple selectable output voltage levels, then in accordance with principles of the present disclosure, the circuit 100 at block 210 may use the configuration circuitry 108 to assert an electrical configuration on the signal bus 112 from among several predefined electrical configurations.
In some embodiments, for example, the circuit 100 may support different kinds of battery 12, having different voltage levels for proper battery charging. For instance, some batteries may be charged with 5 volts, other batteries may require 9 volts, 12 volts, 20 volts, and so on. Likewise, different types of loads 101 may operate at different voltage levels. Accordingly, the control circuitry 106 may generate signals 106a to operate the configuration circuitry 108 to assert an electrical configuration on the signal bus 112 that corresponds to a specified voltage level. [0033] Each predefined electrical configuration may be associated with a predefined voltage level. Merely to illustrate this point, consider the following example. Suppose the signal bus 112 comprises two signal bus lines. A first electrical configuration that may be asserted on the signal bus lines may include asserting 1.5V on one line and 3V on the other line. This configuration may be associated with a voltage level of, say, 10V. A second electrical configuration might be to short the first and second signal bus lines, and this configuration may be associated with a voltage level of, say, 15V, and so on. [0034] If the circuit 100 requires 10V, then the configuration circuitry 108 may assert the first electrical configuration on the signal bus 112. Likewise, if the circuit 100 requires 15V, then the configuration circuitry 108 may assert the second electrical configuration on the signal bus 112, and so on. In accordance with principles of the present disclosure, the circuit 100 may specify to the external device 14 what voltage level to output, when the external device can support multiple outputs, by asserting a suitable electrical configuration on the signal bus lines that the external device may detect. These voltage levels, of course, are merely illustrative; specific voltage levels will depend on the implementation, adherence to industry specifications, and so on.
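The association between predefined electrical configurations and voltage levels in paragraphs [0033]-[0034] can be sketched as a simple lookup. The two configurations and the 10V/15V levels below are the illustrative ones from the text; their tuple encoding is an assumption for the sketch:

```python
# Sketch of paragraphs [0033]-[0034]: each predefined electrical configuration
# maps to one requestable voltage level. Configurations are encoded here as a
# (line 1 state, line 2 state) tuple -- an assumed representation.

CONFIG_TO_VOLTAGE = {
    ("1.5V", "3V"): 10.0,      # first configuration: 1.5 V on one line, 3 V on the other
    ("short", "short"): 15.0,  # second configuration: the two lines shorted together
}

def config_for_voltage(desired_v):
    """Return the electrical configuration the circuit would assert to
    request desired_v from the external device, or None if unsupported."""
    for config, volts in CONFIG_TO_VOLTAGE.items():
        if volts == desired_v:
            return config
    return None

print(config_for_voltage(10.0))  # ('1.5V', '3V')
```

The control circuitry 106 plays the role of `config_for_voltage` here, and the configuration circuitry 108 physically asserts the returned configuration.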
[0035] In some embodiments, the electrical configuration asserted on the signal bus 112 may be detected by the external device 14 at block 210a, and in response, the external device may reconfigure itself to output a voltage level that corresponds to the detected electrical configuration. At block 212, the circuit 100 may receive a voltage from the external device 14 at the specified voltage level. For example, the circuit 100 may use the received voltage to charge a battery (e.g., 12, Fig. 1) or to provide power to a load (e.g., 101, Fig. 1). [0036] A specific embodiment according to principles of the present disclosure may be incorporated in the Universal Serial Bus (USB) interface (e.g., USB Specification, Revision 2.0) as depicted in Fig. 3. More particularly, the embodiment depicted in Fig. 3 may include an embodiment of circuit 100 that is based on the USB Battery Charging Specification, Revision 1.2 (BC1.2). A large majority of devices conform to BC1.2, and so this embodiment may have desirable benefits in terms of manufacturing and installed user base. Accordingly, in some embodiments, circuit 100 may operate in conformance with BC1.2, thus providing for devices that are compatible with existing devices, are easy to manufacture (since most of the circuitry has already been designed), and offer the benefits of the present disclosure. [0037] A portable device 302 may attach to an external device 304. The portable device 302 may be any electronic device that incorporates a USB interface; e.g., mobile communication device, digital camera, computer tablet, etc. Likewise, the external device 304 may be any electronic device that incorporates a USB interface and can provide power to the portable device 302, including power supplies, battery chargers, other electronic devices such as a computer, and so on. [0038] A cable (e.g., cable 26, Fig.
1) that mechanically and electrically connects the portable device 302 and the external device 304 may comprise four wires including a power line called VBUS, signal bus lines D+ and D-, and a ground line. These four wires are found in standard USB A and USB B plugs (e.g., connectors 22 and 24, Fig. 1). Accordingly, VBUS constitutes an example of power buses 114 and 134 shown in Fig. 1. The D+ and D- lines represent an example of signal lines comprising signal buses 112 and 132 shown in Fig. 1. [0039] In some embodiments, the portable device 302 may include a comparator to compare a voltage asserted on VBUS with a voltage level VOTG_SESSN_VLD. The comparator may be used to determine that an attachment to external device 304 has been made; e.g., when the voltage level on VBUS exceeds VOTG_SESSN_VLD. [0040] The portable device 302 may include detection circuitry 312a, 312b, which produce respective signals DCH_DET and CHG_DET. As explained above in connection with the detection circuitry 104 shown in Fig. 1, the detection circuitry 312a, 312b in Fig. 3 may detect different electrical configurations on the D+ and D- lines, as will be described in more detail below. [0041] Configuration circuitry 322a may include voltage sources VDP_SRC, VDP_UP & resistor RDP_UP, VLGC_HI & current source IDP_SRC, and IDP_SINK, and their respective switches for selective connection to the D+ line. Additional configuration circuitry 322b may also include VDM_UP, VDM_SRC, RDM_DWN, and IDM_SINK, and their respective switches for selective connection to the D- line. As explained above in connection with the configuration circuitry 108 shown in Fig. 1, the configuration circuitry 322a, 322b in Fig. 3 may assert different electrical configurations on the D+ and D- lines, as will be described in more detail below. [0042] In accordance with the present disclosure, the external device 304 may include a power supply 314 having an output voltage with selectable voltage levels.
For example, the selectable voltage levels may be 5V, 9V, 12V, and 20V. Of course, fewer or more levels may be provided, different levels may be output, and so on. The external device 304 may further include comparators 324a, 324b, 324c, and 324d for detecting voltage levels and current flows (e.g., through resistors RDAT_LKG and RDM_DWN) on the D+ and D- lines. The voltage levels and current flows define different electrical configurations that can be asserted on the D+ and D- lines by the portable device 302. The reference levels shown in Fig. 3 use 1 V voltage levels, but it will be appreciated that in other embodiments, the reference levels may be at other voltage levels. [0043] As will be explained below, the external device 304 may also assert different electrical configurations on the D+ and D- lines using the resistors RDAT_LKG and RDM_DWN. In some embodiments, a glitch filter 334 may be provided to avoid false positive detections due to noise on the D+ line. [0044] An illustrative example of an external device 304 (Fig. 3) is the power supply 400 (e.g., wall adapter), shown in Fig. 4, that can provide 9V, 12V, and 20V voltage levels, in addition to the 5V that is conventionally provided on VBUS. A transformer may be used to electrically isolate the high-power primary side 404 from the low-power secondary side 402, which interfaces with the external environment. The secondary side 402 may include an interface IC having connections for the D+ and D- lines. The interface IC may include detection circuitry such as comparators 324a-324d shown in Fig. 3, for example. In some embodiments, the interface IC may be integrated into the AC/DC control IC. The primary side 404 may provide a selectable output voltage level on VBUS. For example, the primary side 404 may include a power section 412 that is coupled to the secondary side 402. In the particular example shown in Fig.
4, an optical coupling 414 comprising a transmitting LED on the secondary side 402 may transmit optical signals to a receiving LED on the power section 412 to control the output of the power section. [0045] The interface IC may include circuitry and logic (not shown) that can detect and decode a particular electrical configuration asserted on the D+ and D- lines. The 9V, 12V, and 20V switches may be activated to control, via a resistor network 402a, the optical signal that is produced by the transmitting LED; e.g., by controlling the frequency of the optical signal. The optical signal may then be received by the receiving LED and sensed by a controller in the power section 412. The controller may generate a voltage on VBUS having a voltage level based on the optical signal sensed by the receiving LED. It will be appreciated, of course, that the use of resistor network 402a and optical LEDs is simply illustrative and that in other embodiments, the secondary side 402 may communicate with the primary side 404 using any known signaling technique other than optical signaling; e.g., a digital signal may be sent from the secondary side to the primary side. [0046] It will be appreciated that the external device 304 need not be a power supply per se, but may be any electronic device that is configured to provide multiple output voltage levels. For example, in some embodiments, the external device 304 may be a laptop computer that incorporates a voltage selector and includes a power source having selectable output voltage levels. [0047] Fig. 5 illustrates processing in accordance with the present disclosure, when the portable device 302 (Fig. 3) attaches to an external device. As explained above, in some embodiments, the portable device 302 may operate in accordance with BC1.2, in which the portable device 302 is viewed as attaching to a port on the external device.
Going forward, the terms "external device" and "port" may be used interchangeably. Typical values for voltage levels mentioned below may be set in accordance with BC1.2. Fig. 7, for example, shows a table of voltage values set forth in BC1.2. [0048] At loop 502, the portable device 302 may detect an attachment event. For example, an external device may output a voltage on VBUS. In accordance with BC1.2, if the portable device 302 detects a voltage level on VBUS > VOTG_SESSN_VLD for a predetermined period of time, the portable device 302 may determine that an attachment to the external device has occurred. [0049] At block 504, the portable device 302 may determine whether the external device is a dedicated charging port (DCP) or not. At block 506, if a DCP is detected, processing continues at block 508; otherwise, a standard downstream port (SDP) or a charging downstream port (CDP) has been detected. The DCP, SDP, and CDP are port types defined in BC1.2. [0050] In accordance with BC1.2, block 504 may include a primary detection step and a secondary detection step. The portable device 302 may perform primary detection to detect if the external device is an SDP by asserting an electrical configuration (i.e., a voltage level) on the D+ line and sensing an electrical configuration (i.e., a voltage level) asserted on the D- line. If an SDP is detected, then the NO branch of block 506 is taken and the portable device 302 may proceed in accordance with detection of an SDP. If the external device is determined not to be an SDP, then the portable device 302 may perform secondary detection to detect whether the external device is a DCP or a CDP by asserting an electrical configuration on the D- line and sensing an electrical configuration on the D+ line. If a CDP is detected, then the NO branch of block 506 is taken and the portable device 302 may proceed in accordance with detection of a CDP.
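The primary/secondary detection flow of paragraph [0050] can be sketched as a small decision function. The boolean inputs stand in for the analog D-/D+ comparisons the hardware performs; the encoding is an assumption for the sketch, not circuitry from the disclosure:

```python
# Sketch of BC1.2 port classification per paragraph [0050].
# primary_d_minus_high: during primary detection (voltage asserted on D+),
#   was a voltage sensed on D-?  If not, the port is an SDP.
# secondary_d_plus_high: during secondary detection (voltage asserted on D-),
#   was a voltage sensed on D+?  Distinguishes DCP from CDP.

def classify_port(primary_d_minus_high, secondary_d_plus_high):
    if not primary_d_minus_high:
        return "SDP"   # D- stayed low: standard downstream port
    if not secondary_d_plus_high:
        return "CDP"   # charging port, but D+ stayed low: charging downstream port
    return "DCP"       # dedicated charging port (possibly an HVDCP, per block 508)

print(classify_port(True, True))  # DCP
```

Only the "DCP" outcome proceeds to block 508, where the additional HVDCP detection of the disclosure begins.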
[0051] If a CDP is not detected, then in some embodiments, processing proceeds to block 508. In other embodiments, before proceeding to block 508, the portable device 302 may perform additional detection steps in block 504 to detect for attached devices that may be proprietary, may conform to other standards, or are otherwise non-compliant with BC1.2; e.g., Apple® power adapters typically do not conform to BC1.2, laptop manufacturers may produce power adapters that use proprietary circuitry, and so on. If a non-BC1.2 port is not detected, then processing may proceed to block 508. [0052] Continuing with Fig. 5, if processing reaches block 508, the portable device 302 has determined that it is attached to a DCP. An external device in accordance with the present disclosure (e.g., 304, Fig. 3) appears electrically like a DCP at this point; i.e., the external device shorts together the D+ and D- lines using, for example, a switch connected between the D+ and D- lines as shown in Fig. 3. A conventional DCP is typically specified to output 5V. By comparison, an external device according to the present disclosure may output any one of several higher voltage levels (e.g., 9V, 12V, 20V, etc.), in addition to a 5V level. Accordingly, an external device in accordance with the present disclosure may be referred to as a high voltage DCP (HVDCP). In accordance with principles of the present disclosure, the portable device 302 may perform an additional detection to distinguish between an external device that is a conventional DCP and an HVDCP. Thus, in some embodiments, the portable device 302 may assert a voltage level VDP_SRC on the D+ line, at block 508. [0053] If the external device is a conventional DCP, the short between D+ and D- will be maintained. Accordingly, at block 510, the portable device 302 will sense that the voltage level asserted at D- is >VDAT_REF and detect that a conventional DCP is attached. [0054] If the external device is an HVDCP (e.g., 304, Fig.
3), then, in accordance with the present disclosure, the HVDCP will respond to the D+ line being asserted at VDP_SRC by opening the short between the D+ and D- lines. Accordingly, at block 510, the portable device 302 will sense a voltage level asserted at D- that is <VDAT_REF, which may indicate that an HVDCP is attached. At block 512, if the portable device 302 continues to detect a voltage on VBUS, that may serve to indicate to the portable device that the external device is still attached and that the external device is an HVDCP. [0055] At this point, the portable device 302 may select an operating voltage to receive from the HVDCP. If 5V operation is desired at block 514, the portable device 302 may assert the following electrical configuration on the D+ and D- lines at block 514a: VDP_SRC on D+ and ground potential on D-. Similarly, if 9V operation is desired at block 516, the portable device may assert the following electrical configuration on the D+ and D- lines at block 516a: VDP_UP on D+ and VDM_SRC on D-. If 12V operation is desired at block 518, the portable device may assert the following electrical configuration on the D+ and D- lines at block 518a: VDP_SRC on D+ and VDM_SRC on D-. If 20V operation is desired at block 520, the portable device may assert the following electrical configuration on the D+ and D- lines at block 520a: VDP_UP on D+ and VDM_UP on D-. [0056] It can be appreciated, of course, that any suitable combination of voltage levels may be associated with the different operating voltages. It can be further appreciated that in some embodiments, different current flows can be asserted on the D+ and D- lines instead of asserting voltage levels. More generally, combinations of different voltage levels and current flows may be asserted on the D+ and D- lines. [0057] Continuing with Fig. 5, in some embodiments, if at block 522 a voltage level is still present on VBUS, processing may loop back to block 514.
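The voltage-request configurations of blocks 514-520 can be collected into a table. The concrete 0.6 V and 3.3 V levels assumed below for VDP_SRC/VDM_SRC and VDP_UP/VDM_UP are illustrative nominal values, not figures fixed by the disclosure:

```python
# Sketch of blocks 514-520: the (D+, D-) configuration the portable device
# asserts to request each VBUS level. The 0.6 V / 3.3 V figures are assumed
# nominal values for VDP_SRC/VDM_SRC and VDP_UP/VDM_UP respectively.

VDP_SRC = VDM_SRC = 0.6   # volts (assumed)
VDP_UP = VDM_UP = 3.3     # volts (assumed)
GND = 0.0

REQUEST_CONFIG = {          # desired VBUS level -> (D+ level, D- level)
    5.0:  (VDP_SRC, GND),      # block 514a
    9.0:  (VDP_UP,  VDM_SRC),  # block 516a
    12.0: (VDP_SRC, VDM_SRC),  # block 518a
    20.0: (VDP_UP,  VDM_UP),   # block 520a
}

def assert_lines_for(volts):
    """(D+, D-) configuration the portable device asserts to request volts."""
    return REQUEST_CONFIG[volts]
```

Because block 522 loops back to block 514, the portable device can re-index into this table at any time to change the operating voltage without re-attaching.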
The loop allows the portable device 302 to dynamically change the operating voltage as needed, providing a high degree of flexibility of operation in the portable device 302. Thus, for example, at a time t1, the portable device 302 may assert a first electrical configuration on the D+ and D- lines to receive a first voltage level on VBUS. At a subsequent time t2 (without having to re-attach the HVDCP), the portable device 302 may assert a second electrical configuration on the D+ and D- lines to receive a second voltage level on VBUS. [0058] Referring now to Fig. 6, processing in an external device (e.g., 304, Fig. 3) in accordance with the present disclosure, namely an HVDCP, will now be discussed. At block 602, the HVDCP may initialize itself for detection as a DCP. For example, the HVDCP may assert 5V on VBUS and short the D+ and D- lines. In addition, the D+ line is pulled down using resistor RDAT_LKG (about 500 kΩ) per BC1.2. In this state, the HVDCP appears electrically to be a DCP. The HVDCP enters a loop 604 until the D+ line exceeds VDAT_REF. [0059] When the HVDCP is attached to the portable device 302, the portable device will proceed through its detection sequence as described above. If the portable device 302 can accept different output voltage levels on VBUS, the portable device can indicate this fact to the HVDCP by asserting VDP_SRC on the D+ line (block 508, Fig. 5), which the HVDCP will detect at blocks 606 and 608. [0060] At blocks 606 and 608, a timer (not shown) in the HVDCP may be initiated while the HVDCP is sensing the D+ line using the glitch filter 334 (Fig. 3). The glitch filter 334 may provide a measure of safety by avoiding a false positive indication that the portable device 302 accepts different voltage levels. At block 610, if the D+ line remains >VDAT_REF after the timeout, this may indicate to the HVDCP that the portable device 302 can receive different operating voltage levels and is looking for an HVDCP.
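The deglitched detection of blocks 606-610 can be sketched as follows: the HVDCP only concludes that the portable device is HVDCP-aware if D+ stays above VDAT_REF for the whole timer window, so a momentary noise spike does not trigger reconfiguration. The threshold and window length below are illustrative assumptions:

```python
# Sketch of blocks 606-610. VDAT_REF of 0.325 V is an assumed value within
# the BC1.2 data-detect window, and a 5-sample window stands in for the
# glitch-filter/timer interval; neither figure is fixed by the disclosure.

VDAT_REF = 0.325      # volts (assumed)
TIMEOUT_SAMPLES = 5   # stand-in for the timer duration

def dplus_held_high(samples):
    """samples: successive D+ readings over the timeout window.
    True only if a full window was observed and every reading exceeded
    VDAT_REF -- i.e., the glitch filter saw no dip before the timeout."""
    window = samples[:TIMEOUT_SAMPLES]
    return len(window) == TIMEOUT_SAMPLES and all(v > VDAT_REF for v in window)
```

A steady 0.6 V (the portable device holding VDP_SRC) passes the filter; a transient dip to ground within the window does not, so the HVDCP stays in its conventional-DCP state.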
Accordingly, at block 612, the HVDCP may open the short between the D+ and D- lines and pull down the D- line through resistor RDM_DWN to indicate to the portable device 302 that it is attached to an HVDCP. [0061] At block 614, if the HVDCP senses an electrical configuration where the D- line is <VDAT_REF, then at block 614a the HVDCP will output 5V on VBUS. At block 616, if the HVDCP senses an electrical configuration where the D+ line is <VSEL_REF, then at block 616a the HVDCP will output 12V on VBUS. Similarly, at block 618, if the HVDCP senses an electrical configuration where the D- line is >VSEL_REF, then at block 618a the HVDCP will output 20V on VBUS. Otherwise, at block 620 the HVDCP will output 9V on VBUS. In some embodiments, VSEL_REF may be set to 2V ± 0.2V. [0062] Processing continues to block 622 to check that the D+ line continues to be >VDAT_REF. If so, processing loops back to block 614, allowing the HVDCP to change its output voltage to a different level. [0063] The foregoing processing between the portable device 302 and the HVDCP may be summarized in the flow chart shown in Fig. 8. At 802, an HVDCP is attached to the portable device. The HVDCP is initially configured to appear as a DCP by outputting 5V on VBUS and shorting its D+ and D- lines. At 804, the portable device performs detection according to BC1.2. At 806, the portable device detects a DCP, thus marking completion of the detection process per BC1.2. The portable device then asserts VDP_SRC on the D+ line, in accordance with principles of the present disclosure, to see if the attached DCP is an HVDCP. At 808, the HVDCP senses the D+ line to look for VDP_SRC, which indicates the portable device is capable of receiving multiple voltage levels. At 810, the HVDCP opens the short between D+ and D- and turns on RDM_DWN to signify to the portable device that an HVDCP is attached.
At 812, the portable device asserts an electrical configuration on D+ and D- corresponding to a desired voltage level. At 814, the HVDCP outputs the desired voltage level. [0064] An advantageous aspect of the present disclosure is that backward compatibility with existing devices is maintained. For example, a portable device in accordance with the principles of the present disclosure will recognize and operate with an HVDCP, according to the processing outlined in Figs. 5 and 6 above. Moreover, a portable device in accordance with the principles of the present disclosure will recognize and operate with non-HVDCP devices, such as an SDP, CDP, DCP, and in some embodiments, non-BC1.2 ports (e.g., Apple® power adapters) per blocks 502, 504, and 506 in Fig. 5. From the HVDCP side, an HVDCP will operate with a portable device of the present disclosure in accordance with the processing outlined in Figs. 5 and 6 above. Moreover, an HVDCP will operate with a conventional portable device by virtue of the loop 602-604 in Fig. 6. Since a conventional portable device will not assert VDP_SRC on the D+ signal line after DCP detection, processing in the HVDCP will take the NO branch from block 604. [0065] The above description illustrates various embodiments of the present invention along with examples of how aspects of the particular embodiments may be implemented. The above examples should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the particular embodiments as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents may be employed without departing from the scope of the present disclosure as defined by the claims. [0066] We claim the following:
Disclosed examples include integrated circuits and bipolar transistors with a first region of a first conductivity type in a substrate, a collector region of a second conductivity type disposed in the substrate, and a base region of the first conductivity type extending into the first region. A first emitter region of the second conductivity type extends into the first region and includes a lateral side spaced from and facing the base region. A second emitter region of the second conductivity type extends downward into the first region, abutting the top surface and an upper portion of the first lateral side of the first emitter region, to mitigate surface effects and gain degradation caused by hydrogen injection from radiation, thereby providing a radiation hardened bipolar transistor.
1. A bipolar transistor, comprising: a semiconductor substrate including a top surface; a first region of a first conductivity type extending downward from the top surface into the substrate, the first region having a first doping concentration; a collector region of a second conductivity type disposed in the substrate; a base region of the first conductivity type extending downward from the top surface into the first region and adjacent the top surface, the base region having a second doping concentration greater than the first doping concentration; a first emitter region of the second conductivity type extending downward into the first region and adjacent the top surface, the first emitter region having a third doping concentration and including a first side spaced apart from and facing the base region; and a second emitter region of the second conductivity type extending downward into the first region, the second emitter region being immediately adjacent the top surface and an upper portion of the first side of the first emitter region, the second emitter region having a fourth doping concentration that is less than the third doping concentration.
2. The bipolar transistor of claim 1, further comprising: a second base region of the first conductivity type extending downward from the top surface into the first region and adjacent the top surface, the second base region having the second doping concentration; wherein the first emitter region includes a second side spaced apart from and facing the second base region; and wherein the second emitter region is adjacent an upper portion of the second side of the first emitter region.
3. The bipolar transistor of claim 2, wherein the collector region extends downward from the top surface into the substrate and is laterally spaced from the first region; and wherein the bipolar transistor further comprises: a conductive first base contact adjacent the base region above the top surface, a conductive second base contact adjacent the second base region above the top surface, a conductive emitter contact adjacent the first emitter region above the top surface, and a conductive collector contact adjacent the collector region above the top surface.
4. The bipolar transistor of claim 2, wherein the first emitter region extends downward from the top surface into the first region to a first depth, wherein the second emitter region extends downward from the top surface into the first region to a second depth, and wherein the second depth is less than the first depth.
5. The bipolar transistor of claim 2, wherein the first conductivity type is P type and the second conductivity type is N type, and wherein the bipolar transistor is an NPN transistor.
6. The bipolar transistor of claim 1, wherein the first emitter region extends downward from the top surface into the first region to a first depth, wherein the second emitter region extends downward from the top surface into the first region to a second depth, and wherein the second depth is less than the first depth.
7. The bipolar transistor of claim 1, wherein the first conductivity type is P type and the second conductivity type is N type, and wherein the bipolar transistor is an NPN transistor.
8. The bipolar transistor of claim 1, further comprising: a conductive base contact adjacent the base region above the top surface, and a conductive emitter contact adjacent the first emitter region above the top surface.
9. An integrated circuit (IC), comprising: a semiconductor substrate including a top surface; a first region of a first conductivity type extending downward from the top surface into the substrate, the first region having a first doping concentration; a collector region of a second conductivity type disposed in the substrate; a base region of the first conductivity type extending downward from the top surface into the first region and adjacent the top surface, the base region having a second doping concentration greater than the first doping concentration; a conductive base contact adjacent the base region above the top surface; a first emitter region of the second conductivity type extending downward into the first region and adjacent the top surface, the first emitter region having a third doping concentration and including a first side spaced apart from and facing the base region; a conductive emitter contact adjacent the first emitter region above the top surface; and a second emitter region of the second conductivity type extending downward into the first region, the second emitter region being immediately adjacent the top surface and an upper portion of the first side of the first emitter region, the second emitter region having a fourth doping concentration that is less than the third doping concentration.
10. The IC of claim 9, further comprising a metallization structure disposed over the top surface of the substrate, the metallization structure including conductive structures that allow external connection to the base contact and the emitter contact.
11. The IC of claim 10, wherein the collector region extends downward from the top surface into the substrate and is laterally spaced from the first region; wherein the bipolar transistor further includes a conductive collector contact adjacent the collector region above the top surface; and wherein the metallization structure further includes a conductive structure that allows external connection to the collector contact.
12. The IC of claim 10, further comprising a passivation layer disposed over the top layer of the metallization structure, the passivation layer comprising a tetraethyl orthosilicate (TEOS) material.
13. The IC of claim 9, wherein the collector region extends downward from the top surface into the substrate and is laterally spaced from the first region; and wherein the bipolar transistor further comprises a collector contact adjacent the collector region above the top surface.
14. The IC of claim 9, further comprising: a second base region of the first conductivity type extending downward from the top surface into the first region and adjacent the top surface, the second base region having the second doping concentration; wherein the first emitter region includes a second side spaced apart from and facing the second base region; and wherein the second emitter region is adjacent an upper portion of the second side of the first emitter region.
15. The IC of claim 9, wherein the first emitter region extends downward from the top surface into the first region to a first depth, wherein the second emitter region extends downward from the top surface into the first region to a second depth, and wherein the second depth is less than the first depth.
16. The IC of claim 9, wherein the first conductivity type is P type and the second conductivity type is N type, and wherein the bipolar transistor is an NPN transistor.
17. A method of fabricating a bipolar transistor, the method comprising: implanting a dopant of a first conductivity type into a semiconductor substrate to form a first region extending downward from a top surface of the substrate, the first region having a first doping concentration; implanting a dopant of the first conductivity type to form a base region extending downward from the top surface into the first region and immediately adjacent the top surface, the base region having a second doping concentration greater than the first doping concentration; implanting a dopant of a second conductivity type to form a first emitter region extending downward into the first region and immediately adjacent the top surface, the first emitter region having a third doping concentration and including a first side spaced apart from and facing the base region; implanting a dopant of the second conductivity type to form a second emitter region extending downward into the first region, the second emitter region being adjacent the top surface and an upper portion of the first side of the first emitter region, the second emitter region having a fourth doping concentration that is less than the third doping concentration; forming a conductive base contact adjacent the base region above the top surface; forming a conductive emitter contact adjacent the first emitter region above the top surface; and forming a metallization structure over the top surface of the substrate, the metallization structure including conductive structures that allow external connection to the base contact and the emitter contact.
18. The method of claim 17, further comprising: forming the first emitter region from the top surface downward into the first region to a first depth; and forming the second emitter region from the top surface downward into the first region to a second depth; wherein the second depth is less than the first depth.
19. The method of claim 17, wherein the first conductivity type is P type and the second conductivity type is N type, and wherein the bipolar transistor is an NPN transistor.
20. The method of claim 17, further comprising: implanting a dopant of the second conductivity type to form a collector region extending downward from the top surface into the substrate, the collector region being laterally spaced from the first region; forming a conductive collector contact adjacent the collector region above the top surface; and forming, in the metallization structure, a conductive structure that allows external connection to the collector contact.
Radiation hardened bipolar transistor

Background

For various applications in which systems and circuits are exposed to radiation, radiation hardened electronic circuits are required. Example applications include satellites and other spacecraft, aircraft, medical equipment (such as X-ray devices), and nuclear power plants. In such applications, radiation may reduce the gain of a bipolar transistor. Radiation hardening of an electronic circuit is quantified in terms of "total ionizing dose" or "total radiation dose" (TID), a measure of the ionizing radiation (e.g., from protons or heavy ions) applied to the circuit or system. Ionizing radiation creates electron-hole pairs in silicon dioxide (SiO2). Protons (hydrogen ions) are released in the oxide, and protons and holes are transported toward the silicon-oxide interface in the presence of a bias field, resulting in the formation of interface traps at the interface. At high dose rates, there is a high yield (charge yield) of electron-hole pairs. A positive voltage pushes holes toward the interface while sweeping the electrons away. The accumulation of holes at the interface forms a positive charge barrier that repels the generated protons. This keeps the protons away from the interface and slows the formation of interface states while promoting recombination in the oxide. A low dose rate corresponds to reduced electron-hole pair generation. In this case, just as at a high dose rate, the positive voltage pushes the holes toward the interface while sweeping away the electrons, but the accumulation of trapped holes is very low. The repulsive force of the trapped holes is then low enough to allow the generated protons to reach the interface and form interface states there. Interface traps can adversely affect the operation of a bipolar transistor by reducing the gain (β or hFE). In addition, certain circuits, such as those using bipolar transistors, suffer from enhanced low dose rate sensitivity (ELDRS) effects.
In particular, the transistor gain degradation at high radiation dose rates can be lower than at more moderate dose rates. Total dose radiation causes charge generation in the SiO2 and allows interface traps to form under low dose rate conditions. It also creates hole traps in the oxide covering the base-emitter junction, resulting in additional base-emitter leakage. Both effects contribute to the drop in transistor gain and therefore require more base current for the same collector current. Furthermore, the effect on transistor gain can increase with the amount of hydrogen used in the fabrication process. For example, nitride passivation of the upper metallization layer in integrated circuit (IC) fabrication uses ammonia (NH3) and silane (SiH4), releasing hydrogen for each molecule of silicon nitride formed (3SiH4 + 4NH3 → Si3N4 + 12H2). Instead of nitride passivation of the upper metallization layer, tetraethyl orthosilicate (TEOS) may be used, because the TEOS process does not use ammonia gas and does not generate hydrogen in the formation of SiO2. However, TEOS passivation is not as good as nitride passivation. Therefore, improved integrated circuits and bipolar transistors are desired for use in applications involving radiation exposure, without the need for low-hydrogen fabrication techniques.

Summary of the invention

The disclosed examples include an integrated circuit and a vertical or lateral bipolar transistor having a first region of a first conductivity type in a substrate, a collector region of a second conductivity type disposed in the substrate, and a base region of the first conductivity type extending into the first region. A first emitter region of the second conductivity type extends into the first region, and the first emitter region includes a side spaced apart from the base region and facing the base region.
The second emitter region of the second conductivity type extends down into the first region and is immediately adjacent the top surface and an upper portion of the first side of the first emitter region. The second emitter region is more lightly doped than the first emitter region to mitigate surface effects and gain degradation caused by hydrogen injection from radiation in the most sensitive region near the emitter-base junction. Further disclosed examples include a method of fabricating a bipolar transistor, the method comprising implanting a dopant of a first conductivity type into a semiconductor substrate to form a first region extending downward from a top surface of the substrate, implanting a dopant of the first conductivity type to form a base region extending downward from the top surface into the first region and immediately adjacent the top surface, and implanting a dopant of a second conductivity type to form a first emitter region, the first emitter region including a first side spaced from the base region and facing the base region.
The method further includes implanting a dopant of the second conductivity type to form a second emitter region extending down into the first region, the second emitter region adjacent the top surface and an upper portion of the first side of the first emitter region, the second emitter region having a fourth doping concentration that is less than the third doping concentration.

Drawings

FIG. 1 is a partial cross-sectional side view of an integrated circuit having a radiation hardened lateral NPN bipolar transistor with first and second base regions, first and second emitter regions, and a top side collector.
FIG. 2 is a flow chart illustrating a method of fabricating a bipolar transistor, including forming a first emitter region and a second emitter region.
FIGS. 3-8 are partial cross-sectional side views of a radiation hardened NPN bipolar transistor undergoing fabrication processing in accordance with the method of FIG. 2.
FIG. 9 is a flow chart illustrating an alternate step for forming the first and second emitter regions of a bipolar transistor.
FIGS. 10-13 are partial cross-sectional side views of a radiation hardened NPN bipolar transistor undergoing fabrication processing to form first and second emitter regions in accordance with the method of FIG. 9.
FIG. 14 is a flow chart illustrating another alternative for forming the first and second emitter regions of a bipolar transistor.
FIGS. 15-18 are partial cross-sectional side views of a radiation hardened NPN bipolar transistor undergoing fabrication processing to form first and second emitter regions in accordance with the method of FIG. 14.
FIG. 19 is a partial cross-sectional side view of another example integrated circuit having a radiation hardened NPN bipolar transistor with a single base region.
FIG. 20 is a partial cross-sectional side view of yet another example integrated circuit having a radiation hardened NPN bipolar transistor
having a first base region and a second base region and a bottom side collector.
FIG. 21 is a partial cross-sectional side view of another example integrated circuit having a radiation hardened NPN bipolar transistor with a single base region and a bottom side collector.
FIG. 22 is a partial cross-sectional side view of another example integrated circuit having a radiation hardened PNP bipolar transistor including first and second base regions, first and second emitter regions, and a top side collector.

Detailed description

Throughout the drawings, like reference numerals refer to like elements. In the following discussion and in the claims, the terms "comprise," "comprises," "comprising," "have," "having," and the like are intended to be open-ended, and therefore should be interpreted to mean "including, but not limited to...". Also, the terms "couple," "coupled," or "couples" are intended to include an indirect or direct electrical connection or a combination thereof. For example, if a first device couples to or is coupled with a second device, the connection can be through a direct electrical connection or through an indirect electrical connection via one or more intervening devices or connections. FIG. 1 shows a radiation hardened NPN bipolar transistor 100 fabricated laterally in a substrate 102, 104 of an integrated circuit (IC). In one example, the substrate is constructed from an N+ silicon wafer 102 on which an epitaxial silicon layer 104 is grown, the epitaxial silicon layer 104 having a lower (N-) doping concentration and having an upper or top surface 101. In another example, the substrate may include a lower silicon wafer having an upper portion of P conductivity type in which, for example, N wells are formed for isolation in an integrated circuit having various circuit types, such as bipolar circuits and CMOS circuits, and the bipolar transistor 100 is fabricated in such a region.
A P-type first region is formed in the epitaxial portion 104 of the substrate, for example, by implanting a P-type dopant (e.g., boron or the like) to form a P-well first region 106 extending downward from the top surface 101 into the substrate 104. The first region 106 has a first doping concentration, labeled "P-" in the drawing. First and second base portions or base regions 108a and 108b are formed in the region 106 in the NPN bipolar transistor example of FIG. 1. The base regions 108 have a second doping concentration (e.g., P+) that is greater than the doping concentration of the P- first region 106. Base regions 108a and 108b extend downward from the top surface 101 into the first region 106 and are adjacent the top surface 101. N-type emitter structures 110, 112 are formed in the P- first region 106, the emitter structures 110, 112 including an N+ first emitter region 110 extending down into the first region 106 and a lighter-doped second emitter region 112. The first emitter region 110 has a third doping concentration (N+). The lightly doped second emitter region 112 has a fourth doping concentration (N-) that is less than the third doping concentration of the first emitter region 110. The first emitter region 110 is immediately adjacent the top surface 101 and extends downward into the first region 106 to a depth 110D. In the example of FIG. 1, the first emitter region 110 includes a first side (on the left in FIG. 1) spaced apart from and facing the first base region 108a, and an opposite second side (on the right in FIG. 1) spaced apart from and facing the second base region 108b. In some examples, the second emitter region 112 is formed as a ring that surrounds the upper sides of the first emitter region 110, and the region 112 is immediately adjacent the top surface 101 of the epitaxial layer 104 of the substrate.
The second emitter region 112 is immediately adjacent the top surface 101 and extends downward into the first region 106 to a second depth 112D, the second depth 112D being less than the first depth 110D of the first emitter region 110. In this example, the emitter regions 110 and 112 extend into and out of the page in the view of FIG. 1, and the first base region 108a and second base region 108b likewise extend into and out of the page along both sides of the emitter regions 110, 112. In some examples, the first base region 108a or the second base region 108b may be omitted; for example, as shown in FIGS. 19 and 21 below, the NPN transistor may include a single base structure. In other examples, a single base structure 108 can be formed to surround the emitter regions 110, 112. In these examples, the second emitter region 112 may, but need not, extend along the entirety of the side or sides of the first emitter region 110 that face the base region 108. Transistor 100 further includes an N-type collector region 114 (labeled "N+" in FIG. 1). The collector region 114 extends down into the substrate 104 from the top surface 101. Moreover, in the lateral NPN transistor 100 of FIG. 1, the collector region 114 is laterally spaced from the first region 106. The lateral NPN transistor design allows for top side collector contacts. In other examples, for example, as shown in FIGS. 20 and 21 below, the collector region is formed by the N- epitaxial region 104 under the first region 106 and/or the N+ semiconductor portion 102 of the substrate.
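The doping and depth relationships described above (a lightly doped, shallow N- second emitter over a heavily doped, deeper N+ first emitter, in a P- well with P+ base regions) can be summarized in a small structural sketch. The numeric values below are hypothetical placeholders, not figures from this disclosure; only the orderings between them are taken from the text.

```python
from dataclasses import dataclass

@dataclass
class EmitterStructure:
    """Doping (cm^-3) and depth (um) for the regions of transistor 100.
    All numeric values used below are illustrative placeholders."""
    p_well_106: float    # first doping concentration (P- well)
    base_108: float      # second doping concentration (P+ base)
    emitter1_110: float  # third doping concentration (N+ first emitter)
    emitter2_112: float  # fourth doping concentration (N- second emitter)
    depth_110d: float    # first emitter depth 110D
    depth_112d: float    # second emitter depth 112D

    def is_consistent(self) -> bool:
        # Orderings stated in the text: P+ base > P- well,
        # N+ first emitter > N- second emitter, and 112D < 110D.
        return (self.base_108 > self.p_well_106
                and self.emitter1_110 > self.emitter2_112
                and self.depth_112d < self.depth_110d)

# Placeholder values chosen only to satisfy the stated orderings.
t100 = EmitterStructure(p_well_106=1e16, base_108=1e18,
                        emitter1_110=1e20, emitter2_112=1e18,
                        depth_110d=1.0, depth_112d=0.4)
```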
For example, conventional lateral NPN bipolar transistors without a lightly doped second emitter region are subject to an inversion region generated along the upper emitter-base junction, due to charge trapping and interface traps at and near the interface above the base region. This results in gain degradation and increased leakage. In particular, under ionizing radiation, an inversion region can develop at the upper portion of the emitter-base junction, at or near the interface with the base oxide. Generally, owing to the implantation and doping concentration profiles, the emitter diffusion is not a sharp rectangle but is rounded or graded. The device can therefore behave as if there were multiple parallel NPN transistors with different characteristics. In this regard, a rounded implanted or diffused emitter results in different transistor gains and other performance characteristics at different base and collector current levels. In particular, the thickness of the base region is a major driver of the gain and breakdown voltage performance of the transistor. The disclosed emitter structures 110, 112 mitigate these surface inversion effects to provide a radiation hardened, robust bipolar transistor 100. A lightly doped emitter (LDE) region 112 is provided in transistor 100 to mitigate or avoid formation of an undesired emitter-base depletion region, while the transistor performance characteristics can be tuned through the properties of the first emitter region 110. The addition of one or more lightly doped second emitter regions 112 mitigates the surface effects caused by hydrogen ion injection from ionizing radiation, and thus minimizes surface inversion. Thus, the combination of the first emitter region 110 and the second emitter region 112 provides improved gain, including low-current hFE or β, in the presence of high or low dose rate radiation, as well as improved tolerance of heavy ion strikes in reverse bias conditions (e.g., base-emitter impact ionization).
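The practical consequence of gain degradation noted in the Background can be made concrete with simple arithmetic: since β = IC/IB, any radiation-induced drop in β inflates the base current a driver circuit must supply for the same collector current. The numbers below are illustrative, not measurements from this disclosure.

```python
def base_current(i_collector: float, beta: float) -> float:
    """Base current required for a given collector current: IB = IC / beta."""
    return i_collector / beta

i_c = 1e-3  # 1 mA collector current (illustrative)

ib_fresh = base_current(i_c, 100.0)    # beta = 100 before irradiation
ib_degraded = base_current(i_c, 40.0)  # beta = 40 after TID/ELDRS degradation

# A gain drop from 100 to 40 raises the required base current 2.5x
# (from 10 uA to 25 uA) for the same 1 mA collector current.
```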
Moreover, as shown, transistor 100 can be fabricated in an integrated circuit in which other techniques for radiation hardening can be incorporated, including the use of TEOS or other non-hydrogen upper passivation techniques. In other examples, nitride passivation can be used for the upper metallization layer, with the one or more second emitter regions 112 canceling or reducing the adverse effects of radiation exposure. Transistor 100 also includes one or more conductive contacts 122, 124, 126 and a metallization structure with interconnect features to provide electrical connection to the base and emitter (and, optionally, to the collector). The contacts and metallization structures can be formed using any suitable semiconductor device fabrication techniques and materials. For example, contacts 122, 124, and 126 may be formed directly of copper or another conductive material, or using known silicide contact formation techniques and materials, over the upper surfaces of the respective base regions 108, first emitter region 110, and collector region 114. Transistor 100 can be formed in an IC as part of a larger overall circuit, in which case external connection of the respective base, emitter and/or collector of transistor 100 is not required. For example, the base, emitter, and/or collector of transistor 100 may be interconnected with other devices or components of the integrated circuit through suitable vias and contacts that provide electrical connections through one or more metallization layers 130, 140, 150. In other examples, transistor 100 is formed in an integrated circuit package that provides external connections (IC pins or pads, conductive terminals, etc.) to allow the various terminals of transistor 100 to be interconnected with external circuitry.
For example, transistor 100 can be formed in a device such as the commercially available 2N2222, 2N3700, or 2N2484 products. As shown in the example of FIG. 1, a conductive first base contact 122a is formed immediately adjacent (e.g., directly and/or indirectly electrically connected to) the base region 108a above the top surface 101, and a second base contact 122b is formed on the second base region 108b. Similarly, a conductive emitter contact 124 is formed over a portion of the upper surface of the first emitter region 110. Where connection to the collector region 114 from above is desired, a conductive collector contact 126 is formed adjacent the collector region 114 above the top surface 101. Silicon dioxide or another oxide material 120 is formed between the contacts 122, 124 and 126. Three metallization levels are provided in the example of FIG. 1, including metallization layers 130, 140, 150 stacked over the top surface 101 of the substrate 102, 104. In this example, the first metallization level or layer 130 includes a non-conductive interlevel dielectric (ILD) material 130 (such as TEOS) and conductive via structures 128 that provide connections to the contacts 122, 124, and 126. The second level includes ILD material 140 and conductive contact and via structures 142 and 144, and a third (e.g., upper) level includes ILD material 150 and conductive structures 152 and 154. In the example of FIG. 1, the final or uppermost metallization layer or level provides a top side connection for the base, emitter, and collector of transistor 100, although this is not required in all embodiments. In addition, the IC includes a passivation layer 160 disposed over the top layer 150. As described above, the lightly doped second emitter region 112 promotes immunity to gain degradation caused by hydrogen migration to the interface between the oxide 120 and the first emitter region 110 near the emitter-base junction.
Thus, the passivation layer 160 at the top of the IC can be formed by a nitride passivation technique using ammonia and silane to form a silicon nitride (Si3N4) material layer 160. In other examples, passivation layer 160 includes tetraethyl orthosilicate (TEOS) material to further promote radiation hardening of transistor 100. Referring also to FIGS. 2-8, the integrated circuit and transistor 100 of FIG. 1 can be fabricated using any suitable semiconductor processing techniques. FIG. 2 illustrates an example fabrication process or method 200 for fabricating bipolar transistor 100, and FIGS. 3-8 illustrate the NPN transistor 100 undergoing fabrication processing in accordance with method 200. Process 200 includes providing an N+ substrate (e.g., substrate 102) at 202. At 204, an N- epitaxial layer 104 is grown or otherwise formed over the base substrate 102 using the epitaxial process 300 illustrated in FIG. 3. At 206 in FIG. 2, a field oxide is formed and patterned to expose a first portion or region of the upper surface 101 of the epitaxial layer 104 (e.g., patterned field oxide 402 in FIG. 4). At 208 in FIG. 2, a P- base layer or first region 106 is implanted or otherwise formed in the N- epitaxial layer 104. For example, FIG. 4 illustrates an implantation process 400 that implants a P-type dopant or impurity (e.g., boron in one example) into the exposed first region 106 of the epitaxial layer 104. This forms the first region 106 extending downward from the top surface 101, where the first region 106 has a first doping concentration. At 210, the P- base layer dopant can be diffused using a thermal diffusion process. It will be appreciated that the implanted regions illustrated and described herein do not necessarily have a uniform doping concentration as a function of depth, and the concentration can vary in both the vertical and lateral directions.
Moreover, the diffusion process at 210 can result in the growth of a certain amount of oxide over the exposed upper regions of the structure (not shown). At 220a, the first emitter region 110 and the second emitter region 112 are formed. In this example, the emitter structure is formed by implanting an N-type dopant (e.g., phosphorus) to form the first emitter region 110 extending down into the first region 106 and immediately adjacent the top surface 101. In FIG. 5, an implantation process 500 and an implantation mask 502 are used to form the first emitter region 110. Moreover, in this example, at 223, the mask 502 exposes the first emitter region 110 and the collector region 114, which are implanted concurrently. In other examples, the collector region 114 can be formed separately. At 224, one or more second emitter regions 112 are implanted using an implantation process 600 and a second mask 602, as shown in FIG. 6. The implanted one or more second regions 112 extend down into the first region 106 along one or more lateral sides of the first emitter region 110. In this example, a first emitter implant mask 502 and a second emitter implant mask 602 are used, respectively, where the second emitter implant mask 602 provides a larger window than the first mask 502 so that the implanted second emitter region 112 extends laterally outward beyond the first emitter region 110. In one example, the implantation energy of the implantation process 500 used to form the first emitter region 110 is higher than the implantation energy of the second emitter implantation process 600. As shown in FIG. 6, this gives the first emitter region a greater depth than the depth of the one or more second emitter regions 112. Moreover, the implant dose provided by the first emitter implant process 500 is one to two orders of magnitude greater than the implant dose of the second emitter implant process 600.
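As a quick check of this dose relationship, the example doses cited in the text (on the order of 10^13 for the first emitter implant and 10^11 to 10^12 for the second) indeed differ by one to two orders of magnitude. The units (conventionally atoms/cm^2 for an implant dose) are an assumption here, since the text gives only the magnitudes.

```python
import math

dose_first = 1e13                   # first emitter implant (process 500), per the text
dose_second_range = (1e11, 1e12)    # second emitter implant (process 600), per the text

for dose_second in dose_second_range:
    # Difference in orders of magnitude between the two implant doses
    orders = math.log10(dose_first / dose_second)
    print(f"second dose {dose_second:.0e}: ~{orders:.0f} order(s) of magnitude lower")
```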
For example, in one example, the dose of the first process 500 is on the order of 10^13, and the dose of the second process 600 is on the order of 10^11 to 10^12, to form the N- lightly doped region 112 in one example. Then, at 226 in FIG. 2, the diffusion process 700 of FIG. 7 is used to diffuse the first emitter dopant and the second emitter dopant. In one example, assuming no further significant heat treatment of the IC, the diffusion process at 226 sets the depths 110D and 112D of the respective first emitter region 110 and second emitter region 112, as shown in FIG. 7.

Continuing at 228 in FIG. 2, a P-type dopant is implanted to form base regions 108a and 108b that extend downwardly from top surface 101 into first region 106 proximate the top surface 101. In one example, the doping concentration of the base regions 108 (P+) is greater than the doping concentration of the first region 106 (P-). FIG. 8 illustrates a process that uses a mask 802 having openings to form the respective first base region 108a and second base region 108b via implantation process 800. Then at 230, an anneal or other diffusion process is performed to diffuse the base dopants of these regions 108. At 232, a base oxide (e.g., oxide 120 in FIG. 1) is formed, and at 234 contacts are formed for the emitter, the base, and optionally the collector (e.g., contacts 122, 124 and 126 in FIG. 1). At 236 in FIG. 2, metallization or other back end processing is performed to provide metallization structures 130, 140, and 150 including passivation layer 160 as shown in FIG. 1.

Referring now to FIGS. 9-13, in another example, the first emitter implant and the second emitter implant are diffused separately. FIG. 9 shows an alternative step 220b for forming the first emitter region and the second emitter region of the transistor 100 of FIG. 1, and FIGS. 10-13 show the transistor 100 at various processing stages in accordance with the method of FIG. 9.
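The staged flow of method 200 can be summarized programmatically. The sketch below is a hypothetical Python outline and not part of the disclosure; the step numbers follow FIG. 2, and the specific dose values are illustrative picks from the order-of-magnitude ranges stated above.

```python
# Hypothetical outline of method 200 (FIG. 2); step numbers follow the figure.
FLOW_200 = [
    (202, "provide N+ substrate 102"),
    (204, "grow N- epitaxial layer 104"),
    (206, "form and pattern field oxide"),
    (208, "implant P- base (first region 106)"),
    (210, "diffuse P-base dopant"),
    (223, "implant first emitter region 110 and collector region 114"),
    (224, "implant lightly doped second emitter regions 112"),
    (226, "diffuse first and second emitter dopants"),
    (228, "implant P+ base regions 108a/108b"),
    (230, "diffuse base dopants"),
    (232, "form base oxide"),
    (234, "form emitter/base/collector contacts"),
    (236, "metallization and passivation"),
]

# Illustrative dose check: the first emitter implant dose is stated to be
# one to two orders of magnitude above the second (LDE) implant dose.
dose_first = 1e13   # cm^-2, order of magnitude stated in the text
dose_second = 1e11  # cm^-2, lower end of the stated 1e11-1e12 range
ratio = dose_first / dose_second
assert 10 <= ratio <= 100  # consistent with "one to two orders of magnitude"
```

The list form makes the ordering constraint explicit: the emitter implants (223, 224) precede the shared diffusion at 226, which is what sets the final depths 110D and 112D.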
Process 220b of FIG. 9 can be substituted for process 220a in process 200 of FIG. 2 above. At 902 in FIG. 9, the first emitter region 110 is formed by implanting an N-type dopant (e.g., phosphorus) in the first region 106. FIG. 10 illustrates a process of forming the first emitter region 110 using the mask 1002 and the implantation process 1000, and also forming the collector region 114 (904 in FIG. 9). Continuing at 906, the emitter dopant (e.g., and collector dopant) is diffused using the diffusion process 1100 shown in FIG. 11. At 908, one or more second emitter regions 112 are formed by implanting an N-type dopant in the first region 106 (illustrated as an implantation process 1200 using the second implantation mask 1202 in FIG. 12). Thereafter at 910, the emitter dopant and the lightly doped second emitter dopant are diffused using the diffusion process 1300 illustrated in FIG. 13.

FIGS. 14-18 illustrate another example of forming the first emitter region and the second emitter region using process 220c (FIG. 14) in place of process 220a in process 200 of FIG. 2. In this example, process 220c in FIG. 14 begins by implanting an N-type dopant to form a first emitter region 110 in the P-type first region 106. FIG. 15 illustrates a process that uses a first mask 1502 having openings to form the first emitter region 110 and the implanted collector region 114 (at 1404 in FIG. 14) by using an implant process 1500 (e.g., a phosphorus dopant in one example). Moreover, in this example, at 1406 of FIG. 14, one or more second emitter regions 112 are formed using the quad or angled implantation processes 1600a, 1600b illustrated in FIGS. 16 and 17. In one example, the angled implant process 1600 is performed using the same mask 1502 as used when forming the first emitter region 110. This provides one or more implanted second emitter regions 112 along the upper side of the first emitter region 110. Further, as shown in FIG.
17, in the case where a single mask 1502 is used, the implantation processes 1600a, 1600b in one example also provide a lightly doped region along the upper side of the implanted collector region 114. In other examples using a single base region 108, the angled implant process 1600 need not form lightly doped emitter regions 112 on both sides of the first emitter region 110, and may provide a lightly doped region 112 only along the upper side of the first emitter region 110 facing the single base region 108, or lightly doped regions 112 can be provided along both sides of the first emitter region 110 (e.g., FIGS. 19 and 21). At 1408 in FIG. 14, the first emitter dopant and the second emitter dopant (emitter and LDE dopants) are diffused using the diffusion process 1800 shown in FIG. 18.

FIG. 19 illustrates another example integrated circuit having a radiation hardened NPN bipolar transistor 100 with a single P+ base region 108 disposed laterally between the emitter regions 110, 112 and the implanted collector region 114. In this case, a lightly doped (e.g., N-) second emitter region 112 is formed on both sides of the first emitter region 110. In an alternative embodiment, a single second emitter region 112 may be formed on the side of the first emitter region 110 facing the single base region 108.

FIG. 20 illustrates another example integrated circuit having a radiation hardened NPN bipolar transistor 100 that includes first and second P+ base regions on opposite sides of the first emitter region 110 and second emitter region 112. In this case, as shown schematically in FIG. 20, the collector is not implanted into the top side of the substrate structures 102, 104; instead, the lower N+ region 102 and the N- region 104 provide the transistor collector.
In this case, where transistor 100 is formed in an integrated circuit having other circuitry, the connection of the collector to other circuitry (not shown) can be accomplished via substrates 102, 104. In the case where the transistor collector requires an external connection, a bottom side contact (not shown) may be formed.

FIG. 21 illustrates another radiation hardened NPN bipolar transistor example 100. In this case, transistor 100 includes a single base region 108 and a bottom side collector provided by the N+ substrate structure 102 and the N- substrate structure 104.

FIG. 22 shows a radiation hardened lateral PNP transistor 100 in which the conductivity types (N and P) are opposite with respect to the NPN transistor 100 of FIG. 1 above.

In yet another non-limiting example, a second emitter region (LDE) can be formed after the intrinsic base is formed at 228 in FIG. 2. For example, an LDE mask is formed after the intrinsic base is formed at 228, and an N- implantation is performed to form the second emitter region, after which the P+ extrinsic base dopant and the N- LDE dopant are diffused by annealing at 230.

The above examples are merely illustrative of several possible embodiments of the various aspects of the present disclosure, and equivalent variations and/or modifications will occur to those skilled in the art upon reading and understanding the disclosure. Modifications in the described embodiments are possible within the scope of the claims, and other embodiments are possible.
A processor includes a cache memory. The cache memory includes an array of cells, word lines and bit lines. A control module enables a word line of the word lines to access a first cell in the enabled word line. The control module disables the word line and maintains the word line in a disabled state to access a second cell in the word line.
CLAIMS What is claimed is: 1. A processor comprising: a cache memory that comprises: an array of cells; a plurality of word lines; and a plurality of bit lines; and a control module that enables a word line of said plurality of word lines to access a first cell in said word line, that disables said word line, and that maintains said word line in a disabled state to access a second cell in said word line. 2. The processor of claim 1 wherein said control module accesses a plurality of said cells in said array through separate cycled selection of said plurality of bit lines and generation of a single word line pulse associated with one of said plurality of word lines. 3. The processor of claim 1 wherein said control module operates in a discrete read mode and a sequential read mode. 4. The processor of claim 1 wherein said control module generates a sequential read signal to enable a sequential read mode and generates a word line signal based on said sequential read signal. 5. The processor of claim 1 wherein said control module generates a sequential read signal to enable a sequential read mode and precharges said plurality of bit lines based on said sequential read signal. 6. The processor of claim 1 wherein said control module, when in a discrete mode, performs row address decoding for each cycle associated with access of each of said cells. 7. The processor of claim 1 wherein said control module, when in a sequential mode, performs address decoding for a single row and enables a word line for a plurality of column cell sets, wherein said control module selects, senses and amplifies a column cell set of said plurality of column cell sets for column address decoding, and wherein said control module disables said word line for column address decoding and access of column cell sets of said plurality of column cell sets other than said column cell set. 8. 
The processor of claim 1 wherein said control module operates in a first mode and a second mode, wherein said control module, when in said first mode, pre-charges said plurality of bit lines and performs row address decoding for each read cycle, wherein said control module, when in said second mode, precharges said plurality of bit lines and performs row address decoding for a first read cycle and does not precharge said plurality of bit lines and does not perform row address decoding for a second read cycle, wherein said control module, when in said second mode, enables an address counter that is associated with pointing a column address to a next location after each column access, and wherein said address counter, when said column address is associated with an end of a word, is reset and memory access is permitted for a next discrete or sequential read access. 9. The processor of claim 1 wherein said control module precharges said plurality of bit lines when accessing a first set of cells of said array and does not precharge said plurality of bit lines when accessing a second set of cells of said array. 10. The processor of claim 1 wherein said cache memory includes at least one of an instruction cache and a static random access memory (SRAM). 11. The processor of claim 1 further comprising a column decoder that selects said second cell via a column select signal, wherein said control module generates a latch signal to latch bit information in said second cell based on said column select signal. 12. The processor of claim 1 further comprising a plurality of latches latching bit information in said first and second cells. 13. The processor of claim 1 further comprising a sensing-amplification module that detects and amplifies bit information in said first and second cells. 14. The processor of claim 1 further comprising a row decoder, wherein said control module generates a word line signal via said row decoder to access a first set of cells. 15. 
The processor of claim 1 wherein said word line signal includes an extended period to increase bit line signal separation and to compensate for leakage. 16. The processor of claim 15 wherein said extended period increases bit line separation time and sets bit line separation at a predetermined voltage. 17. The processor of claim 15 wherein said extended period is based on a predetermined number of read cycles. 18. The processor of claim 1 further comprising a column decoder, wherein said control module generates bit line signals via said column decoder to access a set of cells. 19. The processor of claim 1 wherein said control module precharges a plurality of bit lines after a word line extended period. 20. The processor of claim 19 wherein said control module precharges said plurality of bit lines once for multiple cell access cycles. 21. The processor of claim 1 further comprising a sense-amplifier that has a plurality of common lines and that receives a precharge signal to precharge said common lines before latching bit information between accesses to sets of cells. 22. An integrated circuit comprising the processor of claim 1. 23. The integrated circuit of claim 22 further comprising an external memory that is in communication with the processor. 24. A cellular phone comprising the processor of claim 1. 25. The cellular phone of claim 24 further comprising an external memory that is in communication with the processor. 26. A communication system comprising the processor of claim 1. 27. The communication system of claim 26 further comprising an external memory that is in communication with the processor. 28. A method comprising: providing a cache memory that comprises: an array of cells; a plurality of word lines; and a plurality of bit lines; enabling a word line of said plurality of word lines to access a first cell in said word line; disabling said word line; and maintaining said word line in a disabled state to access a second cell in said word line. 29. 
The method of claim 28 further comprising: accessing a plurality of said cells in said array through separate cycled selection of said plurality of bit lines; and generating a single word line pulse associated with one of said plurality of word lines. 30. The method of claim 28 comprising operating in a discrete read mode and a sequential read mode. 31. The method of claim 28 further comprising: generating a sequential read signal to enable a sequential read mode; and generating a word line signal based on said sequential read signal. 32. The method of claim 28 further comprising: generating a sequential read signal to enable a sequential read mode; and precharging said plurality of bit lines based on said sequential read signal. 33. The method of claim 28 further comprising performing row address decoding for each cycle associated with access of each of said cells when in a discrete mode. 34. The method of claim 28 further comprising, when in a sequential mode: performing address decoding for a single row and enabling a word line for column cell sets; selecting, sensing and amplifying a column cell set of the column cell sets for column address decoding; and disabling said word line for column address decoding and accessing of column cell sets of the column cell sets other than the column cell set. 35.
The method of claim 28 further comprising: when in a first mode, pre-charging said plurality of bit lines and performing row address decoding for each read cycle; when in a second mode, precharging said plurality of bit lines and performing row address decoding for a first read cycle and refraining from precharging said plurality of bit lines and refraining from performing row address decoding for a second read cycle; when in said second mode, enabling an address counter that is associated with pointing a column address to a next location after each column access; and when said column address is associated with an end of a word, resetting said address counter and permitting access to memory for a next discrete or sequential read access. 36. The method of claim 28 further comprising: precharging said plurality of bit lines when accessing a first set of cells of said array; and refraining from precharging said plurality of bit lines when accessing a second set of cells of said array. 37. The method of claim 28 further comprising: selecting said second cell via a column select signal; and generating a latch signal to latch bit information in said second cell based on said column select signal. 38. The method of claim 28 further comprising latching bit information in said first and second cells. 39. The method of claim 28 further comprising detecting and amplifying bit information in said first and second cells. 40. The method of claim 28 further comprising generating a word line signal via a row decoder to access a first set of cells. 41. The method of claim 28 wherein said word line signal includes an extended period to increase bit line signal separation and to compensate for leakage. 42. The method of claim 41 wherein said extended period increases bit line separation time and sets bit line separation at a predetermined voltage. 43. The method of claim 41 wherein said extended period is based on a predetermined number of read cycles. 44.
The method of claim 28 further comprising generating bit line signals via a column decoder to access a first set of cells. 45. The method of claim 28 further comprising precharging a plurality of bit lines after a word line extended period. 46. The method of claim 45 further comprising precharging said plurality of bit lines once for multiple cell access cycles. 47. The method of claim 28 further comprising receiving a precharge signal via a sense-amplifier to precharge common lines of the sense-amplifier before latching bit information between accesses to sets of cells.
PROCESSOR INSTRUCTION CACHE WITH DUAL-READ MODES

CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application claims the benefit of U.S. Utility Application No. 11/870,833, filed on October 11, 2007, which claims the benefit of U.S. Provisional Application No. 60/829,438, filed on October 13, 2006. The disclosures of the above applications are incorporated herein by reference in their entirety.

FIELD

The present disclosure relates to semiconductor integrated circuits and processors, and more particularly to processor structures and memory cell access techniques.

BACKGROUND

The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

Memory in a cellular phone or a computer system may be arranged in a memory hierarchy, which includes memory devices of different speeds, types and sizes. The type, size and proximity of a memory device to a processor affect speed of that memory device. Due to costs of memory and limited space near the processor, a memory hierarchy may be organized into several levels. [0005] Many processors use and/or have memory caches to store copies of highly used data and instructions in order to improve access speed and overall processing speed. A memory cache, such as an instruction (I)-cache, is a portion of memory that may include high-speed static random access memory (SRAM). SRAM is included instead of slower dynamic RAM (DRAM), which is commonly used for a main memory. The memory cache may be referred to as a cache store or RAM (Random Access Memory) cache. Memory caches may be included at the highest level of memory and on the same integrated circuit (IC) as the processor.
Such internal memory caches are also referred to as local or Level 1 (L1) caches.

A memory cache includes an array of cells. Each cell stores a bit of information. An instruction, which may include, for example, 4-8 bits, is stored and accessed through a read cycle. To access a word of instructions, multiple read cycles are executed. During each read cycle, cells associated with an instruction are accessed by toggling both a row path (word line) and multiple column paths (bit lines) of the array for that word. The toggling of row and column paths includes tasks such as decoding row and column addresses, generating a word line signal, precharging bit lines, sensing-amplification, and latching data. Sensing-amplification refers to the detection and amplification of stored bit information. A significant amount of energy is associated with the stated tasks.

SUMMARY

In one embodiment, a processor is provided that includes a cache memory. The cache memory includes an array of cells, word lines and bit lines. A control module enables a word line of the word lines to access a first cell in the enabled word line. The control module disables the word line and maintains the word line in a disabled state to access a second cell in the word line.

In other features, the control module accesses multiple cells in the array through separate cycled selection of the bit lines. The control module generates a single word line pulse associated with one of the word lines. In other features, the control module operates in a discrete read mode and a sequential read mode.

In yet other features, the control module generates a sequential read signal to enable a sequential read mode and generates a word line signal based on the sequential read signal.
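The per-word savings of the sequential read mode can be illustrated with a small operation-count model. The following Python sketch is hypothetical and not from the disclosure: it tallies the control tasks named above (row address decoding, word line pulsing, bit line precharging, column selection), and the four-access word size is an illustrative assumption.

```python
# Rough operation-count model of the discrete and sequential read modes.
def read_word(mode, accesses=4):
    """Count control tasks needed to read one word of `accesses` cells."""
    ops = {"row_decode": 0, "wordline_pulse": 0, "precharge": 0, "column_select": 0}
    for i in range(accesses):
        first = (i == 0)
        if mode == "discrete" or first:
            ops["row_decode"] += 1      # decode the row address
            ops["wordline_pulse"] += 1  # enable (then disable) the word line
            ops["precharge"] += 1       # precharge the bit lines
        # In sequential mode the word line stays disabled after the first
        # access; remaining cells are read via column selection alone.
        ops["column_select"] += 1
    return ops

discrete = read_word("discrete")
sequential = read_word("sequential")
```

Under this model, sequential mode performs one row decode, one word line pulse, and one precharge per word instead of one per access, which is the energy saving the disclosure attributes to a single word line pulse with cycled bit line selection.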
[0010] In still other features, the control module generates a sequential read signal to enable a sequential read mode and precharges the bit lines based on the sequential read signal. In other features, the control module, when in a discrete mode, performs row address decoding for each cycle associated with access of each of the cells. In further features, the control module, when in a sequential mode, performs address decoding for a single row and enables a word line for column cell sets. The control module selects, senses and amplifies a column cell set of the column cell sets for column address decoding. The control module disables the word line for column address decoding and access of column cell sets of the column cell sets other than the column cell set. In other features, the control module operates in a first mode and in a second mode. The control module, when in the first mode, pre-charges the bit lines and performs row address decoding for each read cycle. The control module, when in the second mode, precharges the bit lines and performs row address decoding for a first read cycle and does not precharge the bit lines and does not perform row address decoding for a second read cycle. The control module, when in the second mode, enables an address counter that is associated with pointing a column address to a next location after each column access. The address counter, when the column address is associated with an end of a word, is reset and memory access is permitted for a next discrete or sequential read access. In other features, the control module precharges the bit lines when accessing a first set of cells of the array and does not precharge the bit lines when accessing a second set of cells of the array. In yet other features, the cache memory includes at least one of an instruction cache and a static random access memory (SRAM). In other features, a column decoder is further included and selects the second cell via a column select signal.
The control module generates a latch signal to latch bit information in the second cell based on the column select signal. In other features, latches are included and latch bit information in the first and second cells. In other features, a sensing-amplification module is included and detects and amplifies bit information in the first and second cells. In still other features, a row decoder is included. The control module generates a word line signal via the row decoder to access a first set of cells. In other features, the word line signal includes an extended period to increase bit line signal separation and to compensate for leakage. In other features, the extended period increases bit line separation time and sets bit line separation at a predetermined voltage. In other features, the extended period is based on a predetermined number of read cycles. In other features, a column decoder is included. The control module generates bit line signals via the column decoder to access a first set of cells. [0020] In further features, the control module precharges bit lines after a word line extended period. In other features, the control module precharges the bit lines once for multiple cell access cycles. In other features, a sense-amplifier is included that has common lines and that receives a precharge signal to precharge the common lines before latching bit information between accesses to sets of cells. In other features, an integrated circuit is provided that includes the processor. In other features, the integrated circuit further includes an external memory that is in communication with the processor. In still other features, a cellular phone is provided that includes the processor. In other features, the cellular phone further includes an external memory that is in communication with the processor. In yet other features, a communication system is provided that includes the processor.
In other features, the communication system further includes an external memory that is in communication with the processor. [0025] In other features, a method is provided and includes providing a cache memory with an array of cells, word lines, and bit lines. A word line of the word lines is enabled to access a first cell in the word line. The word line is disabled. The word line is maintained in a disabled state to access a second cell in the word line. In other features, the method further includes accessing multiple cells in the array through separate cycled selection of the bit lines. A single word line pulse associated with one of the word lines is generated. In other features, the method includes operating in a discrete read mode and a sequential read mode. In other features, the method further includes generating a sequential read signal to enable a sequential read mode. A word line signal is generated based on the sequential read signal. In further features, the method further includes generating a sequential read signal to enable a sequential read mode. The bit lines are precharged based on the sequential read signal. In still other features, the method further includes performing row address decoding for each cycle associated with access of each of the cells when in a discrete mode. In yet other features, the method further includes, when in a sequential mode, performing address decoding for a single row and enabling a word line for column cell sets. A column cell set of the column cell sets is selected, sensed and amplified for column address decoding. The word line is disabled for column address decoding and accessing of column cell sets of the column cell sets other than the column cell set. In other features, the method further includes pre-charging the bit lines and performing row address decoding for each read cycle when in a first mode.
When in the second mode, the method further includes precharging the bit lines and performing row address decoding for a first read cycle and refraining from precharging the bit lines and refraining from performing row address decoding for a second read cycle. When in the second mode, an address counter is enabled that is associated with pointing a column address to a next location after each column access. When the column address is associated with an end of a word, the address counter is reset and access to memory is permitted for a next discrete or sequential read access. [0032] In other features, the method further includes precharging the bit lines when accessing a first set of cells of the array. The method further includes refraining from precharging the bit lines when accessing a second set of cells of the array. [0033] In other features, the method further includes selecting the second cell via a column select signal and generating a latch signal to latch bit information in the second cell based on the column select signal. In other features, the method further includes latching bit information in the first and second cells. In other features, the method further includes detecting and amplifying bit information in the first and second cells. In other features, the method further includes generating a word line signal via a row decoder to access a first set of cells. In further features, the word line signal includes an extended period to increase bit line signal separation and to compensate for leakage. In other features, the extended period increases bit line separation time and sets bit line separation at a predetermined voltage. In other features, the extended period is based on a predetermined number of read cycles. In other features, the method further includes generating bit line signals via a column decoder to access a first set of cells. [0035] In still other features, the method further includes precharging bit lines after a word line extended period.
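The second-mode address counter behavior described above can be sketched in code. This is a hypothetical Python model (the class name and the four-column word length are assumptions for illustration), not circuitry from the disclosure.

```python
# Minimal sketch of the second-mode address counter: it points the column
# address at the next location after each access and is reset at the end
# of a word, after which the next discrete or sequential access may begin.
class ColumnAddressCounter:
    def __init__(self, word_length=4):
        self.word_length = word_length  # columns per word (assumed value)
        self.column = 0

    def access(self):
        """Return (column, end_of_word), then advance the pointer.

        `end_of_word` tells the caller the counter has been reset and a
        next discrete or sequential read access is permitted."""
        col = self.column
        self.column += 1
        end_of_word = self.column == self.word_length
        if end_of_word:
            self.column = 0  # reset at the end of the word
        return col, end_of_word
```

For example, four consecutive accesses on a fresh counter yield columns 0 through 3, with the end-of-word flag raised on the fourth access.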
In other features, the method further includes precharging the bit lines once for multiple cell access cycles. In yet other features, the method further includes receiving a precharge signal via a sense-amplifier to precharge common lines before latching bit information between accesses to sets of cells. In other features, a processor is provided and includes a cache memory with an array of cells, word lines, and bit lines. Control means for enabling a word line of the word lines to access a first cell in the word line is included. The control means disables the word line and maintains the word line in a disabled state to access a second cell in the word line. In other features, the control means accesses multiple cells in the array through separate cycled selection of the bit lines and generates a single word line pulse associated with one of the word lines. In other features, the control means operates in a discrete read mode and a sequential read mode. In other features, the control means generates a sequential read signal to enable a sequential read mode and generates a word line signal based on the sequential read signal. In other features, the control means generates a sequential read signal to enable a sequential read mode and precharges the bit lines based on the sequential read signal. In other features, the control means, when in a discrete mode, performs row address decoding for each cycle associated with access of each of the cells. In still other features, the control means, when in a sequential mode, performs address decoding for a single row and enables a word line for column cell sets. The control means selects, senses and amplifies a column cell set of the column cell sets for column address decoding. The control means disables the word line for column address decoding and access of column cell sets of the column cell sets other than the column cell set. In yet other features, the control means operates in a first mode and a second mode.
The control means, when in the first mode, pre-charges the bit lines and performs row address decoding for each read cycle. The control means, when in the second mode, precharges the bit lines and performs row address decoding for a first read cycle and does not precharge the bit lines and does not perform row address decoding for a second read cycle. The control means, when in the second mode, enables an address counter that is associated with pointing a column address to a next location after each column access. The address counter, when the column address is associated with an end of a word, is reset and memory access is permitted for a next discrete or sequential read access. In further features, the control means precharges the bit lines when accessing a first set of cells of the array and does not precharge the bit lines when accessing a second set of cells of the array. [0045] In other features, the cache memory includes at least one of an instruction cache and a static random access memory (SRAM). In other features, column decoding means for selecting the second cell via a column select signal is further included. The control means generates a latch signal to latch bit information in the second cell based on the column select signal. In other features, latching means for latching bit information in the first and second cells is further included. In other features, sensing-amplification means for detecting and amplifying bit information in the first and second cells is further included. In other features, the control means generates a word line signal via a row decoder to access a first set of cells. In yet other features, the word line signal includes an extended period to increase bit line signal separation and to compensate for leakage. In other features, the extended period increases bit line separation time and sets bit line separation at a predetermined voltage. In other features, the extended period is based on a predetermined number of read cycles.
In other features, the control means generates bit line signals via a column decoder to access a first set of cells. In still other features, the control means precharges multiple bit lines after a word line extended period. In other features, the control means precharges the bit lines once for multiple cell access cycles. In other features, sense-amplifier means for receiving a precharge signal to precharge common lines of the sense-amplifier means prior to latching bit information between accesses to sets of cells is further included. In other features, an integrated circuit is provided that includes the processor. In other features, the integrated circuit further includes an external memory that is in communication with the processor. In other features, a cellular phone is provided and includes the processor. In other features, the cellular phone further includes an external memory that is in communication with the processor. [0052] In other features, a communication system is provided that includes the processor. In other features, the communication system further includes an external memory that is in communication with the processor. [0053] In further features, a processor is provided and includes a cache memory with an array of cells, word lines, and bit lines. A control module accesses cells associated with instructions stored in the cache memory during access cycles. The control module precharges the bit lines when accessing a first set of cells of the array and does not precharge the bit lines when accessing a second set of cells of the array. In still other features, the control module enables a word line of the cache memory to access a first cell in the word line.
The control module disables the word line and maintains the word line in a disabled state when accessing a second cell in the word line. In yet other features, the control module accesses multiple cells in the array through separate cycled selection of the bit lines and generates a single word line pulse associated with one of the word lines. In other features, the control module operates in a discrete read mode and a sequential read mode. In other features, the control module generates a sequential read signal to enable a sequential read mode and generates a word line signal based on the sequential read signal. In other features, the control module generates a sequential read signal to enable a sequential read mode and precharges the bit lines based on the sequential read signal. In other features, the control module, when in a discrete mode, performs row address decoding for each cycle associated with access of each of the cells. In other features, the control module, when in a sequential mode, performs row address decoding for the first cell and maintains the word line in a disabled state to access the second cell. In still other features, the control module operates in a first mode and a second mode. The control module, when in the first mode, pre-charges the bit lines and performs row address decoding for each read cycle. The control module, when in the second mode, precharges the bit lines and performs row address decoding for a first read cycle and does not precharge the bit lines and does not perform row address decoding for a second read cycle. In yet other features, the cache memory includes at least one of an instruction cache and a static random access memory (SRAM). [0062] In further features, a column decoder that selects the second cell via a column select signal is further included. The control module generates a latch signal to latch bit information in the second cell based on the column select signal.
In other features, latches are further included that latch bit information in the first and second cells. In other features, a sensing-amplification module is further included that detects and amplifies bit information in the first and second cells. In other features, a row decoder is further included. The control module generates a word line signal via the row decoder to access the first cell. In other features, the word line signal includes an extended period to increase bit line signal separation. In other features, the extended period increases bit line separation time and sets bit line separation at a predetermined voltage. In other features, the extended period is based on a predetermined number of read cycles. In other features, a column decoder is further included. The control module generates bit line signals via the column decoder to access the first cell. In other features, the control module precharges bit lines after a word line extended period. In other features, the control module precharges the bit lines once for multiple cell access cycles. [0066] In still other features, a sense-amplifier is further included that has common lines and that receives a precharge signal to precharge the common lines before latching bit information in the first and second set of cells. [0067] In other features, an integrated circuit is provided and includes the processor. In other features, the integrated circuit further includes an external memory in communication with the processor. [0068] In yet other features, a communication device is provided and includes the processor. In other features, the communication device further includes an external memory in communication with the processor. In further features, a method is provided and includes providing a cache memory with an array of cells, word lines, and bit lines. The method includes accessing cells associated with instructions stored in the cache memory during access cycles.
The bit lines are precharged when accessing a first set of cells of the array. Precharging of the bit lines is not performed when accessing a second set of cells of the array. [0070] In other features, the method further includes enabling a word line of the cache memory to access a first cell in the word line. The word line is disabled and is maintained in a disabled state when accessing a second cell in the word line. In other features, the method further includes accessing multiple cells in the array through separate cycled selection of the bit lines. A single word line pulse associated with one of the word lines is generated. In other features, the method further includes operating in a discrete read mode and a sequential read mode. In other features, the method further includes generating a sequential read signal to enable a sequential read mode. A word line signal based on the sequential read signal is generated. In other features, the method further includes generating a sequential read signal to enable a sequential read mode. The bit lines are precharged based on the sequential read signal. [0074] In further features, the method further includes performing row address decoding for each cycle associated with access of each of the cells when in a discrete mode. In yet other features, the method further includes performing row address decoding for the first cell. The word line is maintained in a disabled state to access the second cell when in a sequential mode. In still other features, the method further includes pre-charging the bit lines and performing row address decoding for each read cycle when in a first mode. When in a second mode, the bit lines are precharged and row address decoding is performed for a first read cycle and precharging of the bit lines and row address decoding is not performed for a second read cycle. In other features, the method further includes selecting the second cell via a column select signal.
A latch signal is generated to latch bit information in the second cell based on the column select signal. In other features, the method further includes latching bit information in the first and second cells. In other features, the method further includes detecting and amplifying bit information in the first and second cells. In other features, the method further includes generating a word line signal via a row decoder to access the first cell. In other features, the word line signal includes an extended period to increase bit line signal separation. In other features, the extended period increases bit line separation time and sets bit line separation at a predetermined voltage. In other features, the extended period is based on a predetermined number of read cycles. In other features, the method further includes generating bit line signals via the column decoder to access the first cell. In other features, the method further includes precharging bit lines after a word line extended period. In other features, the method further includes precharging the bit lines once for multiple cell access cycles. In further features, the method further includes receiving a precharge signal to precharge common lines of a sense-amplifier prior to latching bit information in the first and second set of cells. [0081] In other features, a processor is provided and includes a cache memory with an array of cells, word lines, and bit lines. The control means accesses cells associated with instructions stored in the cache memory during access cycles. The control means precharges the bit lines when accessing a first set of cells of the array and does not precharge the bit lines when accessing a second set of cells of the array. In still other features, the control means enables a word line of the cache memory to access a first cell in the word line.
The control means disables the word line and maintains the word line in a disabled state when accessing a second cell in the word line. In yet other features, the control means accesses multiple cells in the array through separate cycled selection of the bit lines and generates a single word line pulse associated with one of the word lines. In other features, the control means operates in a discrete read mode and a sequential read mode. In other features, the control means generates a sequential read signal to enable a sequential read mode and generates a word line signal based on the sequential read signal. [0085] In other features, the control means generates a sequential read signal to enable a sequential read mode and precharges the bit lines based on the sequential read signal. In other features, the control means, when in a discrete mode, performs row address decoding for each cycle associated with access of each of the cells. In other features, the control means, when in a sequential mode, performs row address decoding for the first cell and maintains the word line in a disabled state to access the second cell. In other features, the control means operates in a first mode and a second mode. The control means, when in the first mode, pre-charges the bit lines and performs row address decoding for each read cycle. The control means, when in the second mode, precharges the bit lines and performs row address decoding for a first read cycle and does not precharge the bit lines or perform row address decoding for a second read cycle. [0089] In still other features, the cache memory includes at least one of an instruction cache and a static random access memory (SRAM). In further features, column decoding means for selecting the second cell via a column select signal is included. The control means generates a latch signal to latch bit information in the second cell based on the column select signal.
In other features, latching means for latching bit information in the first and second cells is included. In other features, sensing-amplification means for detecting and amplifying bit information in the first and second cells is included. In other features, the control means generates a word line signal via a row decoder to access the first cell. In yet other features, the word line signal includes an extended period to increase bit line signal separation. In other features, the extended period increases bit line separation time and sets bit line separation at a predetermined voltage. In other features, the extended period is based on a predetermined number of read cycles. In other features, the control means generates bit line signals via a column decoder to access the first cell. In other features, the control means precharges bit lines after a word line extended period. In other features, the control means precharges the bit lines once for multiple cell access cycles. In other features, sense-amplifier means is included for receiving a precharge signal to precharge common lines of the sense-amplifier means prior to latching bit information in the first and second set of cells. [0094] In other features, an integrated circuit is provided and includes the processor. In other features, the integrated circuit further includes an external memory in communication with the processor. In other features, a communication device is provided and includes the processor. In other features, the communication device further includes an external memory in communication with the processor. In still other features, the systems and methods described above are implemented by a computer program executed by one or more processors.
The computer program can reside on a computer readable medium such as but not limited to memory, non-volatile data storage, and/or other suitable tangible storage media. Further areas of applicability of the present disclosure will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the disclosure, are intended for purposes of illustration only and are not intended to limit the scope of the disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure will become more fully understood from the detailed description and the accompanying drawings, wherein:

FIG. 1 is a signal timing diagram illustrating operation of a discrete read access system for a processor memory;
[00100] FIG. 2 is a functional block diagram of a cellular phone incorporating a phone processor with a multi-mode accessing control module in accordance with an embodiment of the present disclosure;
[00101] FIG. 3 is a functional block diagram of a multi-mode processor in accordance with an embodiment of the present disclosure;
[00102] FIG. 4 is a block and schematic diagram of a portion of the processor of FIG. 3;
[00103] FIG. 5 is an exemplary storage cell and corresponding bit line precharge circuit in accordance with an embodiment of the present disclosure;
[00104] FIG. 6 is a schematic diagram of a sense-amplifier circuit and corresponding sense-amplifier precharge circuit in accordance with an embodiment of the present disclosure;
[00105] FIG. 7 is a flow diagram illustrating a method of operating a multi-mode processor in accordance with an embodiment of the present disclosure;
[00106] FIG. 8 is a signal timing diagram illustrating operation of the multi-mode processor during a sequential read mode of FIG. 7 in accordance with an embodiment of the present disclosure;
[00107] FIG. 9A is a functional block diagram of a hard disk drive;
[00108] FIG. 9B is a functional block diagram of a DVD drive;
[00109] FIG. 9C is a functional block diagram of a high definition television;
[00110] FIG. 9D is a functional block diagram of a vehicle control system;
[00111] FIG. 9E is a functional block diagram of a set top box; and
[00112] FIG. 9F is a functional block diagram of a mobile device.

DETAILED DESCRIPTION

[00113] The following description is merely exemplary in nature and is in no way intended to limit the disclosure, its application, or uses. For purposes of clarity, the same reference numbers will be used in the drawings to identify similar elements. As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A or B or C), using a non-exclusive logical or. It should be understood that steps within a method may be executed in different order without altering the principles of the present disclosure. [00114] As used herein, the terms processor and module may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a shared, dedicated, or group processor and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. [00115] Also, in the following description the terms assert and assertion may refer to the generation of a pulse or the transitioning of a signal line from an inactive state to an active state. For example, a signal line may be transitioned from a LOW state to a HIGH state. The terms assert and assertion may also refer to the enabling of, by providing power to, one or more cells or cell lines for cell selection. The cell lines may include word lines and bit lines. [00116] Traditionally, when reading a word of instructions from a cache memory of a processor, cells associated with each instruction (instruction cell set) of that word are individually accessed and latched.
A word may include multiple instructions and be located along a single word line. Multiple read cycles are executed to access each instruction cell set on that word line. Each read cycle includes toggling of a row path and multiple column paths associated with a particular instruction. The toggling includes decoding row and column addresses, generating a word line signal, precharging bit lines associated with the word of interest, and sensing-amplification and latching of cell bits. Each word line may include, for example, four (4) to eight (8) instructions. This is illustrated and further described with the timing diagram of FIG. 1. [00117] Referring now to FIG. 1, a signal timing diagram illustrating operation of a discrete read access system for a processor memory is shown. The timing diagram includes multiple signals that are based on a clock signal 10. The timing diagram includes a word line signal 12, first and second bit line signals (voltage levels of bit lines) 14, 16, a sense-amplification signal 18, column select signals 20, and an instruction output signal 22. During read access of a word of instructions on a processor memory, the word line signal 12 is asserted, as shown by word line pulses 24. A word line pulse 24 is generated for each cell access cycle, such as a read cycle. Each read cycle includes accessing and latching bit information in cells associated with a particular instruction. The word line signal pulses 24 are generated based on the rising edges 26 of the clock signal 10, as denoted by arrows 27. As shown, a word line signal pulse is generated for each clock pulse 28. [00118] Activation of a word line causes bit line separation between voltage levels of bit lines. An example of bit line separation is shown and denoted by varying gap 30 between the bit line signals 14, 16. Bit line separation, which is equal to a difference in voltage between bit lines of a cell, increases with the amount of time that the word line is enabled.
Increase in bit line separation is shown by declined ramp portion 32 of the second bit line signal 16 relative to the first bit line signal 14. Bit lines associated with the word of instructions are precharged prior to the generation of a first word line signal pulse and after each word line signal pulse during a deactivation state. Bit line separation is returned to a zero separation state when precharged and upon disabling of the word line signal 12, illustrated by falling edges 38 of the word line signal 12. A decrease in bit line separation is shown by inclined ramp portions 40 of the second bit line signal 16 relative to the first bit line signal 14. [00119] The sense-amplification signal 18 is generated to initiate acquiring, amplifying, and latching of bit information stored in a cell array. The sense amplification signal 18 is generated based on the rising edges 26, as denoted by arrows 41. A column or columns of a cell array that are associated with an instruction are selected. The selection may occur simultaneously with the generation of a word line signal or signal pulse. Five column selection signals are shown, which represent the individual selection of column sets associated with five instructions. Various numbers of instructions may be acquired. The sense-amplification signal 18 is generated to detect bit line separation for the selected cells, which provides bit information.[00120] The sense-amplification signal 18 is generated with the falling edges of the word line signal 12 and the column selection signals 20. The bit information for each cell in a set of cells is latched and provided as the instruction output signal 22 based on rising edges 42 of the sense-amplification signal 18, as denoted by arrows 44. 
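The separation behavior described for FIG. 1 can be approximated with a toy model in which the gap between the two bit line voltages grows while the word line is enabled and collapses to zero on precharge. This is only an illustration of the described waveform shape, not part of the disclosure; the ramp rate and step granularity are arbitrary assumptions.

```python
# Toy model of bit line separation per FIG. 1: the gap between the
# two bit line voltages ramps up while the word line is enabled and
# returns to zero when the bit lines are precharged. The ramp rate
# is an arbitrary assumption for illustration.

def separation_trace(events, rate=0.1):
    """events: list of 'enable', 'hold', or 'precharge' time steps."""
    gap = 0.0
    trace = []
    for ev in events:
        if ev == "enable":        # word line asserted: gap ramps up
            gap += rate
        elif ev == "precharge":   # both bit lines pulled high: gap collapses
            gap = 0.0
        # 'hold' leaves the gap unchanged for that step
        trace.append(round(gap, 3))
    return trace

# One word line pulse of three steps, then a precharge between cycles.
print(separation_trace(["enable", "enable", "enable", "precharge"]))
```

The trace rises step by step during the pulse (the declined ramp portion 32 in the figure) and snaps back to zero at precharge (the inclined ramp portions 40).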
Four instructions of the instruction output signal 22 are shown and denoted Instructions[0]-Instructions[3]. [00121] As a result of the above-described cell access technique for retrieving cell information for a word of instructions, many instructions on a word line are retrieved and discarded for each read cycle. The term discarded refers to the non-selection and non-latching of bits within asserted cells. Since all of the cells along a word line and/or all of the cells associated with a word of instructions are asserted for each read cycle and since only one instruction is latched per word line assertion, bits of other asserted non-selected cells in that word line are discarded. A significant amount of energy is wasted by re-toggling the same row path and by precharging bit lines to obtain additional instructions in the same word line. The embodiments disclosed below reduce the amount of power needed to retrieve and latch a word of instructions. [00122] Referring to FIG. 2, a functional block diagram of a cellular phone 50 is shown. The cellular phone 50 may be considered a communication system and/or may be part of a communication system and includes a phone processor 52 that has a multi-mode accessing control module 56. The cellular phone 50 includes a power supply 62, a memory 64, a storage device 66, and a cellular network interface 67. The phone processor 52 may be part of or include an ASIC. The phone processor 52 also includes an onboard processor memory 69. The processor memory 69 may for example be an instruction (I)-cache, a static random access memory (SRAM), some other onboard processor memory, or a combination thereof. The control module 56 operates in multiple read modes in association with the processor memory 69. The cellular phone 50 may also include a network interface 68, a microphone 70, an audio output 72 such as a speaker and/or output jack, a display 74, and a user input device 76 such as a keypad and/or pointing device.
If the network interface 68 includes a wireless local area network interface, an antenna (not shown) may be included.[00123] The control module 56 operates in two or more modes, including a first mode or discrete memory access read mode and a second mode or sequential memory access read mode. During the discrete read mode, the control module accesses and latches bits stored in individual cells within the processor memory through respective word line and bit line assertion. In other words, for each read cycle bits associated with a single cell or a single instruction are latched. For each read cycle a word line signal pulse is generated.[00124] During the sequential read mode, the control module 56 accesses and latches bits in multiple cells or in multiple instructions within the processor memory 69 along a word line for a single word line assertion. Put another way, for a single generated word line pulse, a word of instructions may be latched. During the sequential read mode, multiple instructions may be read regardless of order along a word line. Since a word line is asserted once for a word of instructions, a power savings is achieved over the discrete read mode.[00125] The phone processor 52 may receive input signals from the cellular network interface 67, the network interface 68, the microphone 70, and/or the user input device 76. The phone processor 52 may process signals, including encoding, decoding, filtering, and/or formatting, and generate output signals. The output signals may be communicated to one or more of the memory 64, the storage device 66, the cellular network interface 67, the network interface 68, and the audio output 72. [00126] The memory 64 may include random access memory (RAM) and/or nonvolatile memory such as flash memory, semiconductor memory, solid state memory, phase change memory, or multi-state memory, in which each memory cell has more than two states. 
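The contrast between the discrete and sequential read modes described in paragraphs [00123] and [00124] can be sketched as a rough event count for reading one word of N instructions. The counts below are illustrative only: they assume, per paragraph [00121], that every word line pulse asserts the cells of all N instructions but the discrete mode latches only one instruction per pulse.

```python
# Rough event count for reading one word of n instructions, assuming
# (per the discussion of FIG. 1) that each word line pulse asserts all
# n instructions' cells but the discrete mode latches only one
# instruction per pulse. Names and numbers are illustrative.

def access_events(n_instructions, sequential):
    word_line_pulses = 1 if sequential else n_instructions
    precharges = 1 if sequential else n_instructions
    # every pulse asserts all n instructions' cells
    asserted = word_line_pulses * n_instructions
    # exactly n instructions are latched overall; the rest are discarded
    discarded = asserted - n_instructions
    return {"pulses": word_line_pulses,
            "precharges": precharges,
            "discarded": discarded}

print(access_events(4, sequential=False))  # discrete: discarded accesses
print(access_events(4, sequential=True))   # sequential: none discarded
```

For a four-instruction word the discrete mode asserts sixteen instruction cell sets and discards twelve, while the sequential mode asserts each cell set once, which is the source of the power savings noted above.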
The storage device 66 may include an optical storage drive, such as a DVD drive, and/or a hard disk drive (HDD). The power supply 62 provides power to the components of the cellular phone 50. [00127] Referring also to FIG. 3, a functional block diagram of a multi-mode processor 100 is shown. The processor 100 may be used as part of or in replacement of the phone processor 52 of the embodiment of FIG. 2. The processor 100 includes a memory cell array 102, which includes rows and columns of memory cells. The memory cells are accessed through row and column selection. A row is selected by asserting a word line and a column is selected by asserting or precharging a bit line or pair of bit lines. Word line signals are denoted as 104 and bit line signals are denoted as 106. An address and control signal latch 110 receives address information, which is used by a row decoder 112 and a column decoder 114 to select the rows and columns of the memory cell array 102. The address and control signal latch 110, as well as other elements of the processor 100, such as the row decoder 112 and the column decoder 114, may be considered part of the multi-mode control module 56. [00128] The address and control signal latch 110 may include the multi-mode accessing control module 56 and/or a timing control module 116. The address and control signal latch 110 may be in communication with a bus and receive a signal that has data D0-DN that is stored on the memory cell array 102 during a write mode.
The address and control signal latch 110 may also receive a signal-to-quantization ratio (SQR) signal, a write enable signal (WEN), a chip enable signal (CEN), and an output enable signal (OEN) for respective improvement in SQR, enablement of the write mode, operation of the memory cell array, and generation of an output signal.[00129] The processor 100 further includes a bit line precharge circuit 120, a column multiplexer 122, a sense-amplifier/write driver module 124, and a data latch/output buffer module 126. The bit line precharge circuit 120 is used to precharge the bit lines of the memory cell array 102. The bit line precharge circuit 120 may include drivers, buffers, transistors and/or other bit line asserting elements. The bit line precharge circuit 120 may be coupled between the memory cell array 102 and the column multiplexer 122 or may be located on an opposite side of the memory cell array 102 as the column multiplexer 122, as shown by dashed bit line precharge circuit 120'.[00130] During a read mode, the column multiplexer 122 is used to select the columns of the memory cell array 102 for latch purposes via column selection signals 128. After precharging of the bit lines, the column decoder 114, via the column multiplexer 122, selects certain columns. Stored bits, associated with the selected columns, are provided to one or more sense amplifiers of the sense-amplifier/write driver module 124 for amplification prior to reception by the data latch/output buffer module 126. The stored bits are received as bit information signals 130. The sense-amplifier/write driver module 124 receives a read/write mode signal 136, a sense-amplifier (SA) precharge signal 138, and a SA enable signal 140. The read/write mode signal 136 is a command signal for read or write operation. The SA precharge signal 138 and the SA enable signal 140 are generated to initiate and activate SA cells of the sense-amplifier/write driver module 124. 
The amplified data is latched and provided in the form of an output signal 134 by the data latch/output buffers module 126 based on a latch signal 142 from the address and control signal latch 110. [00131] During the write mode, cells in the memory cell array 102 are similarly asserted via the row decoder 112 and the column decoder 114. The received data D0-DN is provided to the bit lines via write drivers in the sense-amplifier/write driver module 124. [00132] Referring also to FIG. 4, a block and schematic diagram of a portion of the processor 100 is shown. The memory cell array 102 includes cells 150, which each store a bit of information. The cells 150 are asserted via word lines 152 by the row decoder 112 and via bit lines 154 by the column decoder 114 and the bit line precharge circuit 120. Row decoding and column decoding is based on an address input signal 155. Each of the cells 150 has an associated first bit line and a second bit line, such as first bit lines 156 and second bit lines 158 for cells 160, 162, respectively. The first bit lines 156 are coupled together by a first common line 164 through respective transistors 166 of the column multiplexer 122. The second bit lines 158 are coupled together at a second common line 168 through respective transistors 170 of the column multiplexer 122. The column decoder 114 selects the cells 150 via the column multiplexer 122. The column multiplexer 122 may include transistors, as shown, or other bit line selection devices. The transistors may include p-channel metal-oxide-semiconductor field-effect (PMOS) transistors, as shown, or other transistors. [00133] For the example embodiment shown, a sense-amplifier (SA) 180, a latch (shown as a D-flip flop) 182, and a write driver 184 are included. The SA 180 is coupled to the column multiplexer 122. The SA 180 and the write driver 184 are part of the sense-amplifier/write driver module 124.
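The read path of FIG. 4 (row decode, column multiplexing onto the common lines, sense amplification, latching) can be sketched behaviorally. This is a minimal functional model, not a circuit simulation; all function names and voltage values are illustrative assumptions.

```python
# Minimal behavioral sketch of the FIG. 4 read path: the row decoder
# asserts a word line, the column multiplexer routes one cell's bit
# line pair onto the common lines, the sense amplifier resolves the
# differential, and the latch captures the resulting bit.

def sense_amplifier(bit_a, bit_b):
    """Resolve the differential on the SA common lines into a bit."""
    return 1 if bit_a > bit_b else 0

def read_cell(array, row, col):
    word = array[row]      # row decoder: assert word line `row`
    stored = word[col]     # column mux: select bit line pair `col`
    # a stored 1 keeps the true bit line high and discharges the
    # complement line; a stored 0 does the opposite
    bit_a, bit_b = (1.0, 0.0) if stored else (0.0, 1.0)
    return sense_amplifier(bit_a, bit_b)  # latch captures the SA output

cell_array = [[1, 0, 1, 1],
              [0, 0, 1, 0]]
print([read_cell(cell_array, 0, c) for c in range(4)])
```

Reading the four columns of row 0 recovers the stored word bit by bit, mirroring the separate cycled column selections described for the sequential read mode.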
The SA 180 includes first and second input lines 186, 188, a SA enable input 190, a SA precharge input 192, and a SA output 194. The first input line 186 is coupled to the first common line 164 and the second input line 188 is coupled to the second common line 168. The first common line 164 and the second common line 168 have SA bit A and SA bit B signals, respectively. The SA enable input 190 and the SA precharge input 192 receive the SA enable signal 140 and the SA precharge signal 138, which may be generated by the control module 56 and/or the address and control signal latch 110. [00134] Information on selected bit lines is provided via the column multiplexer 122 and detected and amplified by the SA 180. An SA output signal 196 from the SA output 194 is provided to the latch 182 at terminal D. Data at terminal D is latched and provided to data output terminal Q of the latch 182 and outputted as a data output signal 198. The data is latched based on the received latch signal 142. The received latch signal 142 may be generated by the control module 56 and/or the address and control signal latch 110. The SA 180 receives the SA precharge signal 138 and asserts the SA input lines 186, 188 (best seen in FIG. 5). [00135] The latch 182 and the write driver 184 are respectively used for output and writing purposes. The latch 182 may be a D-flip flop as shown or some other latching device. The latch 182 acquires data on the SA output 194 and may be part of the data latch/output buffer module 126. The write driver 184 receives a data input signal 200 and provides data, which may be amplified, on the common lines 164, 168. From the common lines 164, 168 the data may be provided to the appropriate column of bit lines. [00136] Referring to FIG. 5, an exemplary storage cell 210 and a corresponding bit line precharge circuit 212 is shown.
The cell 210 is provided to illustrate one example configuration of a cell, which may be incorporated in the memory cell array 102 described above. Other configurations may be used. The cell 210, as shown, includes four storage transistors M1-M4 and two access transistors M5, M6. The four storage transistors M1-M4 form two cross-coupled inverters that store a bit of information. The access transistors M5, M6 control access to the four storage transistors M1-M4 during read and write operations. The four storage transistors M1-M4 serve as a storage cell. The precharge circuit 212 includes transistors M7, M8, M9. The transistors M1-M9 may be PMOS or n-channel MOSFET (NMOS) transistors, as shown, or other transistors. In the embodiment shown, the transistors M2, M4, and M7-M9 are PMOS transistors and the transistors M1, M3, M5, M6 are NMOS transistors. The transistors M1-M9 have respective source terminals MS1-MS9, drain terminals MD1-MD9, and gate terminals MG1-MG9. [00137] The cell 210 has a word line 214 and may have the first and second bit lines 156, 158. The first and second transistors M1, M2 are coupled in series and in parallel to the third and fourth transistors M3, M4, which are also coupled in series. The source terminals MS2, MS4 are coupled to a positive power source terminal Vdd. The drain terminals MD2, MD4 are coupled to source terminals MS1, MS3. The gate terminals MG1, MG2 are coupled together and to source terminal MS3. The gate terminals MG3, MG4 are coupled together and to the drain terminal MD2. The drain terminals MD1, MD3 are coupled to a negative power source terminal Vss. The source terminal MS5 is coupled to the drain terminal MD2. The source terminal MS6 is coupled to the drain terminal MD4. The source terminal MS5 and the drain terminal MD7 are coupled together and to the first bit line 156. The source terminal MS6 and the drain terminal MD9 are coupled together and to the second bit line 158.
[00138] Capacitance devices 220, 222 are shown and represent respective capacitance of bit line storage circuits associated with the bit lines 156, 158. The capacitance devices 220, 222 may be discrete storage capacitors, as shown, or may represent capacitance measured at each of the bit lines 156, 158 relative to points of reference. [00139] The bit line precharge circuit 212 receives a bit line precharge signal 224 via the bit line precharge input 226, which is provided to the gates MG7-MG9. The sources MS7, MS9 are coupled to the power source terminal Vdd. The drain MD7 is coupled to the source MS8 and the drain MD9 is coupled to the drain MD8. [00140] Access to the cell 210 is enabled by assertion of the word line 214, which controls the access transistors M5, M6. In general, the voltage potential of the second bit line 158 is an inverse of the voltage potential of the first bit line 156. The cell 210 has three modes of operation: standby, read and write. Bit values, such as a zero (0) and a one (1), are stored at locations denoted Q and Q̄. During standby mode the word line 214 is not asserted and the transistors M1-M4 reinforce each other. [00141] During the read mode, a read cycle is started by precharging both of the bit lines 156, 158. The word line 214 is then asserted, thereby enabling the transistors M5, M6. The stored values Q and Q̄ are transferred to the bit lines 156, 158 by maintaining charge on one of the bit lines and discharging the other bit line. The bit line for which charge is maintained is pulled to Vdd. The bit line that is discharged is pulled to ground. [00142] During the write mode, a value to be written is applied to the bit lines 156, 158. The word line 214 is then asserted and the value to be stored is latched into the cell 210. Write drivers override the previous state of the cross-coupled inverters. [00143] Referring to FIG. 6, a schematic diagram of a SA circuit 230 including a SA precharge circuit 232 is shown.
The SA circuit 230 and the SA precharge circuit 232 may be used as part of, or in replacement of, the SA 180. The SA circuit 230 includes four transistors T1-T4, which form a SA cell 234 and are cross-coupled, and a fifth transistor T5. The SA precharge circuit 232 includes three transistors T6-T8. The transistors T1-T8 may be PMOS or n-channel MOSFET (NMOS) transistors, as shown, or other transistors. In the embodiment shown, the transistors T2, T4, and T6-T8 are PMOS transistors and the transistors T1, T3, T5 are NMOS transistors. Each of the transistors T1-T8 has respective source terminals TS1-TS8, drain terminals TD1-TD8, and gate terminals TG1-TG8. [00144] The first and second transistors T1, T2 are coupled in series and in parallel to the third and fourth transistors T3, T4, which are also coupled in series. The source terminals TS2, TS4 are coupled to a positive power source terminal Vdd. The drain terminals TD2, TD4 are coupled to the source terminals TS1, TS3. The gate terminals TG1, TG2 are coupled together and to the source terminal TS3. The gate terminals TG3, TG4 are coupled together and to the drain terminal TD2. The drain terminals TD2, TD4 may be respectively coupled to the common lines 186, 188, which may be referred to as SA common lines. The drain terminals TD1, TD3 are coupled to the source terminal TS5. The gate terminal TG5 may be coupled to the SA enable input 190. The drain terminal TD5 is coupled to a negative power source terminal Vss. The drain terminals TD2, TD6 are coupled together. The drain terminals TD4, TD8 are coupled together. [00145] The SA precharge circuit 232 receives the SA precharge signal 138 via the SA precharge input 192, which is provided to the gate terminals TG6-TG8. The source terminals TS6, TS8 are coupled to the power source terminal Vdd.
The drain terminal TD6 is coupled to the source terminal TS7 and the drain terminal TD8 is coupled to the drain terminal TD7. [00146] Inverters 240, 242 are coupled to the common lines 186, 188. One of the common lines 186, 188 is provided to the data input D of the latch 182. Although the second common line 188 is shown as being coupled to the data input D, the first common line 186 may be coupled to the data input D. [00147] Referring to FIGs. 7 and 8, a flow diagram illustrating a method of operating a multi-mode processor and a signal timing diagram illustrating operation of the multi-mode processor during a sequential read mode are shown. The timing diagram includes multiple signals that are based on a clock signal 300 and a sequential read signal 302. The sequential read signal 302 is indicative of a sequential read mode. For the example shown, when the sequential read signal 302 is HIGH, an associated multi-mode processor is operated in a sequential read mode; otherwise the processor is operated in a discrete read mode. The timing diagram includes a word line signal 304, bit line signals (voltage levels) 306, 308, a SA enable signal 310, a SA bit A signal 312, a SA bit B signal 314, column select signals 316 and an instruction output signal 318. Although several of the steps of the method of FIG. 7 are described below with respect to the timing diagram and embodiment of FIG. 8, the method may be modified to apply to other timing diagrams and/or embodiments of the present disclosure. [00148] In step 400, a read signal is generated to read a word of instructions from a processor memory, such as the processor memory 69. In step 401, received addresses for the word of instructions are row and column decoded, such as by the row and column decoders 112, 114. [00149] In step 402, the processor determines whether two or more instructions, which are to be read, are located along a single word line.
When two or more instructions are located along a single word line, the processor or associated control module, such as the multi-mode accessing control module 56, proceeds to step 404. When the processor is reading a single instruction along a word line, the processor proceeds to step 440. [00150] In step 403, the processor or control module generates the sequential read signal 302, illustrated by rising edge 320. [00151] In step 404, the processor prepares for a sequential read. Between the rising edge 320 and a rising edge 322 of the clock signal 300, the processor may perform tasks to prepare for the sequential read. The tasks may include setting parameters for generation of a single extended word line signal, the generation of SA precharge signals for read cycles, precharging of bit lines, precharging of common lines, or other tasks. One or more of the preparation tasks stated may not be performed. [00152] In step 405, the processor or control module generates a discrete read signal. [00153] In step 406, an instruction counter is initialized. After performance of step 405 or 406, step 407 is performed. [00154] In step 407, bit lines for cells along a word line and associated with the word of instructions are precharged. [00155] In step 408, the processor generates the word line signal 304, illustrated by rising edge 324 of the word line signal 304, in the middle of a clock pulse 326 based on the row decoded addresses. The word line signal 304 is in the form of a pulse, which is generated for a first instruction, Instruction [0]. The word line signal 304 is not generated for subsequent instructions, which are accessed along the same or a single word line. The word line signal 304 remains in an active or HIGH state until after detection of a falling edge 328 of the clock signal 300 and until approximately the middle of a subsequent LOW clock signal state. This provides an extended active word cycle, which increases bit line separation.
An extended period of the word line signal 304 is denoted as E1. [00156] In order to accurately read bits from a cell array, a minimum bit line separation is provided. The minimum separation may be approximately equal to or greater than 100mV. In one embodiment, the extended active word cycle is set to allow for a bit line separation approximately equal to the minimum separation plus at least 30mV, as denoted by maximum bit line separation BLmax. In another embodiment, the extended word cycle is set to allow for bit line separation of approximately 150mV. The extended word cycle is directly related to the number of read cycles for a given word of instructions or the number of instructions read for the generated word line signal 304. The additional separation aids in assuring that an accurate read occurs for each read cycle. [00157] In step 410, with the generation of the word line signal 304, bit line separation begins and thus, voltage potential across bit line pairs increases. The maximum bit line separation BLmax occurs approximately with a falling edge 330 of the word line signal 304. The generation of the word line signal causes bit line separation between voltage levels of bit lines. [00158] In step 412, a column selection signal, such as one of the column selection signals 316, is generated to select one or more columns or pairs of bit lines associated with an instruction. The column selection signal is generated based on the column decoded addresses. The column selection signal may be provided to a column multiplexer, such as the column multiplexer 122, for selection of the appropriate bit lines. The selection may occur simultaneously with and/or during the same time period as the generation of the word line signal. In step 414, with the generation of the column selection signal, voltage potential of the common lines begins to separate. [00159] In step 416, the SA enable signal 310 is generated.
SA pulses 332 are generated based on rising edges of the clock signal 300, as denoted by arrows 334. The SA enable signal 310 activates a SA cell. For example, the SA enable signal 310 may activate the fifth transistor T5, which enables current flow through the SA cell and detection and amplification of SA bit A and/or SA bit B signals 312, 314. The SA bit A and/or SA bit B values are detected and amplified for each of the selected cells. For each cell, a first common line is pulled to voltage potential Vdd and a second common line is pulled to ground. [00160] In step 418, the instruction output signal 318 is generated, which includes data from each of the selected cells for the current instruction. Each instruction portion of the instruction output signal 318 is generated based on the rising edges 370 of the SA enable signal 310, as denoted by arrows 372. Either a SA bit A or a SA bit B value is provided to a latch for each of the selected cells. The SA bit A or SA bit B signals 312, 314 may be inverted prior to being received by the latch. A latch signal may be generated to latch the SA bit A or SA bit B values to generate the instruction output signal. [00161] In step 419, the word line signal 304 is deactivated or transitioned from a HIGH state to a LOW state. The deactivation of the word line signal 304 causes the potential of the bit lines to drift as a result of leakage. Over time and read cycles the potential across the bit lines decreases. In the example shown, a voltage potential of a first bit line decreases by approximately 2mV over a 40ns period and relative to a first original state of the first bit line, as denoted by bit line drift BLD. The voltage potential of a second bit line increases toward the first bit line.
The extended active word cycle provides enough bit line separation to assure that there is sufficient bit line separation during a last read cycle along a word line. [00162] When multiple instructions are located along a single word line, step 420 is performed. When the processor is reading a single instruction along a word line, control may end. [00163] In step 420, the instruction counter is incremented by one (1). In step 421, the column address may be advanced. The column address may be advanced linearly, successively, or in an interleaved fashion. In step 422, control determines whether the instruction counter is greater than a maximum instruction counter value. The maximum instruction counter value may be a predetermined and/or stored value. When the instruction counter is not greater than the maximum instruction counter value then step 424 is performed, otherwise control may end. [00164] In step 423, upon detection of a rising edge of the next clock cycle, the SA enable signal 310 is transitioned from a HIGH state to a LOW state and a SA precharge signal is generated to precharge the SA common lines. The precharge for each cycle is shown by the rising edges 350 of the SA bit B signal 314 of one of the SA common lines. The rising edges 350 are based on the rising edges of the clock signal 300, as denoted by arrows 352. This illustrates the potential between the SA common lines decreasing. Note that the energy used to precharge the SA common lines is less than the energy used to precharge the bit lines. For example, the energy to precharge the SA common lines may be approximately 1/10 of that used to precharge the bit lines. Thus, energy is saved in performing a SA precharge for each read cycle, as opposed to performing a bit line precharge for each read cycle. [00165] In step 424, column decoding is performed to determine bit lines associated with a next instruction. Step 424 may be performed simultaneously with or during the same time period as step 422.
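The control flow of the sequential read mode described in the preceding steps can be sketched in a few lines of code: the bit lines are precharged once and the word line is pulsed once for a whole word of instructions, while only the cheaper SA common-line precharge is repeated per read cycle. This is a hedged illustration; the function name, step mapping, and energy figures are assumptions for exposition, with the 1/10 energy ratio taken from the paragraph above.

```python
# Hedged sketch of the sequential-read loop (steps 404-424): one bit line
# precharge and one extended word line pulse serve several read cycles, and
# only the SA common lines are re-precharged each cycle. Energy values are
# arbitrary illustrative units, with the SA precharge set to ~1/10 of a
# bit line precharge as stated in the description.

BITLINE_PRECHARGE_ENERGY = 10.0
SA_PRECHARGE_ENERGY = 1.0

def sequential_read(word, max_instructions):
    """Read up to max_instructions along one word line; returns
    (instructions_read, energy_spent)."""
    energy = BITLINE_PRECHARGE_ENERGY  # step 407: precharge bit lines once
    # step 408: a single extended word line pulse covers the whole word
    out = []
    counter = 0                        # step 406: initialize the counter
    column = 0
    while True:
        # steps 412-418: column select, SA enable, latch instruction output
        out.append(word[column])
        counter += 1                   # step 420: increment the counter
        column += 1                    # step 421: advance the column address
        if counter > max_instructions - 1 or column >= len(word):  # step 422
            break
        energy += SA_PRECHARGE_ENERGY  # step 423: SA common-line precharge
    return out, energy

instr, e_seq = sequential_read(["ld", "add", "st", "br"], 4)
# A discrete read mode would instead precharge the bit lines every cycle:
e_discrete = 4 * BITLINE_PRECHARGE_ENERGY
print(instr, e_seq, e_discrete)
```

For the four-instruction word above, the sketch spends one bit line precharge plus three SA precharges, versus four full bit line precharges in discrete mode, which is the energy saving the description attributes to the sequential read mode.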
In step 426, a next column selection signal is generated, such as one of the column selection signals 360, and the voltage potential of the SA common lines begins to separate. [00166] In step 428, the SA enable signal 310 is activated. The SA enable signal activates a SA cell. For example, the SA enable signal may activate the fifth transistor T5, which enables current flow through the SA cell and detection and amplification of SA bit A and SA bit B signals 312, 314 for each of the selected cells. [00167] In step 432, the next instruction output signal is generated. Either a SA bit A or a SA bit B value is provided to a latch for each of the selected cells. The SA bit A or SA bit B signals 312, 314 may be inverted prior to being received by the latch. A latch signal may be generated to latch the SA bit A or SA bit B values to generate the instruction output signal. [00168] As an alternative, a SA enable signal and a latch signal may be generated to acquire, amplify, and latch bit information associated with the selected instruction for a current read cycle. The sense-amplification signal may be generated to detect bit line separation for the selected cells, which provides bit information. The sense-amplification signal may be generated with the falling edges of the word line signal and the column selection signal. The bit information for each cell may be latched and provided as an output signal, denoted as instructions in the output signal. [00169] Upon completion of step 432, the processor may return to step 420 and repeat steps 420-432 for a next instruction. [00170] The above-described steps are meant to be illustrative examples; the steps may be performed sequentially, synchronously, simultaneously, or in a different order depending upon the application. [00171] The wireless network devices and systems disclosed herein may comply with IEEE standards, such as 802.11, 802.11a, 802.11b, 802.11g, 802.11h, 802.11n, 802.16, and 802.20.
Also, the embodiments disclosed herein may utilize and/or incorporate Bluetooth devices and techniques.[00172] Referring now to FIGs. 9A-9F, various exemplary implementations incorporating the teachings of the present disclosure are shown.[00173] Referring now to FIG. 9A, the teachings of the disclosure can be implemented in a processor 513 to access memory cells in a processor memory 517 of a hard disk drive (HDD) 500. The HDD 500 includes a hard disk assembly (HDA) 501 and an HDD printed circuit board (PCB) 502. The HDA 501 may include a magnetic medium 503, such as one or more platters that store data, and a read/write device 504. The read/write device 504 may be arranged on an actuator arm 505 and may read and write data on the magnetic medium 503. Additionally, the HDA 501 includes a spindle motor 506 that rotates the magnetic medium 503 and a voice-coil motor (VCM) 507 that actuates the actuator arm 505. A preamplifier device 508 amplifies signals generated by the read/write device 504 during read operations and provides signals to the read/write device 504 during write operations. [00174] The HDD PCB 502 includes a read/write channel module (hereinafter, "read channel") 509, a hard disk controller (HDC) module 510, a buffer 511 , nonvolatile memory 512, the processor 513, and a spindle/VCM driver module 514. The read channel 509 processes data received from and transmitted to the preamplifier device 508. The HDC module 510 controls components of the HDA 501 and communicates with an external device (not shown) via an I/O interface 515. The external device may include a computer, a multimedia device, a mobile computing device, etc. The I/O interface 515 may include wireline and/or wireless communication links. [00175] The HDC module 510 may receive data from the HDA 501 , the read channel 509, the buffer 511 , nonvolatile memory 512, the processor 513, the spindle/VCM driver module 514, and/or the I/O interface 515. 
The processor 513 may process the data, including encoding, decoding, filtering, and/or formatting. The processed data may be output to the HDA 501 , the read channel 509, the buffer 511 , nonvolatile memory 512, the processor 513, the spindle/VCM driver module 514, and/or the I/O interface 515.[00176] The HDC module 510 may use the buffer 511 and/or nonvolatile memory 512 to store data related to the control and operation of the HDD 500. The buffer 511 may include DRAM, SDRAM, etc. The nonvolatile memory 512 may include flash memory (including NAND and NOR flash memory), phase change memory, magnetic RAM, or multi-state memory, in which each memory cell has more than two states. The spindle/VCM driver module 514 controls the spindle motor 506 and the VCM 507. The HDD PCB 502 includes a power supply 516 that provides power to the components of the HDD 500.[00177] Referring now to FIG. 9B, the teachings of the disclosure can be implemented in a processor 524 to access memory cells of a processor memory 537 of a DVD drive 518 or of a CD drive (not shown). The DVD drive 518 includes a DVD PCB 519 and a DVD assembly (DVDA) 520. The DVD PCB 519 includes a DVD control module 521 , a buffer 522, nonvolatile memory 523, the processor 524, a spindle/FM (feed motor) driver module 525, an analog front-end module 526, a write strategy module 527, and a DSP module 528. [00178] The DVD control module 521 controls components of the DVDA520 and communicates with an external device (not shown) via an I/O interface529. The external device may include a computer, a multimedia device, a mobile computing device, etc. The I/O interface 529 may include wireline and/or wireless communication links.[00179] The DVD control module 521 may receive data from the buffer 522, nonvolatile memory 523, the processor 524, the spindle/FM driver module 525, the analog front-end module 526, the write strategy module 527, the DSP module 528, and/or the I/O interface 529. 
The processor 524 may process the data, including encoding, decoding, filtering, and/or formatting. The DSP module 528 performs signal processing, such as video and/or audio coding/decoding. The processed data may be output to the buffer 522, nonvolatile memory 523, the processor 524, the spindle/FM driver module 525, the analog front-end module 526, the write strategy module 527, the DSP module 528, and/or the I/O interface 529.[00180] The DVD control module 521 may use the buffer 522 and/or nonvolatile memory 523 to store data related to the control and operation of the DVD drive 518. The buffer 522 may include DRAM, SDRAM, etc. The nonvolatile memory 523 may include flash memory (including NAND and NOR flash memory), phase change memory, magnetic RAM, or multi-state memory, in which each memory cell has more than two states. The DVD PCB 519 includes a power supply 530 that provides power to the components of the DVD drive 518.[00181] The DVDA 520 may include a preamplifier device 531 , a laser driver 532, and an optical device 533, which may be an optical read/write (ORW) device or an optical read-only (OR) device. A spindle motor 534 rotates an optical storage medium 535, and a feed motor 536 actuates the optical device 533 relative to the optical storage medium 535.[00182] When reading data from the optical storage medium 535, the laser driver provides a read power to the optical device 533. The optical device 533 detects data from the optical storage medium 535, and transmits the data to the preamplifier device 531. The analog front-end module 526 receives data from the preamplifier device 531 and performs such functions as filtering and A/D conversion. To write to the optical storage medium 535, the write strategy module 527 transmits power level and timing data to the laser driver 532. The laser driver 532 controls the optical device 533 to write data to the optical storage medium 535.[00183] Referring now to FIG. 
9C, the teachings of the disclosure can be implemented in a HDTV control module 538 to access memory cells of an internal memory 544 of a high definition television (HDTV) 537. The HDTV 537 includes the HDTV control module 538, a display 539, a power supply 540, memory 541 , a storage device 542, a network interface 543, and an external interface 545. If the network interface 543 includes a wireless local area network interface, an antenna (not shown) may be included.[00184] The HDTV 537 can receive input signals from the network interface 543 and/or the external interface 545, which can send and receive data via cable, broadband Internet, and/or satellite. The HDTV control module 538 may process the input signals, including encoding, decoding, filtering, and/or formatting, and generate output signals. The output signals may be communicated to one or more of the display 539, memory 541 , the storage device 542, the network interface 543, and the external interface 545. [00185] Memory 541 may include random access memory (RAM) and/or nonvolatile memory such as flash memory, phase change memory, or multi-state memory, in which each memory cell has more than two states. The storage device 542 may include an optical storage drive, such as a DVD drive, and/or a hard disk drive (HDD). The HDTV control module 538 communicates externally via the network interface 543 and/or the external interface 545. The power supply 540 provides power to the components of the HDTV 537.[00186] Referring now to FIG. 9D, the teachings of the disclosure may be implemented in a vehicle control module 547 to access memory cells of an internal memory 551 of a vehicle 546. The vehicle 546 may include the vehicle control module 547, a power supply 548, memory 549, a storage device 550, and a network interface 552. If the network interface 552 includes a wireless local area network interface, an antenna (not shown) may be included. 
The vehicle control module 547 may be a powertrain control system, a body control system, an entertainment control system, an anti-lock braking system (ABS), a navigation system, a telematics system, a lane departure system, an adaptive cruise control system, etc. [00187] The vehicle control module 547 may communicate with one or more sensors 554 and generate one or more output signals 556. The sensors 554 may include temperature sensors, acceleration sensors, pressure sensors, rotational sensors, airflow sensors, etc. The output signals 556 may control engine operating parameters, transmission operating parameters, suspension parameters, etc.[00188] The power supply 548 provides power to the components of the vehicle 546. The vehicle control module 547 may store data in memory 549 and/or the storage device 550. Memory 549 may include random access memory (RAM) and/or nonvolatile memory such as flash memory, phase change memory, or multi-state memory, in which each memory cell has more than two states. The storage device 550 may include an optical storage drive, such as a DVD drive, and/or a hard disk drive (HDD). The vehicle control module 547 may communicate externally using the network interface 552.[00189] Referring now to FIG. 9E, the teachings of the disclosure can be implemented in a set top control module 580 to access memory cells of an internal memory 586 of a set top box 578. The set top box 578 includes the set top control module 580, a display 581 , a power supply 582, memory 583, a storage device 584, and a network interface 585. If the network interface 585 includes a wireless local area network interface, an antenna (not shown) may be included.[00190] The set top control module 580 may receive input signals from the network interface 585 and an external interface 587, which can send and receive data via cable, broadband Internet, and/or satellite. 
The set top control module 580 may process signals, including encoding, decoding, filtering, and/or formatting, and generate output signals. The output signals may include audio and/or video signals in standard and/or high definition formats. The output signals may be communicated to the network interface 585 and/or to the display 581. The display 581 may include a television, a projector, and/or a monitor.[00191] The power supply 582 provides power to the components of the set top box 578. Memory 583 may include random access memory (RAM) and/or nonvolatile memory such as flash memory, phase change memory, or multi-state memory, in which each memory cell has more than two states. The storage device 584 may include an optical storage drive, such as a DVD drive, and/or a hard disk drive (HDD).Referring now to FIG. 9F, the teachings of the disclosure can be implemented in a mobile device control module 590 to access memory cells of an internal memory 595 of a mobile device 589. The mobile device 589 may include the mobile device control module 590, a power supply 591 , memory 592, a storage device 593, a network interface 594, and an external interface 599. If the network interface 594 includes a wireless local area network interface, an antenna (not shown) may be included.The mobile device control module 590 may receive input signals from the network interface 594 and/or the external interface 599. The external interface 599 may include USB, infrared, and/or Ethernet. The input signals may include compressed audio and/or video, and may be compliant with the MP3 format. Additionally, the mobile device control module 590 may receive input from a user input 596 such as a keypad, touchpad, or individual buttons. 
The mobile device control module 590 may process input signals, including encoding, decoding, filtering, and/or formatting, and generate output signals.The mobile device control module 590 may output audio signals to an audio output 597 and video signals to a display 598. The audio output 597 may include a speaker and/or an output jack. The display 598 may present a graphical user interface, which may include menus, icons, etc. The power supply 591 provides power to the components of the mobile device 589. Memory 592 may include random access memory (RAM) and/or nonvolatile memory such as flash memory, phase change memory, or multi-state memory, in which each memory cell has more than two states. The storage device 593 may include an optical storage drive, such as a DVD drive, and/or a hard disk drive (HDD). The mobile device may include a personal digital assistant, a media player, a laptop computer, a gaming console, or other mobile computing device.Those skilled in the art can now appreciate from the foregoing description that the broad teachings of the disclosure can be implemented in a variety of forms. Therefore, while this disclosure includes particular examples, the true scope of the disclosure should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, the specification, and the following claims.
Methods and apparatus to perform event handling operations are described. In one embodiment, after an event (such as an architectural event) occurs, the corresponding response (e.g., a yield event) may cause generation of an interrupt. Other embodiments are also described.
CLAIMS What is claimed is: 1. A processor comprising: a first logic to detect whether an architectural event has occurred at a privilege level less restrictive than a user privilege level; and a second logic to cause an interrupt to be generated in response to occurrence of the architectural event. 2. The processor of claim 1, further comprising a storage unit to store one or more bits of data to indicate whether the second logic is to cause generation of the interrupt. 3. The processor of claim 1, wherein the privilege level corresponds to a supervisor privilege level. 4. The processor of claim 1, further comprising one or more channels to store one or more of: a channel identifier field, a yield interrupt enable field, a scenario identifier field, a scenario status field, or an entry valid field. 5. The processor of claim 1, further comprising a storage unit to store event handling data in one or more entries, wherein each entry comprises one or more of: a yield mask field, a mask field, a yield delivery status field, a delivery status field, a yield delivery mode field, a delivery mode field, or a vector field. 6. The processor of claim 1, wherein the interrupt corresponds to a performance monitor interrupt. 7. The processor of claim 1, further comprising a storage unit to store one or more bits of data to indicate whether the second logic is to cause the interrupt to be generated, wherein one of the first logic or the second logic updates a corresponding entry of the storage unit after occurrence of the event. 8. The processor of claim 1, further comprising a memory to store an interrupt service routine that is invoked in response to the interrupt. 9. The processor of claim 1, further comprising an execution unit that comprises the first logic. 10. The processor of claim 1, further comprising one or more channels to store data corresponding to a plurality of the architectural events. 11. 
The processor of claim 1, wherein one or more of the first logic, the second logic, a plurality of processor cores, or a cache are on a same integrated circuit die. 12. A method comprising: generating a signal to indicate an occurrence of an event at a privilege level that is less restrictive than a user privilege level; and causing an interrupt to be generated corresponding to the occurred event in response to the generated signal. 13. The method of claim 12, further comprising defining one or more conditions to monitor at the privilege level. 14. The method of claim 12, further comprising invoking an interrupt service routine in response to the interrupt. 15. The method of claim 12, further comprising accessing a storage unit to determine whether causing the interrupt to be generated is enabled. 16. A computing system comprising: a memory to store event handling data corresponding to one or more events that are to be monitored; a first logic to cause generation of a yield event in response to occurrence of one of the one or more events; and a second logic to generate an interrupt corresponding to the yield event based on event handling data stored in the memory. 17. The system of claim 16, wherein at least one of the one or more monitored events corresponds to an architectural event. 18. The system of claim 16, further comprising one or more channels to store one or more of: a channel identifier field, a yield interrupt enable field, a scenario identifier field, a scenario status field, or an entry valid field. 19. The system of claim 16, wherein the interrupt corresponds to a performance monitor interrupt. 20. The system of claim 16, further comprising an audio device coupled to the memory.
EVENT HANDLING FOR ARCHITECTURAL EVENTS AT HIGH PRIVILEGE LEVELS

BACKGROUND

The present disclosure generally relates to the field of electronics. More particularly, an embodiment of the invention relates to techniques for controlling flow after occurrence of architectural events at high privilege levels in a processor.

Various mechanisms may be used to change the flow of control (such as the processing path or instruction sequence being followed) in a processor. For example, an interrupt may be used to change the flow of control in a processor. Generally, an interrupt may be triggered by an external interrupt signal provided to a processor. The processor may respond to the interrupt by jumping to an interrupt handler routine. In some cases, interrupts may be masked by the operating system executing at a supervisor privilege level, such that a software program executing at a relatively lower privilege level than the operating system may have no opportunity to modify such control flow changing events without modifying the operating system (OS).

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is provided with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items. [0004] Figs. 1, 6, and 7 illustrate block diagrams of embodiments of computing systems, which may be utilized to implement various embodiments discussed herein. Fig. 2 illustrates a block diagram of portions of a processor core and other components of a computing system, according to an embodiment of the invention. Figs. 3 and 4 illustrate portions of various types of data, according to various embodiments. Fig.
5 illustrates a flow diagram of a method to generate an interrupt in response to occurrence of a yield event, according to an embodiment.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments of the invention may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments of the invention. Further, various aspects of embodiments of the invention may be performed using various mechanisms, such as integrated semiconductor circuits ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, reference to "logic" shall mean hardware, software, or some combination thereof. Some of the embodiments discussed herein may be utilized to perform event handling operations. In an embodiment, an "event" refers to a condition that may or may not require some action to be taken by logic. Furthermore, events may be classified into different types based on the action that is to be taken. For example, certain exceptions (such as divide by zero) may be characterized as synchronous events that occur each time a corresponding instruction is executed. On the other hand, interrupts that are generated by external devices may be characterized as asynchronous events, in part, because they may occur at any time. In one embodiment, an "architectural event" refers to an event or condition that may be monitored, e.g., by programming information corresponding to the architectural event into a state (such as a channel discussed with reference to Fig. 2).
In an embodiment, software may configure a channel to monitor certain architectural events which may not otherwise be observable by software and/or hardware. For example, a last level cache miss may be defined as an architectural event that is used to perform dynamic profile guided optimizations. Also, an architectural event may be defined to monitor conditions that are occurring on a co-processor that is located on the same integrated circuit chip as a processor. In an embodiment, an "architectural event" may generally refer to an event or condition that occurs within processing resources or other logic present on the same integrated circuit chip as a processor. In one embodiment, after an event (such as an architectural event) occurs at a privilege level higher than a user privilege level (e.g., the highest privilege level, which may also be referred to as privilege level 0 or the supervisor privilege level), the corresponding occurrence response (e.g., a yield event) may cause generation of an interrupt. In an embodiment, the term "privilege level" refers to an attribute associated with the execution mode that determines which operations are allowed and which are disallowed. For example, application programs may be executed at a privilege level (e.g., a user privilege level) that does not allow the application programs to interfere with system state or otherwise to execute instructions that interfere with system state. In some embodiments, the operating system may execute at a supervisor privilege level, e.g., to manipulate system state. Further, a high privilege level (such as a privilege level higher than a user privilege level) may allow operating system software to safeguard system state such that application programs executing at a lower privilege level are disallowed from manipulating system state.
Additionally, some embodiments may enable handling of events at a privilege level that is higher than a user privilege level, e.g., without requiring changes to an operating system or other software executing at a supervisor privilege level (such as a device driver). In some embodiments, generation of an interrupt (e.g., corresponding to the occurrence response, such as a yield event) for a high privilege level (e.g., a supervisor privilege level) may provide a relatively easier migration path that reduces the impact of changes to the operating system code that executes at supervisor privilege level, in part, because supervisor privilege level software may already be aware of how to deal with pre-emption due to interrupts. In an embodiment, various logic provided in a processor may be used to perform event handling tasks, such as the processors discussed with reference to Figs. 1, 2, 6, and 7. More particularly, Fig. 1 illustrates a block diagram of a computing system 100, according to an embodiment of the invention. The system 100 may include one or more processors 102-1 through 102-N (generally referred to herein as "processors 102" or "processor 102"). The processors 102 may communicate via an interconnection network or bus 104. Each processor may include various components, some of which are only discussed with reference to processor 102-1 for clarity. Accordingly, each of the remaining processors 102-2 through 102-N may include the same or similar components discussed with reference to the processor 102-1. In an embodiment, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as "cores 106" or more generally as "core 106"), a shared cache 108, and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip.
Moreover, the chip may include one or more shared caches (such as cache 108) and/or private caches (such as level 1 (L1) cache 111-1, generally referred to herein as "L1 cache 111"), buses or interconnections (such as a bus or interconnection network 112), memory controllers (such as those discussed with reference to Figs. 6 and 7), or other components. In one embodiment, the router 110 may be used to communicate between various components of the processor 102-1 and/or system 100. Moreover, the processor 102-1 may include more than one router 110. Furthermore, the multitude of routers (110) may be in communication to enable data routing between various components inside or outside of the processor 102-1. The shared cache 108 may store data (e.g., including instructions) that are utilized by one or more components of the processor 102-1, such as the cores 106. For example, the shared cache 108 may locally cache data stored in a memory 114 for faster access by components of the processor 102. In an embodiment, the cache 108 may include a mid-level cache (such as a level 2 (L2), a level 3 (L3), a level 4 (L4), or other levels of cache), a last level cache (LLC), and/or combinations thereof. Moreover, various components of the processor 102-1 may communicate with the shared cache 108 directly, through a bus (e.g., the bus 112), and/or a memory controller or hub. As shown in Fig. 1, event handling data 120 may be stored in the memory 114 (or an interrupt controller as will be further discussed with reference to Fig. 4). Moreover, the event handling data 120 may be utilized by a component of the core 106 to generate an interrupt in response to an event occurrence, as will be further discussed herein, for example, with reference to Figs. 2-5. Fig. 2 illustrates a block diagram of portions of a processor core 106 and other components of a computing system, according to an embodiment of the invention. In one embodiment, the arrows shown in Fig.
2 illustrate the flow direction of instructions through the core 106. One or more processor cores (such as the processor core 106) may be implemented on a single integrated circuit chip (or die) such as discussed with reference to Fig. 1. Moreover, the chip may include one or more shared and/or private caches (e.g., cache 108 of Fig. 1), interconnections (e.g., interconnections 104 and/or 112 of Fig. 1), memory controllers, or other components. As illustrated in Fig. 2, the processor core 106 may include a fetch unit 202 to fetch instructions for execution by the core 106. The instructions may be fetched from any storage device, such as the memory 114 and/or the memory devices discussed with reference to Figs. 6 and 7. The core 106 may also include a decode unit 204 to decode the fetched instruction. For instance, the decode unit 204 may decode the fetched instruction into a plurality of uops (micro-operations). Additionally, the core 106 may include a schedule unit 206. The schedule unit 206 may perform various operations associated with storing decoded instructions (e.g., received from the decode unit 204) until the instructions are ready for dispatch, e.g., until all source values of a decoded instruction become available. In one embodiment, the schedule unit 206 may schedule and/or issue (or dispatch) decoded instructions to an execution unit 208 for execution. The execution unit 208 may execute the dispatched instructions after they are decoded (e.g., by the decode unit 204) and dispatched (e.g., by the schedule unit 206). In an embodiment, the execution unit 208 may include more than one execution unit, such as a memory execution unit, an integer execution unit, a floating-point execution unit, or other execution units. The execution unit 208 may also perform various arithmetic operations such as addition, subtraction, multiplication, and/or division, and may include one or more arithmetic logic units (ALUs).
In an embodiment, a co-processor (not shown) may perform various arithmetic operations in conjunction with the execution unit 208. [0017] Further, the execution unit 208 may execute instructions out-of-order. Hence, the processor core 106 may be an out-of-order processor core in one embodiment. The core 106 may also include a retirement unit 210. The retirement unit 210 may retire executed instructions after they are committed. In an embodiment, retirement of the executed instructions may result in processor state being committed from the execution of the instructions, physical registers used by the instructions being de-allocated, etc. The core 106 may additionally include a trace cache or microcode read-only memory (uROM) 212 to store microcode and/or traces of instructions that have been fetched (e.g., by the fetch unit 202). The microcode stored in the uROM 212 may be used to configure various hardware components of the core 106. In an embodiment, the microcode stored in the uROM 212 may be loaded from another component in communication with the processor core 106, such as a computer-readable medium or other storage device discussed with reference to Figs. 6 and 7. The core 106 may also include a bus unit 214 to enable communication between components of the processor core 106 and other components (such as the components discussed with reference to Fig. 1) via one or more buses (e.g., buses 104 and/or 112). The core 106 may additionally include one or more registers 216 to store data accessed by various components of the core 106. Additionally, the processor core 106 illustrated in Fig. 2 may include one or more channels 218 that correspond to a set of architecture states. Each privilege level (such as privilege level 0 or supervisor privilege level (e.g., the highest privilege level), privilege level 3 (e.g., a relatively lower privilege level that may correspond to a user level privilege in an embodiment), etc.) may have a corresponding channel.
Further, each channel 218 may correspond to one or more scenarios and corresponding yield events. In an embodiment, the channels 218 may contain scenario specifications. In turn, a yield event may be signaled when the scenario associated with the channel triggers. Hence, a yield event may be the occurrence response to a scenario. Furthermore, the core 106 may include an event monitoring logic 220, e.g., to monitor the occurrence of one or more events that may be associated with architecturally defined scenarios (e.g., in the channel(s) 218) that may be used to trigger a corresponding yield event. As shown in Fig. 2, the logic 220 may be provided within the execution unit 208. However, the logic 220 may be provided elsewhere in the processor core 106. As will be further discussed herein, e.g., with reference to Figs. 3-5, the logic 220 may generate a signal after a monitored event occurs and, in response to the generated signal, a yield conversion logic 221 may cause generation of an interrupt, e.g., based on data stored in the channels 218. For example, the events that are being monitored (e.g., with reference to data stored in the channels 218) may occur asynchronously with respect to the execution of the current instruction sequence on the processor core 106. Moreover, as shown in Fig. 2, the event handling data 120 may be stored (or cached) in one or more of the caches 111 and/or 108, instead of or in addition to the memory 114. The memory 114 may also store one or more: interrupt service routines 222 (e.g., that may be triggered in response to an interrupt that is generated in response to a yield event by the logic 220), operating systems 224 (e.g., to manage hardware or software resources of a computing system that includes the core 106), and/or device drivers 225 (e.g., to enable communication between the OS 224 and various devices such as those discussed with reference to Figs. 6 and 7).
In one embodiment, after the logic 221 causes generation of an interrupt (e.g., corresponding to a yield event), the address of an interrupt service routine (222) may be obtained from the event handling data 120 (which may be stored in an interrupt descriptor table in some embodiments). [0022] In an embodiment, an event may be handled by one or more of the interrupt service routines 222 (e.g., which may also be cached in the caches 111 and/or 108 in various embodiments) that is invoked to complete handling of the event. Since invoking the routine 222 may cause preemption, e.g., due to the asynchronous nature of events, the routine 222 may execute in an arbitrary thread context and may affect how the code that executes in the context of the current thread accesses data structures, uses locks, and interacts with other threads executing on the processor core. Moreover, software executing at supervisor privilege level may generally be carefully designed to avoid potential issues due to preemption including, for example, putting restrictions on the locks that may be acquired and interactions with other software components. Accordingly, in one embodiment, instead of introducing a new source of preemption due to the need to handle the monitored events, which are asynchronous in nature, and adding software support for this, an existing interruption mechanism may be used so that when an event that is being monitored (e.g., with reference to data stored in the channels 218) occurs, it results in an interrupt being generated (e.g., by the logic 221) and the routines 222 may be, in turn, invoked. Fig. 3 illustrates a block diagram of portions of data stored in the channels 218 of Fig. 2 for supervisor privilege level, according to an embodiment. The channels 218 may store data including, for example, one or more entries 302. Each entry 302 may include one or more of: a channel identifier (ID) field 304 (e.g., that corresponds to one of the channels 218 of Fig.
2), a yield interrupt enable field 306 (e.g., to indicate whether a corresponding yield is enabled to trigger an interrupt), a scenario identifier field 308 (e.g., to identify a scenario), a scenario status field 310 (e.g., to indicate the occurrence of the scenario identified by field 308), and an entry valid field 312 (e.g., to indicate whether the corresponding entry is valid, which may enable or disable a channel identified by the field 304). In an embodiment, each channel 218 of Fig. 2 may contain a scenario specification, where a scenario corresponds to one or more architectural events and is identified by a scenario identifier (308). In one embodiment, the list of scenario identifiers may be enumerable by making use of an instruction (such as the CPUID instruction, in accordance with at least one instruction set architecture). Also, in one embodiment, the data corresponding to the channels 218 may be stored in hardware registers (e.g., including registers 216). Further details regarding usage of the fields 304-312 will be discussed with reference to Fig. 5. Fig. 4 illustrates a block diagram of portions of the event handling data 120 of Figs. 1-2, according to an embodiment. In some embodiments, event handling data 120 shown in Fig. 4 may be shared with data corresponding to event monitoring interrupts. For example, in some embodiments, a local vector table (LVT) used for a performance monitor interrupt mechanism may be modified such as shown in Fig. 4. More particularly, Fig. 4 illustrates various entries that may correspond to a modified LVT, according to one embodiment. As shown in Fig. 4, the event handling data 120 may include one or more entries 402.
Each entry 402 may include one or more of: a yield mask field 404 (e.g., to mask or unmask the corresponding yield interrupt), a mask field 406 (e.g., to mask or unmask the corresponding interrupt), a yield delivery status field 408 (e.g., to indicate the delivery status of the corresponding yield interrupt), a delivery status field 410 (e.g., to indicate whether the interrupt is idle or whether the interrupt is sent, but pending acceptance, for example), a yield delivery mode field 412 (e.g., which may differ from the delivery modes supported for performance monitor interrupts in field 414), a delivery mode field 414 (e.g., to select the delivery mode), and a vector field 416 (e.g., which may contain the vector number associated with the performance monitor interrupt to enable access to the corresponding interrupt service routine 222). In an embodiment, fields 404, 408, and/or 412 may be reserved in the unmodified version of the LVT. [0025] Furthermore, in an embodiment, the logics 220 and/or 221 may correspond to an advanced programmable interrupt controller (APIC), which maintains the interrupt vector number and the type of interrupt that may be delivered when a counter overflow occurs. Furthermore, the OS 224 may allow for software profiling tools to register an interrupt handler that receives control when the interrupt occurs. For instance, the profiling interrupt may be used by an analyzer tool to collect samples for detecting hot spots in the code that is currently executing and allows programmers to tune the software based on the analysis of the hot spots. Accordingly, in some embodiments, the performance monitor interrupt mechanism may be enhanced to enable yield event interrupts along with performance monitor interrupts, for example, by allowing passage of an interrupt to the next handler if a counter overflow condition is not detected.
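As a concrete sketch, the channel entries of Fig. 3 and the modified LVT entries of Fig. 4 can be modeled as simple records. The field names follow the description above; the widths, types, and default values (including the vector number) are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class ChannelEntry:
    """One supervisor-privilege channel entry (cf. entry 302, Fig. 3)."""
    channel_id: int               # field 304: identifies the channel
    scenario_id: int              # field 308: identifies the monitored scenario
    yield_interrupt_enable: bool  # field 306: yield may be converted to an interrupt
    scenario_status: bool = False # field 310: set when the scenario occurs
    valid: bool = True            # field 312: enables or disables the channel

@dataclass
class LVTEntry:
    """One modified local vector table entry (cf. entry 402, Fig. 4)."""
    yield_mask: bool = False             # field 404: mask the yield interrupt
    mask: bool = False                   # field 406: mask the interrupt
    yield_delivery_status: bool = False  # field 408: yield interrupt pending
    delivery_status: bool = False        # field 410: interrupt sent/idle
    yield_delivery_mode: int = 0         # field 412
    delivery_mode: int = 0               # field 414
    vector: int = 0xE0                   # field 416: hypothetical vector number

# Programming a channel with a scenario, as EMONITOR might (illustrative values):
ch = ChannelEntry(channel_id=0, scenario_id=3, yield_interrupt_enable=True)
```

Modeling the entries as flat records mirrors the description's note that channel data may live in hardware registers (e.g., registers 216).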
Moreover, the APIC may support a local vector table entry for a performance monitor interrupt that is to be programmed with the vector number of the vector associated with the performance monitor interrupt. The APIC may deliver the performance monitor interrupt provided that interrupts are not masked and the current interrupt priority level is less than the interrupt priority level associated with the performance monitor interrupt. In some embodiments, contrary to a general-purpose interrupt that may be invoked in response to an external signal provided to a processor, yield event interrupts generated by the logic 221 may not be directly tied to an external device. For example, in one embodiment, the yield conversion logic 221 may cause generation of a performance monitor interrupt (e.g., in response to the yield), which may be an architectural extension used to monitor the performance of software executing on a processor (such as the core 106 of Figs. 1-2). Generally, a performance monitor interrupt mechanism may be used to profile and tune software by monitoring events that may cause performance issues, such as branch mispredictions, cache misses, etc. In some embodiments, the performance monitor interrupt may be tied to one or more performance monitor counters and the events that are programmed for the counters. For example, software may be notified through a performance monitor interrupt after a corresponding counter overflows. In an embodiment, if a performance monitor interrupt and a yield event occur at the same time, the performance monitor interrupt may be delivered first and yield events may be held pending and delivered after the performance monitor interrupt completes.
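The simultaneous-event ordering just described, where the performance monitor interrupt is delivered first and the yield event is held pending until it completes, can be sketched as follows. The class, method names, and string labels are illustrative assumptions, not the hardware mechanism:

```python
class InterruptUnit:
    """Sketch of the delivery ordering described above: a performance
    monitor interrupt is delivered first; a concurrent yield event is
    held pending and delivered once the first interrupt completes."""

    def __init__(self):
        self.pending_yield = False

    def raise_events(self, perf: bool, yield_event: bool):
        """Return the label of the interrupt delivered now, if any."""
        if perf and yield_event:
            self.pending_yield = True   # hold the yield event pending
            return "perf_monitor"       # perf interrupt delivered first
        if yield_event:
            return "yield"
        return "perf_monitor" if perf else None

    def on_interrupt_complete(self):
        """Called when the current interrupt's service routine returns."""
        if self.pending_yield:
            self.pending_yield = False
            return "yield"              # deliver the held yield event
        return None

u = InterruptUnit()
print(u.raise_events(perf=True, yield_event=True))  # perf_monitor
print(u.on_interrupt_complete())                    # yield
```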
In one embodiment, since the interrupt is shared between the performance monitor interrupt and the yield event handler, the interrupt service routines 222 may check whether the interrupt was caused by a counter overflow or whether it occurred due to an APE scenario that caused a yield event interrupt, by checking the contents of the corresponding channel 218. In one embodiment, the channel generating the yield interrupt may be reprogrammed by one of the service routines 222. Furthermore, for multiple channels, the yield event interrupts may be delivered one after another based on the channel priority, for example. In an embodiment, instead of sharing the yield interrupt with the performance monitor interrupt, an additional field for each entry 402 may be introduced and the operating system hardware abstraction layer may be modified to support yield event interrupts and allow other software executing in privileged mode to register yield event interrupt service routines. Supervisor privilege level channels may share the same vector number (416) and, in turn, each yield event interrupt handler written by a programmer may check whether the source of the interrupt was the channel that was programmed by that programmer. In one embodiment, the APIC (e.g., including the logics 220 and/or 221) may mask further performance interrupts once a performance monitor interrupt has been delivered, setting the mask field 406. Furthermore, software may unmask the performance interrupts by clearing the mask field 406. Further details regarding usage of the fields within the event handling data 120 of Fig. 4 will be discussed with reference to Fig. 5. Fig. 5 illustrates a flow diagram of a method 500 to generate an interrupt in response to occurrence of a yield event, according to an embodiment. In some embodiments, various components discussed with reference to Figs. 1-4 and 6-7 may be utilized to perform one or more of the operations discussed with reference to Fig. 5.
For example, at least some of the operations discussed with reference to Fig. 5 may be performed with reference to the entries described in Figs. 3 and 4. Referring to Figs. 1-5, at an operation 502, various conditions (such as scenarios) may be defined (e.g., by a programmer). In an embodiment, data corresponding to the defined conditions of operation 502 may be stored in the channels 218. Also, various information relating to the event handling data 120 of Figs. 3 and 4 may be configured at operation 502, such as one or more of the fields 306, 312, 404, and/or 412, depending on the implementation. At an operation 504, it is determined whether one or more architectural events (e.g., corresponding to a scenario) have occurred. In an embodiment, the logic 220 may determine whether one or more architectural events (e.g., corresponding to a scenario) in a channel 218 (e.g., corresponding to channel ID field 304) have occurred at operation 504. Once operation 504 determines the occurrence of a monitored event, the corresponding scenario status may be updated at operation 506. In an embodiment, the logic 220 (or another logic provided within the processor core 106) may update the scenario status field 310 to indicate the occurrence of the scenario corresponding to the scenario ID field 308 at operation 506. At an operation 508, it may be determined whether a corresponding valid entry exists. In an embodiment, at operation 508, the yield conversion logic 221 (or another logic provided within the processor core 106) may determine whether a corresponding valid entry exists with reference to the event handling data 120, e.g., by referring to the valid field 312. [0032] If at operation 508 it is determined that a valid entry exists, it may be determined whether to cause generation of an interrupt (e.g., in response to a corresponding yield) at an operation 510.
For example, in one embodiment, if a valid entry exists within the event handling data 120 at operation 508, the yield conversion logic 221 (or another logic provided within the processor core 106) may determine whether to cause generation of an interrupt corresponding to the yield event by referring to the corresponding field (306 or 404) at operation 510. If yield interrupt conversion is enabled at operation 510, an interrupt may be generated at operation 512. In one embodiment, the logic 221 may generate an interrupt at operation 512, e.g., to enable activation of a corresponding interrupt service routine 222 identified by a vector provided within the event handling data 120. In an embodiment, operation 508 may be performed prior to operation 506. Also, at operation 512, the field 408 may be updated. Further, in embodiments that utilize the performance monitor interrupt mechanism discussed with reference to Fig. 4, operations 506 and/or 508 may not be performed. In some embodiments, generating interrupts in response to the occurrence response (e.g., a yield event) for supervisor privilege channels may provide an easier migration path that reduces the impact of changes to the operating system code that executes at supervisor privilege level, in part, because supervisor privilege level software may already be aware of how to deal with pre-emption due to interrupts, and yield events also fall into this category as they pre-empt the execution of the current software thread. There are restrictions placed by the operating system on interrupt service routines 222 that can pre-empt the execution of the current thread that is being executed on the processor core 106. For instance, device driver programmers and the operating system may utilize precautions to avoid deadlocks and executing code that may cause the operating system 224 to crash.
As discussed herein, if yield events are treated as interrupts, then the same set of restrictions may also apply to the corresponding yield event handling, and a new mechanism to deal with a new kind of pre-emption, along with the accompanying software changes (e.g., including changes to OS 224 and/or device drivers 225), may be avoided. In some embodiments, in the presence of channels 218 for privilege level 0 (as opposed to lower privilege levels such as a user privilege level, for example), the same occurrence response mechanism may not be as applicable, in part, because yield events cause an asynchronous transfer of control to a different location, which makes them more similar to interrupts, which are also asynchronous events. The software executing at the supervisor privilege level (e.g., privilege level 0) may need control over the asynchronous transfer of control when executing a section of code atomically. Also, because an asynchronous transfer of control due to interrupts may cause pre-emption and the interrupt service routine may continue to execute in the context of the current software thread that is scheduled to execute by the operating system 224, the pre-empting routine may abide by certain restrictions, e.g., when it comes to accessing data structures (memory locations), locks, and interactions with other threads. Furthermore, in one embodiment, an occurrence response mechanism enables conversion of a yield event to an interrupt and allows yield events to occur and pre-empt software threads that may be executing, without breaking any existing software and without introducing new code (including operating system modifications) or new mechanisms to deal with yield event pre-emption.
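Taken together, operations 504-512 of method 500 reduce to a short conversion path. A minimal sketch follows, with entries modeled as dictionaries and the returned vector standing in for raising the interrupt; the dictionary keys and vector value are illustrative assumptions:

```python
def handle_architectural_event(entry: dict, lvt: dict):
    """Sketch of operations 504-512 of method 500: record the scenario
    occurrence, check entry validity and the yield-interrupt enable,
    then 'generate' the interrupt by returning its vector."""
    entry["scenario_status"] = True          # operation 506 (field 310)
    if not entry["valid"]:                   # operation 508 (field 312)
        return None
    if not entry["yield_interrupt_enable"]:  # operation 510 (field 306 or 404)
        return None
    lvt["yield_delivery_status"] = True      # operation 512 updates field 408
    return lvt["vector"]                     # field 416: selects a routine 222

entry = {"valid": True, "yield_interrupt_enable": True, "scenario_status": False}
lvt = {"vector": 0xE0, "yield_delivery_status": False}
print(hex(handle_architectural_event(entry, lvt)))  # 0xe0
```

As the description notes, operation 508 may instead precede operation 506, and operations 506 and/or 508 may be skipped entirely when the performance monitor interrupt mechanism of Fig. 4 is reused.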
In some embodiments, privileged channels (such as channels 218 corresponding to supervisor privilege level) may or may not be virtualized on a per software thread context basis (e.g., depending on the usage model and the events that are being monitored) and they may not need to be saved or restored. New instructions may be introduced that allow privileged channels to be saved or restored. Depending on the scenarios that are supported, the operating system software or driver software that executes in privilege level 0 may save or restore channel state. In accordance with at least one embodiment, the following pseudo-code may be utilized to perform some operations discussed herein, e.g., with reference to Figs. 1-5:

EMONITOR
ADD EAX, [EBX]
INC EAX
MUL ECX, [EAX+EDX]
MOV [ESI], ECX

In the above code, EMONITOR may indicate the start of the monitoring, e.g., after the operation 502. EMONITOR may also be used to program a channel with a scenario in accordance with various input parameters. Accordingly, EMONITOR may not only indicate the start of monitoring but may also select the events that are going to be monitored. As shown, various instructions may be executed after EMONITOR. At the MUL instruction, a monitored event may occur (504), which may cause the logic 221 to generate an interrupt to invoke the corresponding routine 222. Subsequently, various operations may be performed until the occurrence of an instruction that indicates the termination of the interrupt handling (such as IRET in accordance with at least one instruction set architecture). Upon execution of the interrupt terminating instruction, the core 106 may proceed with the MOV operation. In an embodiment, if the interrupt is masked (e.g., as indicated by reference to field 404), then the interrupt may be held pending and delivered after the interrupt is unmasked. [0038] Fig. 6 illustrates a block diagram of a computing system 600 in accordance with an embodiment of the invention.
The computing system 600 may include one or more central processing unit(s) (CPUs) 602 or processors that communicate via an interconnection network (or bus) 604. The processors 602 may include a general purpose processor, a network processor (that processes data communicated over a computer network 603), or other types of processors (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). Moreover, the processors 602 may have a single or multiple core design. The processors 602 with a multiple core design may integrate different types of processor cores on the same integrated circuit (IC) die. Also, the processors 602 with a multiple core design may be implemented as symmetrical or asymmetrical multiprocessors. In an embodiment, one or more of the processors 602 may be the same or similar to the processors 102 of Fig. 1. For example, one or more of the processors 602 may include one or more of the cores 106 discussed with reference to Figs. 1 and/or 2. Also, the operations discussed with reference to Figs. 1-5 may be performed by one or more components of the system 600. A chipset 606 may also communicate with the interconnection network 604. The chipset 606 may include a memory control hub (MCH) 608. The MCH 608 may include a memory controller 610 that communicates with a memory 612 (which may be the same or similar to the memory 114 of Fig. 1). The memory 612 may store data, including sequences of instructions, that may be executed by the CPU 602, or any other device included in the computing system 600. In one embodiment of the invention, the memory 612 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Nonvolatile memory may also be utilized, such as a hard disk.
Additional devices may communicate via the interconnection network 604, such as multiple CPUs and/or multiple system memories.
The MCH 608 may also include a graphics interface 614 that communicates with a display device 616. In one embodiment of the invention, the graphics interface 614 may communicate with the display device 616 via an accelerated graphics port (AGP). In an embodiment of the invention, the display 616 (such as a flat panel display) may communicate with the graphics interface 614 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display 616. The display signals produced by the display device may pass through various control devices before being interpreted by and subsequently displayed on the display 616.
A hub interface 618 may allow the MCH 608 and an input/output control hub (ICH) 620 to communicate. The ICH 620 may provide an interface to I/O device(s) that communicate with the computing system 600. The ICH 620 may communicate with a bus 622 through a peripheral bridge (or controller) 624, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 624 may provide a data path between the CPU 602 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses may communicate with the ICH 620, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 620 may include, in various embodiments of the invention, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), USB port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), or other devices.
[0042] The bus 622 may communicate with an audio device 626, one or more disk drive(s) 628, and a network interface device 630 (which is in communication with the computer network 603). Other devices may communicate via the bus 622. Also, various components (such as the network interface device 630) may communicate with the MCH 608 in some embodiments of the invention. In addition, the processor 602 and the MCH 608 may be combined to form a single chip. Furthermore, a graphics accelerator may be included within the MCH 608 in other embodiments of the invention.
Furthermore, the computing system 600 may include volatile and/or nonvolatile memory (or storage unit(s)). For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 628), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media that are capable of storing electronic data (e.g., including instructions).
Fig. 7 illustrates a computing system 700 that is arranged in a point-to-point (PtP) configuration, according to an embodiment of the invention. In particular, Fig. 7 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The operations discussed with reference to Figs. 1-6 may be performed by one or more components of the system 700.
As illustrated in Fig. 7, the system 700 may include several processors, of which only two, processors 702 and 704, are shown for clarity. The processors 702 and 704 may each include a local memory controller hub (MCH) 706 and 708 to enable communication with memories 710 and 712. The memories 710 and/or 712 may store various data such as those discussed with reference to the memory 612 of Fig. 6.
[0046] In an embodiment, the processors 702 and 704 may be one of the processors 602 discussed with reference to Fig. 6. The processors 702 and 704 may exchange data via a point-to-point (PtP) interface 714 using PtP interface circuits 716 and 718, respectively. Also, the processors 702 and 704 may each exchange data with a chipset 720 via individual PtP interfaces 722 and 724 using point-to-point interface circuits 726, 728, 730, and 732. The chipset 720 may further exchange data with a graphics circuit 734 via a graphics interface 736, e.g., using a PtP interface circuit 737.
At least one embodiment of the invention may be provided within the processors 702 and 704. For example, one or more of the cores 106 of Figs. 1-2 may be located within the processors 702 and 704. Other embodiments of the invention, however, may exist in other circuits, logic units, or devices within the system 700 of Fig. 7. Furthermore, other embodiments of the invention may be distributed throughout several circuits, logic units, or devices illustrated in Fig. 7.
The chipset 720 may communicate with a bus 740 using a PtP interface circuit 741. The bus 740 may communicate with one or more devices, such as a bus bridge 742 and I/O devices 743. Via a bus 744, the bus bridge 742 may communicate with other devices such as a keyboard/mouse 745, communication devices 746 (such as modems, network interface devices, or other communication devices that may communicate with the computer network 603), audio I/O device 747, and/or a data storage device 748. The data storage device 748 may store code 749 that may be executed by the processors 702 and/or 704.
In various embodiments of the invention, the operations discussed herein, e.g., with reference to Figs.
1-7, may be implemented as hardware (e.g., logic circuitry), software, firmware, or combinations thereof, which may be provided as a computer program product, e.g., including a machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform a process discussed herein. The machine-readable medium may include a storage device such as those discussed with respect to Figs. 1-7.
Additionally, such computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a bus, a modem, or a network connection). Accordingly, herein, a carrier wave shall be regarded as comprising a machine-readable medium.
Reference in the specification to "one embodiment," "an embodiment," or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment(s) may be included in at least one implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not all be referring to the same embodiment.
Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments of the invention, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact.
However, "coupled" may also mean that two or more elements may not be in direct contact with each other, but may still cooperate or interact with each other.
Thus, although embodiments of the invention have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.
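As a rough illustration of the channel behavior described above (a channel programmed with a scenario, as EMONITOR does; a monitored event invoking its service routine; and masked events held pending until unmasked), the following Python sketch models that flow. The class, the scenario name, and the handler are illustrative assumptions only, not part of the disclosed instruction set architecture:

```python
class Channel:
    """Toy model of a monitoring channel (cf. channels 218 and logic 221)."""

    def __init__(self, scenario, handler):
        self.scenario = scenario  # event type this channel watches (EMONITOR-style programming)
        self.handler = handler    # routine invoked on a match (cf. routine 222)
        self.masked = False
        self.pending = False

    def event(self, name):
        """Deliver an event; masked events are held pending (cf. field 404)."""
        if name != self.scenario:
            return
        if self.masked:
            self.pending = True   # held until unmasked
        else:
            self.handler(name)

    def unmask(self):
        """Unmasking delivers any event held pending while masked."""
        self.masked = False
        if self.pending:
            self.pending = False
            self.handler(self.scenario)


log = []
ch = Channel("cache-miss", handler=lambda e: log.append(e))
ch.event("cache-miss")   # delivered immediately, handler runs
ch.masked = True
ch.event("cache-miss")   # held pending while masked
ch.unmask()              # pending event delivered now
```

The sketch only captures the masking and pending semantics; the hardware details (channel state save/restore, privilege levels) are outside its scope.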
Memory circuit configuration schemes on multi-drop buses are disclosed. In aspects disclosed herein, an on-die mapping logic is provided in a memory circuit. A memory controller communicates with the on-die mapping logic over a multi-drop bus. The on-die mapping logic is configured to receive a predetermined on-die termination (ODT) value from the memory controller prior to being accessed. In response to receiving the predetermined ODT value, the memory circuit sets on-die termination to the predetermined ODT value and instructs an on-die reference signal generator to generate a predetermined reference signal associated with the predetermined ODT value. The predetermined reference signal provides an optimal reference voltage for implementing a desired equalization setting at the memory circuit, thus aiding in preserving signal integrity. Such improved signal integrity reduces errors in accessing the memory circuit, thus leading to improved efficiency and data throughput on the multi-drop bus.
What is claimed is:
1. A memory circuit comprising:
an on-die reference signal generator;
an on-die mapping logic configured to:
receive an on-die termination (ODT) value on a first communication channel; and
instruct the on-die reference signal generator to produce a predetermined reference signal associated with the ODT value; and
a receiver configured to equalize a data signal received on a second communication channel based on the predetermined reference signal.
2. The memory circuit of claim 1, wherein the first communication channel is a command bus of a multi-drop bus.
3. The memory circuit of claim 1, wherein the second communication channel is a data bus of a multi-drop bus.
4. The memory circuit of claim 1, further comprising a lookup table configured to map at least one ODT value to at least one predetermined reference signal value.
5. The memory circuit of claim 4, wherein the lookup table is included in the on-die mapping logic.
6. The memory circuit of claim 1, wherein the predetermined reference signal is a voltage reference signal (VREF).
7. The memory circuit of claim 1, wherein the memory circuit is a dual inline memory module (DIMM).
8. The memory circuit of claim 1, wherein the memory circuit is a double data rate (DDR) synchronous dynamic random access memory (SDRAM) selected from the group consisting of: personal-computer (PC) DDR-3; low-power (LP) DDR-3; PCDDR-4; and LPDDR-4.
9.
The memory circuit of claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a mobile phone; a cellular phone; a computer; a portable computer; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; and a portable digital video player.
10. A circuit comprising:
an on-die reference signal generator;
an on-die mapping logic configured to:
receive an on-die termination (ODT) value on a first communication channel; and
instruct the on-die reference signal generator to produce a predetermined reference signal associated with the ODT value; and
a receiver configured to equalize a data signal received on a second communication channel based on the predetermined reference signal.
11. A method for configuring a calibrated memory circuit over a multi-drop bus prior to accessing the calibrated memory circuit, the method comprising:
receiving a predetermined on-die termination (ODT) value by an on-die mapping logic in a calibrated memory circuit;
retrieving a predetermined reference signal value from a lookup table based on the predetermined ODT value; and
instructing an on-die reference signal generator to produce a predetermined reference signal based on the predetermined reference signal value.
12. The method of claim 11, further comprising applying the predetermined ODT value to provide on-die termination.
13. The method of claim 11, further comprising equalizing a data signal received over the multi-drop bus based on the predetermined reference signal received from the on-die reference signal generator.
14.
A multi-drop memory system comprising:
a multi-drop bus comprising a command bus and a data bus;
a memory controller connecting to the multi-drop bus; and
at least one memory circuit connecting to the multi-drop bus, comprising:
an on-die mapping logic configured to receive a control signal from the memory controller over the command bus and generate an instruction signal;
an on-die reference signal generator configured to receive the instruction signal and generate a predetermined reference signal; and
a receiver configured to receive the predetermined reference signal and a data signal received from the memory controller over the data bus.
15. The multi-drop memory system of claim 14, wherein the memory controller is configured to initiate a calibration procedure by providing a calibration signal to the at least one memory circuit.
16. The multi-drop memory system of claim 15, wherein the memory controller is configured to initiate the calibration procedure at a start-up of the at least one memory circuit.
17. The multi-drop memory system of claim 15, wherein the memory controller is configured to initiate the calibration procedure in response to a predetermined triggering event.
18. The multi-drop memory system of claim 17, wherein the predetermined triggering event is a change in temperature or a change in voltage in the at least one memory circuit.
19. The multi-drop memory system of claim 15, wherein the memory controller is configured to initiate the calibration procedure based on predetermined calibration intervals.
20.
The multi-drop memory system of claim 15, wherein the at least one memory circuit is configured to:
receive the calibration signal over the command bus;
create a lookup table containing an on-die termination (ODT) value column and a reference signal value column;
receive at least one ODT value and an equalization (EQ) setting;
determine an optimal reference signal value based on the at least one ODT value and the EQ setting; and
store the at least one ODT value and the optimal reference signal value in the ODT value column and the reference signal value column of the lookup table, respectively.
21. The multi-drop memory system of claim 20, wherein the memory controller is further configured to send a predetermined ODT value and a predetermined EQ setting to the at least one memory circuit prior to accessing the at least one memory circuit.
22. The multi-drop memory system of claim 21, wherein the on-die mapping logic is configured to:
retrieve a predetermined reference signal value from the lookup table based on the predetermined ODT value; and
send the predetermined reference signal value to the on-die reference signal generator in the instruction signal.
23. The multi-drop memory system of claim 22, wherein the on-die reference signal generator is configured to generate the predetermined reference signal based on the predetermined reference signal value received from the on-die mapping logic.
24. The multi-drop memory system of claim 23, wherein the receiver is configured to equalize the data signal based on the predetermined reference signal.
25. The multi-drop memory system of claim 21, wherein the memory controller is configured to adjust internal timing to match the predetermined ODT value and the predetermined EQ setting.
26. The multi-drop memory system of claim 14, wherein the memory controller is configured to conduct a pre-transmission equalization on the data signal based on an equalization (EQ) setting determined for the multi-drop memory system.
27.
The multi-drop memory system of claim 26, wherein the receiver is configured to conduct a per-drop equalization on the data signal based on a predetermined reference signal, wherein the predetermined reference signal is provided to compensate for deficiencies resulting from the pre-transmission equalization.
MEMORY CIRCUIT CONFIGURATION SCHEMES ON MULTI-DROP BUSES
PRIORITY CLAIM
[0001] The present application claims priority to U.S. Patent Application Serial No. 14/456,216 filed on August 11, 2014 and entitled "MEMORY CIRCUIT CONFIGURATION SCHEMES ON MULTI-DROP BUSES," which is incorporated herein by reference in its entirety.
BACKGROUND
I. Field of the Disclosure
[0002] The technology of the disclosure relates generally to accessing circuits over a multi-drop bus.
II. Background
[0003] Modern electronic devices (e.g., computers, laptops, smartphones, etc.) all require a large amount of on-board memory for application processing and data storage needs. One type of on-board memory is known as synchronous dynamic random access memory (SDRAM). Advancement of SDRAM technology has led to a class of high-density, high-throughput double data rate (DDR) SDRAM. The latest versions of DDR SDRAM include personal computer (PC) DDR-3, low-power (LP) DDR-3, PCDDR-4, and LPDDR-4. DDR SDRAM integrated circuits (ICs) are often packaged into an integrated memory module commonly referred to as a dual inline memory module (DIMM). Multiple DIMMs are usually needed to provide the large amount of on-board memory required by memory-consuming electronic devices.
[0004] By design, a multi-drop memory bus is configured to provide connections to multiple DIMMs. In particular, a memory controller communicates with each of the DIMMs over the memory bus, with a DIMM being associated with each drop on the multi-drop memory bus. Depending on bus topology, electrical characteristics experienced at the target DIMM may vary significantly depending on which DIMM is the target DIMM.
That is, impedance changes created by different geometries, reflections associated with stubs on the bus, and other incongruities may all contribute to a bus that has a first signal profile when signaling to a first DIMM and a second signal profile when signaling to a second DIMM.
[0005] While the latest versions of DDR SDRAM provide dynamic on-die termination, which allows the memory controller to reconfigure memory terminations independently depending on which device the memory controller is currently writing to, empirical evidence suggests that equalization methods suffer from the requirement of co-optimization for all potential connections. This leads to a compromised solution that is sub-optimal for any particular connection, albeit generally acceptable for all connections.
SUMMARY OF THE DISCLOSURE
[0006] Aspects disclosed in the detailed description include memory circuit configuration schemes on multi-drop buses. In aspects disclosed herein, an on-die mapping logic is provided in a memory circuit. A memory controller communicates with the on-die mapping logic over a multi-drop bus. The on-die mapping logic is configured to receive a predetermined on-die termination (ODT) value from the memory controller. The predetermined ODT value will most often come in a multi-bit digital format from the memory controller, either as parallel bits or sequential bits. The predetermined ODT value is provided prior to reading from or writing to memory ranks of the memory circuit. In response to receiving the predetermined ODT value, the memory circuit sets on-die termination to the predetermined ODT value. The on-die mapping logic is further configured to instruct an on-die reference signal generator to generate a predetermined reference signal associated with the predetermined ODT value received from the memory controller. The predetermined reference signal provides an optimal reference voltage for implementing a desired equalization setting at the memory circuit.
By dynamically adjusting the on-die termination and the predetermined reference signal, aspects of the present disclosure aid in preserving signal integrity. Such improved signal integrity reduces errors in writing data to or reading data from the memory circuit, thus leading to improved efficiency and data throughput on the multi-drop bus.
[0007] In this regard, in one aspect, a memory circuit is disclosed. The memory circuit comprises an on-die reference signal generator. The memory circuit also comprises an on-die mapping logic. The on-die mapping logic is configured to receive an ODT value on a first communication channel. The on-die mapping logic is also configured to instruct the on-die reference signal generator to produce a predetermined reference signal associated with the ODT value. The memory circuit also comprises a receiver configured to equalize a data signal received on a second communication channel based on the predetermined reference signal.
[0008] In another aspect, a memory circuit means is disclosed. The memory circuit means comprises a means for on-die reference signal generation. The memory circuit means also comprises a means for on-die mapping. The means for on-die mapping is configured to receive an ODT value on a first communication channel. The means for on-die mapping is also configured to instruct the means for on-die reference signal generation to produce a predetermined reference signal associated with the ODT value. The memory circuit means also comprises a means for reception configured to equalize a data signal received on a second communication channel based on the predetermined reference signal.
[0009] In another aspect, a method for configuring a calibrated memory circuit over a multi-drop bus prior to accessing the calibrated memory circuit is disclosed. The method comprises receiving a predetermined ODT value by an on-die mapping logic in the calibrated memory circuit.
The method also comprises retrieving a predetermined reference signal value from a lookup table based on the predetermined ODT value. The method also comprises instructing an on-die reference signal generator to produce a predetermined reference signal based on the predetermined reference signal value.
[0010] In another aspect, a multi-drop memory system is disclosed. The multi-drop memory system comprises a multi-drop bus comprising a command bus and a data bus. The multi-drop memory system also comprises a memory controller connecting to the multi-drop bus. The multi-drop memory system also comprises at least one memory circuit connecting to the multi-drop bus. The at least one memory circuit comprises an on-die mapping logic configured to receive a control signal from the memory controller over the command bus and generate an instruction signal. The at least one memory circuit also comprises an on-die reference signal generator configured to receive the instruction signal and generate a predetermined reference signal.
The at least one memory circuit also comprises a receiver configured to receive the predetermined reference signal and a data signal received from the memory controller over the data bus.
BRIEF DESCRIPTION OF THE FIGURES
[0011] Figure 1 is a schematic diagram of an exemplary multi-drop memory system that includes a pair of dual inline memory modules (DIMMs);
[0012] Figure 2 is a schematic diagram illustrating an exemplary multi-drop memory system that comprises a memory controller and a memory circuit configured to dynamically adjust a reference signal based on a predetermined on-die termination (ODT) value to provide proper equalization on a received data signal;
[0013] Figure 3 is a data structure illustrating an exemplary lookup table configured to map a predetermined ODT value to a predetermined reference signal value;
[0014] Figure 4 is a flowchart illustrating a calibration and pre-access configuration process for calibrating and configuring the DIMMs in Figure 1 for read and write operations;
[0015] Figure 5 is a flowchart illustrating an exemplary calibration and pre-access configuration process for calibrating and configuring the memory circuit in Figure 2 for read and write operations;
[0016] Figure 6A is an exemplary plot graph illustrating an optimal data eye diagram when an optimal reference signal (VREF) is provided for a predetermined equalization (EQ) setting at a DIMM;
[0017] Figure 6B is an exemplary plot graph illustrating how a dynamic on-die reference signal adjustment scheme employed in the memory circuit in Figure 2 can aid in restoring a degraded data eye resulting from a non-optimal reference signal (VREF); and
[0018] Figure 7 is a block diagram of an exemplary processor-based system that can include the memory circuit of Figure 2.
DETAILED DESCRIPTION
[0019] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.
[0020] Aspects disclosed in the detailed description include memory circuit configuration schemes on multi-drop buses. In aspects disclosed herein, an on-die mapping logic is provided in a memory circuit. A memory controller communicates with the on-die mapping logic over a multi-drop bus. The on-die mapping logic is configured to receive a predetermined on-die termination (ODT) value from the memory controller. The predetermined ODT value will most often come in a multi-bit digital format from the memory controller, either as parallel bits or sequential bits. The predetermined ODT value is provided prior to reading from or writing to memory ranks of the memory circuit. In response to receiving the predetermined ODT value, the memory circuit sets on-die termination to the predetermined ODT value. The on-die mapping logic is further configured to instruct an on-die reference signal generator to generate a predetermined reference signal associated with the predetermined ODT value received from the memory controller. The predetermined reference signal provides an optimal reference voltage for implementing a desired equalization setting at the memory circuit. By dynamically adjusting the on-die termination and the predetermined reference signal, aspects of the present disclosure aid in preserving signal integrity.
Such improved signal integrity reduces errors in writing data to or reading data from the memory circuit, thus leading to improved efficiency and data throughput on the multi-drop bus.
[0021] Before discussing aspects of a multi-drop memory system that includes specific aspects of the present disclosure, a brief overview of a multi-drop memory system that may incorporate exemplary aspects of the present disclosure is provided with reference to Figure 1. The discussion of specific exemplary aspects of a multi-drop memory system that comprises a memory circuit begins with reference to Figure 2.
[0022] In this regard, Figure 1 illustrates an exemplary schematic diagram of a multi-drop memory system 100 that may benefit through inclusion of aspects of the present disclosure. The multi-drop memory system 100 comprises a memory controller 102 that connects to a pair of dual inline memory modules (DIMMs) 104(1), 104(2) over a multi-drop bus 106. In the absence of exemplary aspects of the present disclosure, when a high frequency data signal 108 propagates from the memory controller 102 towards the DIMMs 104(1), 104(2), the electrical characteristics of the high frequency data signal 108 may be significantly different at the DIMMs 104(1), 104(2) due to signal distortion and/or interference on the multi-drop bus 106. Taking the DIMM 104(1) as an example, the electrical characteristics of the high frequency data signal 108 are impacted by a first impedance 110 at the memory controller 102, a second impedance 112 at the multi-drop bus 106, and a third impedance 114, as well as by a reflection signal 116 in between the DIMMs 104(1), 104(2).
The reflection signal 116 travels back towards the memory controller 102 and collides with the high frequency data signal 108 at the DIMM 104(1), creating an interference known as crosstalk or inter-symbol interference (not shown).
[0023] With continuing reference to Figure 1, to preserve signal integrity at the DIMMs 104(1), 104(2), on-die terminations 118(1), 118(2) are employed at the DIMMs 104(1), 104(2), respectively, to provide proper impedance terminations. The latest Joint Electron Device Engineering Council (JEDEC) synchronous dynamic random access memory (SDRAM) standards, such as personal-computer (PC) double data rate (DDR)-3, low-power (LP) DDR-4, and PCDDR-4, have introduced dynamic on-die termination to provide customized impedance terminations on individual memory chips. Dynamic on-die termination also provides the memory controller 102 with increased flexibility to optimize impedance termination values individually for the DIMM 104(1) and the DIMM 104(2) on the multi-drop bus 106. With dynamic on-die termination, a different ODT value may be opportunistically applied to the DIMMs 104(1), 104(2), respectively, based on the electrical characteristics experienced at the DIMMs 104(1), 104(2). A plurality of ODT values (not shown) may be determined and stored at the memory controller 102 for each of the DIMMs 104(1), 104(2) during a calibration process known as link training. The link training process also determines and stores at the memory controller 102 an equalization (EQ) setting (not shown) for the multi-drop memory system 100.
[0024] With continued reference to Figure 1, in contrast to the plurality of ODT values that are determined respectively for each of the DIMMs 104(1), 104(2) in the multi-drop memory system 100, the EQ setting is static and applicable to all of the DIMMs 104(1), 104(2) in the multi-drop memory system 100 regardless of impedance variations along the multi-drop bus 106.
Prior to accessing (e.g., reading data from or writing data to) the DIMM 104(1), the memory controller 102 configures the DIMM 104(1) with both a predetermined ODT value, which is chosen from the plurality of ODT values determined during calibration, and with the EQ setting so as to enable on-die impedance termination and data signal equalization at the DIMM 104(1). Equalization refers to a process, commonly employed at an electronic signal receiver, to restore frequency domain characteristics of an electronic signal that may have been distorted and/or attenuated by a transmission medium. In addition, equalization may also be used by the electronic signal receiver to compensate for signal distortion resulting from improper equalization at an electronic signal transmitter, such as the memory controller 102. In this regard, the DIMM 104(1) configures on-die impedance termination based on the predetermined ODT value and equalizes the high frequency data signal 108 based on the EQ setting. Likewise, the memory controller 102 configures the DIMM 104(2) with another predetermined ODT value and the EQ setting prior to accessing the DIMM 104(2). In conventional systems, the EQ setting is only determined for the multi-drop memory system 100 as a whole, causing equalization effectiveness to be compromised, and signal integrity may be degraded at the DIMMs 104(1), 104(2). Aspects of the present disclosure expand the capabilities of the memory controller 102 to optimize signaling to individual DIMMs 104 on the multi-drop bus 106 to further reduce impedance mismatch and reflection-induced interference.
[0025] In this regard, Figure 2 illustrates an exemplary schematic diagram of a multi-drop memory system 120 that comprises a memory controller 122 and a memory circuit 124 configured to dynamically adjust an on-die reference signal (e.g., VREF) based on the predetermined ODT value to provide proper equalization on a received data signal.
As described with reference to Figure 1 above, such signal distortion and/or attenuation are the combined results of various impedances and/or interferences in the conventional multi-drop memory system 100 and of imperfect signal equalization provided by the memory controller 102. Without proper equalization at the electronic signal receiver, a useful part of the electronic signal (e.g., a data carrying signal) may be overwhelmed by noise signals or so distorted as to become undetectable by the electronic signal receiver. As a result, data transmission error increases, thus leading to reduced data transmission efficiency and throughput.
[0026] With reference to Figure 2, the memory controller 122 connects to the memory circuit 124 via the multi-drop bus 126. The multi-drop bus 126 comprises a command bus 128, which is configured to carry a control signal 130, and a data bus 132, which is configured to carry a data signal 134. The memory circuit 124 comprises an on-die mapping logic 136, a lookup table 138, an on-die reference signal generator 140, and a receiver 142. Although the lookup table 138 is shown to be inside the on-die mapping logic 136, as a non-limiting example, the lookup table 138 can be implemented outside the on-die mapping logic 136 in the memory circuit 124 as well. The on-die mapping logic 136 is connected to the command bus 128 to receive the control signal 130 from the memory controller 122. The receiver 142 is connected to the data bus 132 to receive the data signal 134 from the memory controller 122.
[0027] Similar to the memory controller 102 in Figure 1, the memory controller 122 in Figure 2 also needs to configure the memory circuit 124 to provide on-die impedance termination and data signal equalization, among other configurations, prior to accessing (e.g., reading data from or writing data to) memory ranks (not shown) of the memory circuit 124.
To do so, the memory controller 122 includes a predetermined ODT value and a predetermined EQ setting in the control signal 130 and transmits these values to the memory circuit 124 over the command bus 128. The predetermined ODT value and the predetermined EQ setting are transmitted to the memory circuit 124 in a single instruction step, thus reducing signaling overheads on the command bus 128. The predetermined ODT value and the predetermined EQ setting are determined, along with a plurality of other configuration parameters, during a calibration process also known as standard link training in JEDEC DDR standards. The calibration process will be described later in this disclosure in reference to Figures 4-5. During the calibration process, the memory circuit 124 creates and populates the lookup table 138 according to a data structure illustrated in Figure 3, discussed below. Elements of Figure 2 are referenced in connection with Figure 3 and will not be re-described herein. Further discussion of Figure 2 will follow discussion of Figure 3.[0028] Figure 3 is an exemplary data structure illustrating the lookup table 138 configured to map the predetermined ODT value to a predetermined reference signal value. According to Figure 3, the lookup table 138 comprises an ODT value column 150 that contains the predetermined ODT value and a reference signal value column 152 that contains the predetermined reference signal value. In this regard, the memory circuit 124 establishes correlations between a plurality of predetermined ODT values and a plurality of predetermined reference signal values in the lookup table 138.[0029] With reference back to Figure 2, the on-die mapping logic 136 receives the predetermined ODT value and the predetermined EQ setting via the control signal 130. The on-die mapping logic 136 is then configured to retrieve the predetermined reference signal value from the lookup table 138 based on the predetermined ODT value. 
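The data structure of the lookup table 138 described in Figure 3 can be sketched as a simple key-value mapping. This is a hypothetical Python illustration; the actual lookup table is an on-die hardware structure, and the ODT and reference signal values shown are placeholders, not values from the disclosure:

```python
# Lookup table 138: each predetermined ODT value (here in ohms) maps to the
# predetermined reference signal value (here as a fraction of VDDQ) chosen
# for it during calibration. All values are illustrative placeholders.
lookup_table = {
    40: 0.70,
    60: 0.68,
    120: 0.65,
}

def retrieve_reference_value(odt_value):
    """Retrieve the predetermined reference signal value for an ODT value,
    as the on-die mapping logic 136 does when a control signal arrives."""
    return lookup_table[odt_value]
```

A single control signal carrying the ODT value thus suffices for the memory circuit to recover its matching reference signal value locally.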
The predetermined reference signal value is determined during calibration to compensate for inefficiency of the predetermined EQ setting, thus giving the memory circuit 124 the ability to provide proper per-drop equalization for the data signal 134. The on-die mapping logic 136 then transmits an instruction signal 144, which carries the predetermined reference signal value retrieved from the lookup table 138, to instruct the on-die reference signal generator 140 to produce an on-die reference signal 146 based on the predetermined reference signal value. In a non-limiting example, the on-die reference signal 146 is a voltage reference signal VREF. The on-die reference signal 146 is received and used by the receiver 142 to provide per-drop equalization on the data signal 134 and produce an equalized complementary metal oxide semiconductor (CMOS) level signal output 148. In a non-limiting example, the memory controller 122 may equalize the data signal 134 based on the predetermined EQ setting before transmitting to the memory circuit 124. Because the predetermined EQ setting is determined for all drops in the multi-drop memory system 120, the data signal 134 may not be best suited to the memory circuit 124. With the ability to implement per-drop equalization by dynamically adjusting the on-die reference signal 146, the memory circuit 124 is able to compensate for impacts of imperfect memory controller 122 equalization on the data signal 134, thus preserving signal integrity and improving signal robustness.[0030] Figure 4 is an exemplary flowchart illustrating a calibration and pre-access configuration process 160 for calibrating and configuring the DIMMs 104(1), 104(2) in Figure 1 prior to read and write operations. Elements of Figure 1 are referenced in connection with Figure 4 and will not be re-described herein. The calibration and pre-access configuration process 160 comprises a calibration sub-process 162 and a pre-access configuration sub-process 164.
The memory controller 102 conducts the calibration sub-process 162 on each of the DIMMs 104(1), 104(2) in the multi-drop memory system 100. As a non-limiting example, the memory controller 102 may conduct the calibration sub-process 162 at start-up of the multi-drop memory system 100, based on predetermined calibration intervals, or in response to a predetermined triggering event (e.g., temperature and/or voltage change) in the DIMMs 104(1), 104(2). During the calibration sub-process 162, the memory controller 102 determines and stores a plurality of configuration parameters for the multi-drop memory system 100, including internal timing setting, EQ setting, ODT value, and VREF value, among other configuration parameters (block 166). While the internal timing setting, ODT value, and VREF value are DIMM-dependent and specific to each of the DIMMs 104(1), 104(2), the EQ setting parameter is DIMM-independent and generic across the multi-drop memory system 100. As such, the EQ setting is generally a compromised parameter with regard to each of the DIMMs 104(1), 104(2) in the multi-drop memory system 100. That is, the EQ setting is not optimized for any particular DIMM 104, but is a best fit for all the DIMMs 104.[0031] With continuing reference to Figure 4, the memory controller 102 invokes the pre-access configuration sub-process 164 prior to accessing the DIMM 104(1) or the DIMM 104(2). Unlike the calibration sub-process 162, the pre-access configuration sub-process 164 is performed on a targeted DIMM 104 the memory controller 102 is preparing to read data from or write data to. In this regard, the memory controller 102 must first determine which of the DIMMs 104(1), 104(2) will be accessed next (block 168). The memory controller 102 then configures the targeted DIMM 104 by sending the predetermined ODT value and the predetermined EQ setting (block 170) and the VREF value (block 172) to the targeted DIMM 104.
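The calibration sub-process 162 can be sketched as follows. All helper names and returned values are hypothetical; the point is the division between per-DIMM parameters (timing, ODT, VREF) and the single DIMM-independent EQ setting:

```python
# Sketch of calibration sub-process 162: timing, ODT, and VREF are calibrated
# per DIMM, while a single generic EQ setting is chosen as a best fit for all
# DIMMs sharing the multi-drop bus. Measurement routines are stubbed out.
def calibrate_timing(dimm):
    return f"timing[{dimm}]"   # placeholder for a measured timing setting

def calibrate_odt(dimm):
    return f"odt[{dimm}]"      # placeholder for a measured ODT value

def calibrate_vref(dimm):
    return f"vref[{dimm}]"     # placeholder for a measured VREF value

def calibrate(dimms):
    per_dimm = {d: {"timing": calibrate_timing(d),
                    "odt": calibrate_odt(d),
                    "vref": calibrate_vref(d)} for d in dimms}
    # The EQ setting is DIMM-independent: one compromise value for the system.
    generic_eq = "best-fit EQ"
    return per_dimm, generic_eq

params, eq = calibrate(["104(1)", "104(2)"])
```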
The targeted DIMM 104, in response to receiving the predetermined ODT value, the predetermined EQ setting, and the VREF value, performs internal configuration to provide on-die termination and equalization as instructed by the memory controller 102. In addition to configuring the targeted DIMM 104, the memory controller 102 adjusts memory-specific internal timing for the targeted DIMM 104 (block 174) and then starts read and/or write operations on the targeted DIMM 104 (block 176).[0032] Although the calibration and pre-access configuration process 160 is generally applicable to calibration and pre-access configuration of the multi-drop memory system 120 in Figure 2, the process may be optimized for the memory circuit 124. In this regard, Figure 5 is a flowchart illustrating an exemplary calibration and pre-access configuration process 180 for calibrating and configuring the memory circuit 124 in Figure 2 prior to read and write operations. Elements of Figure 2 are referenced in connection with Figure 5 and will not be re-described herein.[0033] Similar to the calibration and pre-access configuration process 160 of Figure 4, the calibration and pre-access configuration process 180 comprises a calibration sub-process 182 and a pre-access configuration sub-process 184. The memory controller 122 conducts the calibration sub-process 182 on the memory circuit 124 in the multi-drop memory system 120. As a non-limiting example, the memory controller 122 may conduct the calibration sub-process 182 at start-up of the multi-drop memory system 120, based on predetermined calibration intervals, or in response to a predetermined triggering event (e.g., temperature and/or voltage change) in the memory circuit 124.
During the calibration sub-process 182, the memory controller 122 determines and stores a plurality of configuration parameters for the multi-drop memory system 120, including at least one internal timing, a generic EQ setting, and at least one ODT value, among other configuration parameters (block 186). In contrast to the calibration sub-process 162 described in Figure 4, the memory circuit 124 determines at least one reference signal value (e.g., VREF value) and stores the at least one reference signal value in the lookup table 138 in association with the at least one ODT value (block 188). The at least one reference signal value is determined by the memory circuit 124 to compensate for an equalization deficiency inherent in the generic EQ setting, thus preserving signal integrity and improving signal robustness at the memory circuit 124. Although the calibration activities performed by the memory controller 122 (block 186) and the calibration activities performed by the memory circuit 124 (block 188) are shown in sequential order in the exemplary flowchart, it is possible for the memory controller 122 and the memory circuit 124 to perform their respective calibration activities in parallel.[0034] With continuing reference to Figure 5, the memory controller 122 invokes the pre-access configuration sub-process 184 prior to accessing the memory circuit 124. Unlike the calibration sub-process 182, the pre-access configuration sub-process 184 is performed on the memory circuit 124 that the memory controller 122 is preparing to read data from or write data to. In this regard, the memory controller 122 must first determine which memory circuit 124 in the multi-drop memory system 120 will be accessed next (block 190). The memory controller 122 then configures the memory circuit 124 by sending the predetermined ODT value and the predetermined EQ setting (block 192) to the targeted memory circuit 124.
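A minimal sketch of the optimized flow of Figure 5, under the assumption that the memory circuit records a reference signal value for each ODT value during calibration (block 188) and later retrieves it locally when the controller sends only the ODT value and EQ setting. The class, method names, and numeric values are illustrative, not from the disclosure:

```python
class MemoryCircuit:
    """Hypothetical model of memory circuit 124's calibration bookkeeping."""

    def __init__(self):
        self.lookup_table = {}  # ODT value -> reference signal value
        self.vref = None

    def calibrate(self, odt_value, vref_value):
        # Block 188: store the reference signal value keyed by ODT value.
        self.lookup_table[odt_value] = vref_value

    def configure(self, odt_value, eq_setting):
        # Blocks 192/194: the controller transmits only the ODT value and EQ
        # setting; the circuit retrieves its calibrated VREF locally.
        self.vref = self.lookup_table[odt_value]
        return self.vref

mc = MemoryCircuit()
mc.calibrate(odt_value=60, vref_value=0.68)  # illustrative numbers
```

Because the VREF value never travels on the command bus during pre-access configuration, the signaling overhead of Figure 4's separate VREF step (block 172) is avoided.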
The memory circuit 124 in turn retrieves the predetermined reference signal value from the lookup table 138 based on the predetermined ODT value and generates the on-die reference signal 146 according to the predetermined reference signal value (block 194). In addition to configuring the memory circuit 124, the memory controller 122 adjusts memory-specific internal timing and a slew rate for the targeted memory circuit 124 (block 196) and then initiates read and/or write operations on the targeted memory circuit 124 (block 198).[0035] As previously discussed in Figure 2, with the ability to dynamically adjust the on-die reference signal 146, the memory circuit 124 is able to compensate for deficiencies associated with the predetermined EQ setting so as to preserve signal integrity and improve signal robustness. Such improvement in signal robustness can be visualized in a data eye diagram. A data eye diagram is a time-domain representation of a high frequency data signal from which the electrical quality of the signal can be visualized and characterized. In this regard, Figure 6A is an exemplary plot graph illustrating an optimal data eye diagram when an optimal reference signal (VREF) is provided to a predetermined EQ setting at a DIMM. As shown in Figure 6A, an optimal data eye 200 has an eye height 202 determined by a high voltage signal (VDDQ) 204 and a low voltage signal (VDDQ-VSWING) 206 in the vertical dimension. The optimal data eye 200 has an eye width 208 determined by a pair of cross points 210(1), 210(2). An optimal reference voltage signal (VREF) 212 produces a 50% eye crossing, computed as (VREF - (VDDQ-VSWING)) / (VDDQ - (VDDQ-VSWING)), and makes the optimal data eye 200 symmetric in both the vertical and horizontal dimensions.
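The eye-crossing formula can be checked numerically. The voltage values below are illustrative, not taken from the disclosure:

```python
def eye_crossing(vref, vddq, vswing):
    """Eye crossing per the formula in the text:
    (VREF - (VDDQ - VSWING)) / (VDDQ - (VDDQ - VSWING))."""
    v_low = vddq - vswing  # the low voltage signal level (VDDQ - VSWING)
    return (vref - v_low) / (vddq - v_low)

# With VDDQ = 1.2 V and VSWING = 0.4 V, the low level is 0.8 V, so a VREF
# of 1.0 V sits exactly midway and yields the optimal 50% eye crossing:
print(round(eye_crossing(vref=1.0, vddq=1.2, vswing=0.4), 3))   # 0.5
# A VREF shifted downward (here to 0.96 V) yields a less-than-50% crossing,
# corresponding to the degraded, asymmetric data eye of Figure 6B:
print(round(eye_crossing(vref=0.96, vddq=1.2, vswing=0.4), 3))  # 0.4
```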
A symmetric data eye indicates a high degree of integrity and robustness in the high frequency data signal received by the DIMM.[0036] Figure 6B is an exemplary plot graph illustrating how a dynamic on-die reference signal adjustment scheme employed by the memory circuit 124 of Figure 2 can help restore a degraded data eye 200(1), which results from a non-optimal reference signal (VREF) 214, back to an optimal form. Elements of Figure 2 are referenced in connection with Figure 6B and will not be re-described herein. When equalization is turned on by the memory circuit 124 in Figure 2, the swing level may be pulled toward the level of the VDDQ-VSWING 206, for example. Left uncompensated, the optimal reference signal (VREF) 212 will be shifted downward and consequently becomes the non-optimal reference signal (VREF) 214. When the non-optimal reference signal (VREF) 214 is provided to the receiver 142 in the memory circuit 124 as illustrated in Figure 6B, the non-optimal reference signal (VREF) 214 produces a less than 50% eye crossing because the non-optimal reference signal (VREF) 214 is lower than the optimal reference signal (VREF) 212 by a voltage differential 218. The downward eye crossing shift results in a reduced eye width 216 that is defined by a new pair of cross points 220(1), 220(2). Consequently, the data eye 200(1) loses symmetry and shrinks in size, indicating that the integrity and robustness of the data signal 134 have been compromised.[0037] As previously described in Figure 2, the on-die mapping logic 136 is configured to instruct the on-die reference signal generator 140 to produce the on-die reference signal 146 based on the corresponding reference signal value retrieved from the lookup table 138. In a non-limiting example, the on-die reference signal 146 is provided to the receiver 142 as the optimal reference signal (VREF) 212, which acts to override the non-optimal reference signal (VREF) 214 so as to restore symmetry of the degraded data eye 200(1).
Thus, by dynamically replacing the non-optimal reference signal (VREF) 214 with the optimal reference signal (VREF) 212, the integrity and robustness of the data signal 134 can be preserved.[0038] The memory circuit configuration schemes on multi-drop buses according to aspects disclosed herein are not limited in scope to memory systems. The circuit configuration schemes disclosed herein may be applied to any electrical circuit requiring per-drop customization on a multi-drop bus.[0039] The memory circuit configuration schemes on multi-drop buses according to aspects disclosed herein may be provided in or integrated into any processor-based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player.[0040] In this regard, Figure 7 illustrates an example of a processor-based system 222 that can employ the multi-drop memory system 120 illustrated in Figure 2. In this example, the processor-based system 222 includes one or more central processing units (CPUs) 224, each including one or more processors 226. The CPU(s) 224 may be a master device. The CPU(s) 224 may have cache memory 228 coupled to the processor(s) 226 for rapid access to temporarily stored data. The CPU(s) 224 is coupled to a system bus 230 and can intercouple master and slave devices included in the processor-based system 222. As is well known, the CPU(s) 224 communicates with these other devices by exchanging address, control, and data information over the system bus 230.
For example, the CPU(s) 224 can communicate bus transaction requests to a memory controller 232 as an example of a slave device. Although not illustrated in Figure 7, multiple system buses 230 could be provided, wherein each system bus 230 constitutes a different fabric.[0041] Other master and slave devices can be connected to the system bus 230. As illustrated in Figure 7, these devices can include a memory system 234, one or more input devices 236, one or more output devices 238, one or more network interface devices 240, and one or more display controllers 242, as examples. The input device(s) 236 can include any type of input device, including but not limited to input keys, switches, voice processors, etc. The output device(s) 238 can include any type of output device, including but not limited to audio, video, other visual indicators, etc. The network interface device(s) 240 can be any devices configured to allow exchange of data to and from a network 244. The network 244 can be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), and the Internet. The network interface device(s) 240 can be configured to support any type of communications protocol desired. The memory system 234 can include one or more memory circuits 246(0-N) connected to the memory controller 232 via at least one multi-drop bus 248. The memory circuits 246(0-N) comprise an on-die mapping logic 250(0-N), respectively. In an exemplary embodiment, the memory controller 232 may be the memory controller 122 of Figure 2. Likewise, the on-die mapping logic 250 may be the on-die mapping logic 136 of Figure 2.[0042] The CPU(s) 224 may also be configured to access the display controller(s) 242 over the system bus 230 to control information sent to one or more displays 252.
The display controller(s) 242 sends information to the display(s) 252 to be displayed via one or more video processors 254, which process the information to be displayed into a format suitable for the display(s) 252. The display(s) 252 can include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, etc.[0043] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. The master devices and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. 
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0044] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.[0045] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. 
In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.[0046] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0047] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Disclosed is a power management integrated circuit with embedded address resolution protocol functionality. In one embodiment, a device is disclosed, comprising a data storage device; and an address resolution protocol (ARP) state machine communicatively coupled to the data storage device and included within a power management integrated circuit (PMIC), wherein the ARP state machine is configured to assign an address to the data storage device and validate requests for data stored in the data storage device received over a bus.
1. A device comprising: a data storage device; and an address resolution protocol (ARP) state machine communicatively coupled to the data storage device and included within a power management integrated circuit (PMIC), wherein the ARP state machine is configured to assign an address to the data storage device and to validate requests, received over a bus, for data stored in the data storage device.
2. The device of claim 1, wherein the data storage device comprises an EEPROM storage device.
3. The device of claim 1, wherein the ARP state machine comprises hard-wired circuitry.
4. The device of claim 1, wherein the ARP state machine comprises a programmable processing element.
5. The device of claim 1, wherein the ARP state machine is further configured to store an address pool, the address pool containing addresses of the data storage device.
6. The device of claim 1, wherein the bus comprises an I2C or SMBus bus.
7. The device of claim 1, wherein the ARP state machine is configured to validate a request for data stored in the data storage device by determining whether a sender address of the request was previously authenticated via the ARP protocol.
8. The device of claim 1, wherein the ARP state machine is configured to validate a request for data stored in the data storage device by determining whether the sender of the request is a whitelisted sender.
9. The device of claim 1, wherein the ARP state machine is configured to reject a request for data stored in the data storage device upon determining that the sender of the request is not an authorized sender.
10. A power management integrated circuit (PMIC), comprising: a power supply subsystem including a plurality of switch drivers, a plurality of voltage regulators, and a sequencer; and an address resolution protocol (ARP) state machine communicatively coupled to a data storage device, the ARP state machine configured to assign an address to the data storage device and to validate requests, received over a bus, for data stored in the data storage device.
11. The PMIC of claim 10, wherein the ARP state machine comprises hard-wired circuitry.
12. The PMIC of claim 10, wherein the ARP state machine comprises a software-based state machine.
13. The PMIC of claim 10, wherein the ARP state machine is further configured to store an address pool, the address pool containing the address of the data storage device.
14. The PMIC of claim 10, wherein the ARP state machine is configured to validate a request for data stored in the data storage device by determining whether a sender address of the request was previously authenticated via the ARP protocol.
15. The PMIC of claim 10, wherein the ARP state machine is configured to validate a request for data stored in the data storage device by determining whether the sender of the request is a whitelisted sender.
16. The PMIC of claim 10, wherein the ARP state machine is configured to reject a request for data stored in the data storage device upon determining that the sender of the request is not an authorized sender.
17. A method comprising: receiving a data request at an address resolution protocol (ARP) state machine embedded in a power management integrated circuit (PMIC) of a storage device; validating the data request via the ARP state machine; and in response to the request, returning one or more data items from the storage device.
18. The method of claim 17, wherein returning the one or more data items is performed by a controller device.
19. The method of claim 17, wherein returning the one or more data items is performed by the ARP state machine.
20. The method of claim 17, further comprising assigning an ARP address to the data storage device via the ARP state machine and using the ARP address as the address when validating the data request.
Power Management Integrated Circuit with Embedded Address Resolution Protocol Circuitry
Related Application
This application claims the benefit of the filing date of U.S. Patent Application No. 15/990,497, filed on May 25, 2018 and entitled "POWER MANAGEMENT INTEGRATED CIRCUIT WITH EMBEDDED ADDRESS RESOLUTION PROTOCOL CIRCUITRY", the entire disclosure of which is hereby incorporated by reference.
Technical Field
At least some of the embodiments disclosed herein relate generally to power management integrated circuits (PMICs) and, more specifically but without limitation, to PMICs with embedded address resolution protocol (ARP) circuitry.
Background
A memory system may be a storage system, such as a solid state drive (SSD), and may include one or more memory components that store data. For example, the memory system may include memory devices such as non-volatile memory devices and volatile memory devices. In general, a host system can use the memory system to store data at the memory devices of the memory system and to retrieve data stored at the memory system.
The memory system may include a PMIC for managing the power requirements of the memory system in which the PMIC is configured. A PMIC usually contains electronic power conversion circuitry and related power control functions. The PMIC additionally provides programmable control of its functionality. For example, the PMIC can be reconfigured to change the power sequence, output voltages, and various other functions of the PMIC.
Certain dedicated hardware is included in the PMIC package to support access to data about the PMIC and/or data about devices regulated by the PMIC, such as vital product data (VPD). Existing techniques for providing access to data such as VPD involve using a dedicated microcontroller, or functionality contained in a memory controller, to perform authentication and validation of data requests.
The inclusion of additional hardware to provide access to the data increases complexity and reduces the security of access to the data. In addition, the additional components for managing the data need to be located in the always-on domain of the storage device, so additional power is required in the sleep state.
Brief Description of the Drawings
The embodiments are shown by way of example and not limitation in the figures of the accompanying drawings, in which similar reference numerals indicate similar elements.
FIG. 1 is a block diagram of a storage device according to some embodiments of the present disclosure.
FIG. 2A is a flowchart illustrating a method for implementing address resolution protocol (ARP) functionality in a storage device according to some embodiments of the present disclosure.
FIG. 2B is a flowchart illustrating a method for validating a data request from an external device according to some embodiments of the present disclosure.
FIG. 3 illustrates an example computing environment including a memory system according to some embodiments of the present disclosure.
Detailed Description
Aspects of the present disclosure relate to power management integrated circuits (PMICs) in memory systems. An example of a memory system is a storage system, such as a solid state drive (SSD). In some embodiments, the memory system is a hybrid memory/storage system. Generally speaking, a host system can utilize a memory system that includes one or more memory devices. A memory device may include a medium. The medium may be a non-volatile memory device, such as a NAND flash device. The host system may provide write requests to store data at a memory device of the memory system and may provide read requests to retrieve data stored at the memory system. The memory system may include a controller that manages the memory devices to perform operations such as reading data, writing data, or erasing data, among other such operations.
In this document, a storage system (hereinafter also referred to as a storage device) is used as an example of a memory system.
FIG. 1 is a block diagram of a storage device according to some embodiments of the present disclosure.
The storage device 116 shown in FIG. 1 includes a PMIC 100, a controller 110, a data storage device 112, and a bus 114. The PMIC 100 includes switch drivers 102, voltage regulators 104, a sequencer 106, and an address resolution protocol (ARP) state machine 108. In one embodiment, the PMIC 100 is connected to a host device via the bus 114, such as an I2C or SMBus bus. A host application (not shown) runs on an external computing device that issues read or write requests to the data storage device 112. In the illustrated embodiment, the device 116 is configured to receive and transmit commands via the bus 114 and to provide access to the data storage device 112 via the resolution performed by the ARP state machine 108 and the mediation performed by the controller 110. In some implementations, the data storage device 112 may include a read-only storage device or partition that stores data such as VPD.
The PMIC 100 has one or more voltage regulators 104 that convert power from a source external to the PMIC 100 into the operating voltages used by various components of one or more devices (e.g., solid-state storage devices, DRAM, etc.) powered by the PMIC 100. The PMIC 100 includes a plurality of switch drivers 102 that provide control signals for load switches (not shown) that selectively enable and disable power to the supported devices. The PMIC 100 includes a sequencer 106 that arranges power-related events according to a desired sequence of operations of the supported devices, including the sequence of operations of the voltage regulators 104 and the switch drivers 102.
In one embodiment, the switch drivers 102, the voltage regulators 104, and the sequencer 106 may comprise a power subsystem of the PMIC 100.

The PMIC 100 additionally includes an ARP state machine 108. In one embodiment, the ARP state machine 108 comprises hard-wired circuitry. In an alternative embodiment, the ARP state machine 108 may comprise a software-based state machine. In the illustrated embodiment, the ARP state machine 108 receives messages via the bus 114. In one embodiment, these messages include ARP messages that resolve the address of the storage device 116. In one embodiment, the ARP state machine 108 is designed to conform to standardized ARP functionality, such as the functionality defined in version 3.1 of the System Management Bus (SMBus) specification.

In addition to providing ARP functionality, the ARP state machine 108 is also configured to allow or disallow external access to the data storage device 112 via the controller 110. In one embodiment, the ARP state machine 108 controls the resolution of the address of the device 116 by external devices. The ARP state machine 108 therefore acts as a gateway that prevents unauthorized access to the controller 110 and the data storage device 112. For example, if an external device attempts to access the controller 110 but fails to properly authenticate with the ARP state machine 108, the request is rejected. Alternatively, or in combination with the above, the PMIC 100 may obfuscate the address of the controller and provide the correct address only to external devices that obtain it through the ARP registration procedure implemented by the ARP state machine 108.

As shown, a request for data stored in the data storage device 112 may be mediated by the controller 110. In one embodiment, the controller 110 includes logic or circuitry that allows or denies access to the data storage device 112.
In one embodiment, the controller may determine whether a data request is properly addressed to the network location of the data storage device 112. To this end, the ARP state machine 108 communicates with the controller 110 to assign the controller 110 a known address. Therefore, only an authorized device can receive the actual address of the data storage device 112, via the ARP resolution procedure (as defined in version 3.1 of the System Management Bus (SMBus) specification). Alternatively, or in combination with the above, once configured, the data storage device 112 can be accessed by an external device via the bus 114 without mediation by the controller. In this embodiment, the controller 110 can be powered off (for example, when in a low-power or sleep state) while the data storage device 112 is still powered by the PMIC 100 and remains accessible via the bus 114. In this embodiment, the ARP state machine 108 is configured to process incoming messages and route them to the data storage device 112.

In one embodiment, vital product data (VPD) is stored in the data storage device 112. In one embodiment, the data storage device 112 may comprise any type of persistent storage, such as EEPROM, ROM, flash, or similar storage technology. Generally, the data storage device 112 stores data such as part numbers, serial numbers, engineering change levels, drive health data, and other vital data related to the device 116.

FIG. 2A is a flowchart illustrating a method for implementing Address Resolution Protocol (ARP) functionality in a storage device according to some embodiments of the present disclosure.

In block 202, the method receives an ARP Prepare message. In one embodiment, the ARP Prepare message indicates that an external device is initiating the ARP process.

In block 204, the method receives a Get Unique Device Identifier (UDID) command from the ARP master device.
This command may be directed to a single device, or it may be a general command issued to all ARP-capable devices.

In block 206, the method returns the UDID to the ARP master. The UDID is the unique identifier of the device implementing the method and includes the device capabilities, the UDID version number, a vendor identifier, a device identifier, a protocol-layer interface list, a subsystem vendor identifier, a subsystem device identifier, and vendor-specific identifiers.

In block 208, the method confirms the device address. In one embodiment, block 208 is performed by the ARP master device. In one embodiment, the method confirms the device address by performing one or more of the following: confirming the number of bytes, verifying whether the slave address of the ARP-capable device is fixed, and determining whether the returned slave address is in the pool of addresses currently in use. Once the device address is confirmed, the ARP master device issues an Assign Address command. In one embodiment, the ARP master device selects an address from the pool of currently unused addresses.

In block 210, the method receives the Assign Address command, adopts the received address as the new address of the device, and returns an acknowledgment. In some embodiments, if the device is not responsible for processing the Assign Address command, the method may return a negative acknowledgment.

In block 212, the ARP master device updates the address pool after receiving the acknowledgment. In this block, the method continues to receive acknowledgments and builds a pool of ARP device addresses for all devices on the bus.

In one embodiment, the ARP master device comprises the PMIC shown in FIG. 1. In this embodiment, the ARP state machine is responsible for maintaining the address pool (blocks 206 and 212), while the controller executes the remaining blocks 202, 204, 206, 210.
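The assignment flow of blocks 202-212 can be sketched in Python. This is a simplified model, not an implementation of the disclosure: the class and method names are illustrative, and real SMBus ARP messages are byte-level bus transactions defined in the SMBus 3.x specification.

```python
# Illustrative sketch of the ARP address-assignment flow of FIG. 2A.
# Names are hypothetical; real ARP exchanges are SMBus byte transactions.

class ArpMaster:
    def __init__(self, free_addresses):
        self.free = list(free_addresses)   # pool of unused slave addresses
        self.assigned = {}                 # UDID -> assigned address

    def run_arp(self, devices):
        for dev in devices:
            dev.prepare_to_arp()                   # block 202
            udid = dev.get_udid()                  # blocks 204/206
            if udid in self.assigned:              # block 208: confirm address
                continue                           # already in the pool
            addr = self.free.pop(0)                # pick an unused address
            if dev.assign_address(addr):           # block 210: ACK expected
                self.assigned[udid] = addr         # block 212: update pool
        return self.assigned

class Device:
    def __init__(self, udid):
        self.udid = udid
        self.address = None

    def prepare_to_arp(self):
        self.address = None                        # return to un-addressed state

    def get_udid(self):
        return self.udid

    def assign_address(self, addr):
        self.address = addr
        return True                                # acknowledge

master = ArpMaster(free_addresses=[0x10, 0x11, 0x12])
pool = master.run_arp([Device("udid-pmic"), Device("udid-vpd")])
```

After the loop, `pool` maps each UDID to the slave address assigned from the unused pool, mirroring the address pool the ARP master builds in block 212.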
In this way, the ARP state machine manages the addresses assigned to the controller and, by proxy, to the VPD storage device. After initializing the addresses of all slave devices, the ARP state machine is further configured to process non-ARP messages received over the bus.

FIG. 2B is a flowchart illustrating a method for verifying a data request from an external device according to some embodiments of the present disclosure.

In block 214, the method receives a data request at the ARP address. In one embodiment, the data request comprises a VPD request. In one embodiment, the ARP state machine coordinates all data requests received from external devices. In one embodiment, the data request includes a list of the vital product data requested from the data storage device.

In block 216, the method verifies the request. In one embodiment, the method may verify the request by determining whether the sender's address has previously been authenticated via ARP, that is, whether the sender's address is in the known address pool. In an alternative embodiment, the method may verify the request by determining whether the request contains a device identifier, a vendor identifier, or a similar identifier that appears on a whitelist.

In block 218, the method determines whether the request is valid based on the result of block 216. If the method determines that the request is invalid, the method ends. Alternatively, if the method determines that the request is valid, the method proceeds to block 220, where the data is retrieved from the data storage device.

In an optional embodiment, the method may enable the controller before retrieving the data in block 220. In one embodiment, the controller may be configured to ignore all requests for vital product data to prevent unauthorized access to the data storage device. In the illustrated embodiment, based on the ARP resolution described in FIG.
2A, the method selectively enables read access through the controller in response to verifying the received request. Once the controller is enabled, the parameters of the data request are transferred to the controller, and the controller accesses the data storage device (e.g., an EEPROM).

In block 224, the method transmits the retrieved data back to the requesting device. In one embodiment, the controller itself is connected to the bus on which the data request was received; therefore, in block 224, the controller may return the data on the bus. Alternatively, the ARP state machine can receive the data from the controller and use the returned data to respond to the data request.

FIG. 3 illustrates an example computing environment 300 including a memory system 310 according to some embodiments of the present disclosure. The memory system 310 may include media, such as memory devices 312A through 312N. The memory devices 312A to 312N may be volatile memory devices, non-volatile memory devices, or a combination thereof. In some embodiments, the memory system is a storage system; an example of a storage system is an SSD. In some embodiments, the memory system 310 is a hybrid memory/storage system. Generally speaking, the computing environment 300 may include a host system 320 that uses the memory system 310. In some implementations, the host system 320 can write data to, and read data from, the memory system 310.

The host system 320 may be a computing device, such as a desktop computer, a notebook computer, a network server, a mobile device, or any computing device that includes a memory and a processing device. The host system 320 may include, or be coupled to, the memory system 310 such that the host system 320 can read data from, or write data to, the memory system 310. The host system 320 may be coupled to the memory system 310 via a physical host interface. As used herein, "coupled to" generally refers to a connection between components.
This connection may be an indirect communication connection or a direct communication connection (for example, without intermediate components), and may be wired or wireless, including, for example, electrical, optical, and magnetic connections. Examples of physical host interfaces include, but are not limited to, an SMBus interface, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), and so on. The physical host interface can be used to transfer data between the host system 320 and the memory system 310. When the memory system 310 is coupled with the host system 320 through a PCIe interface, the host system 320 may further utilize the NVM Express (NVMe) interface to access the memory devices 312A to 312N. The physical host interface may provide an interface for passing control, address, data, and other signals between the memory system 310 and the host system 320.

The memory devices 312A to 312N may include any combination of different types of non-volatile memory devices and/or volatile memory devices. An example of a non-volatile memory device is NAND-type flash memory. Each of the memory devices 312A to 312N may include one or more arrays of memory cells, such as single-level cells (SLC) or multi-level cells (MLC) (e.g., triple-level cells (TLC) or quad-level cells (QLC)). In some implementations, a particular memory device may include both an SLC portion and an MLC portion of memory cells. Each of the memory cells may store data bits (e.g., data blocks) for use by the host system 320. Although non-volatile memory devices such as NAND-type flash memory are described, the memory devices 312A to 312N may be based on any other type of memory, such as volatile memory.
In some embodiments, the memory devices 312A to 312N may be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase-change memory (PCM), magnetic random access memory (MRAM), NOR flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable, cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the memory cell having been previously erased. Furthermore, the memory cells of the memory devices 312A to 312N may be grouped into memory pages or data blocks, which may refer to a unit of the memory device used to store data.

The controller 315 may communicate with the memory devices 312A to 312N to perform operations such as reading data, writing data, or erasing data at the memory devices 312A to 312N, and other such operations. The controller 315 may include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 315 may be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.), or another suitable processor. The controller 315 may include a processor (processing device) 317 configured to execute instructions stored in a local memory 319.
In the illustrated example, the local memory 319 of the controller 315 includes embedded memory configured to store instructions for performing the various processes, operations, logic flows, and routines that control operation of the memory system 310, including handling communications between the memory system 310 and the host system 320. In some implementations, the local memory 319 may include memory registers storing, for example, memory pointers, fetched data, and the like. The local memory 319 may also include read-only memory (ROM) for storing microcode. Although the example memory system 310 in FIG. 3 is shown as including the controller 315, in another embodiment of the present disclosure the memory system 310 may not include a controller and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory system).

In general, the controller 315 can receive commands or operations from the host system 320 and can convert them into instructions or appropriate commands to achieve the desired access to the memory devices 312A to 312N. The controller 315 may be responsible for other operations, such as wear-leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translation between logical block addresses and physical block addresses associated with the memory devices 312A to 312N. The controller 315 may further include host interface circuitry to communicate with the host system 320 via the physical host interface. The host interface circuitry may convert commands received from the host system into command instructions to access the memory devices 312A to 312N, and may convert responses associated with the memory devices 312A to 312N into information for the host system 320.

The memory system 310 may also include additional circuitry or components that are not shown.
In some embodiments, the memory system 310 may include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 315 and decode the address to access the memory devices 312A to 312N.

The memory system 310 may include a PMIC 311 (for example, the PMIC 100 of FIG. 1). The memory system 310 may include additional circuitry, such as the circuitry shown in FIG. 1.

In this specification, various functions and operations may be described as being performed by or caused by computer instructions to simplify the description. However, those skilled in the art will recognize that what is meant by such expressions is that the functions result from the execution of the computer instructions by one or more controllers or processors (e.g., microprocessors). Alternatively, or in combination, the functions and operations may be implemented using special-purpose circuitry, with or without software instructions, such as an application-specific integrated circuit (ASIC) or a field programmable gate array (FPGA). Embodiments can be implemented using hard-wired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.

While some embodiments can be implemented in fully functioning computers and computer systems, the various embodiments are capable of being distributed as a computing product in a variety of forms and can be applied regardless of the particular type of machine or computer-readable medium used to actually effect the distribution.

At least some aspects disclosed can be embodied, at least in part, in software.
That is, the techniques may be carried out in a computer system or other data processing system in response to its processor (such as a microprocessor or microcontroller) executing sequences of instructions contained in a memory (such as ROM, volatile RAM, non-volatile memory, cache, or a remote storage device).

Routines executed to implement the embodiments may be implemented as part of an operating system or as a specific application, component, program, object, module, or sequence of instructions referred to as a "computer program." A computer program typically comprises one or more sets of instructions, stored at various times in various memory and storage devices in a computer, that, when read and executed by one or more processors in the computer, cause the computer to perform the operations necessary to execute the elements involving the various aspects.

A tangible, non-transitory computer storage medium can be used to store software and data which, when executed by a data processing system, causes the system to perform various methods. The executable software and data may be stored in various places including, for example, ROM, volatile RAM, non-volatile memory, and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions, or in the same communication session. The data and instructions can be obtained in their entirety prior to the execution of the application. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution.
Thus, it is not required that the data and instructions be on a machine-readable medium in their entirety at a particular instant of time.

Examples of computer-readable storage media include, but are not limited to, recordable and non-recordable type media such as volatile and non-volatile memory devices, read-only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, and optical storage media (e.g., compact disc read-only memory (CD-ROM), digital versatile discs (DVDs), etc.), among others. The instructions may be embodied in a transitory medium, such as electrical, optical, acoustical, or other forms of propagated signals, e.g., carrier waves, infrared signals, digital signals, etc. A transitory medium is typically used to transmit instructions, but is not viewed as capable of storing the instructions.

In various embodiments, hard-wired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system.

Although some of the drawings illustrate a number of operations in a particular order, operations that are not order dependent may be reordered and other operations may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art, and so an exhaustive list of alternatives is not presented. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software, or any combination thereof.

The above description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description.
References in this disclosure to "one embodiment" or "an embodiment" do not necessarily refer to the same embodiment; such references mean at least one embodiment.

In the foregoing specification, the present disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
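The request-verification flow of FIG. 2B (blocks 214-220) can be sketched in Python. The function names, the whitelist format, and the example addresses are illustrative assumptions; a real implementation operates on SMBus byte transactions inside the ARP state machine.

```python
# Illustrative sketch of the FIG. 2B verification flow.
# Names, addresses, and the whitelist format are hypothetical.

KNOWN_ADDRESS_POOL = {0x10, 0x11, 0x12}      # addresses assigned via ARP
VENDOR_WHITELIST = {"vendor-a", "vendor-b"}  # alternative verification path

def verify_request(sender_addr, vendor_id=None):
    """Block 216: valid if the sender was authenticated via ARP,
    or (alternatively) if its vendor identifier is whitelisted."""
    if sender_addr in KNOWN_ADDRESS_POOL:
        return True
    return vendor_id in VENDOR_WHITELIST

def handle_data_request(sender_addr, vpd_store, keys, vendor_id=None):
    # Block 214: a VPD request arrives at the ARP address.
    if not verify_request(sender_addr, vendor_id):   # block 218
        return None                                  # invalid: method ends
    # Block 220: retrieve the requested VPD items from the data store.
    return {k: vpd_store[k] for k in keys if k in vpd_store}

vpd = {"part_number": "PN-1234", "serial": "SN-5678"}
ok = handle_data_request(0x10, vpd, ["serial"])      # authenticated sender
denied = handle_data_request(0x55, vpd, ["serial"])  # unknown sender
```

The known-address check corresponds to the pool built during ARP resolution; an unverified sender receives no data, matching the "method ends" branch of block 218.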
Generally, this disclosure provides systems, devices, methods and computer readable media for secure control of access control enablement and activation on self-encrypting storage devices. In some embodiments, the device may include a non-volatile memory (NVM) and a secure access control module. The secure access control module may include a command processor module configured to receive a request from a user to enable access controls of the NVM, and to enable the access controls. The secure access control module may also include a verification module configured to verify a physical presence of the user. The secure access control module may further include an encryption module to encrypt at least a portion of the NVM in response to an indication of success from the verification module.
CLAIMS

What is claimed is:

1. A storage device comprising: a non-volatile memory (NVM); and a secure access control module comprising: a command processor module to receive a request to enable access controls of said NVM, from a user, and to enable said access controls; a verification module to verify a physical presence of said user; and an encryption module to allow encryption of at least a portion of said NVM in response to an indication of success from said verification module.

2. The system of claim 1, wherein said secure access control module implements Opal Storage Specification access controls.

3. The system of claim 1 or 2, further comprising a random number generator to generate a random number and update a Security Identifier (SID) associated with said access controls to said random number.

4. The system of claim 1 or 2, wherein said verification of said physical presence of said user is based on receiving a Physical Security Identifier (PSID) from said user, said PSID associated with said storage device.

5. The system of claim 4, wherein said PSID is displayed on a housing of said storage device.

5a. The system of claim 4, wherein said PSID is provided in a visually observable manner in association with said storage device.

6. The system of claim 1 or 2, wherein said secure access control module is further to perform a revert operation of said storage device, if said verification of said physical presence is successful.

7. The system of claim 6, wherein said revert operation restores said SID to a Manufacturer Security Identifier (MSID).

8. The system of claim 1 or 2, wherein said secure access control module is further to allow configuration of said access controls of said NVM if said verification of said physical presence is successful.

9. The system of claim 1 or 2, wherein said NVM is a solid state drive (SSD).

10.
The system of claim 1 or 2, wherein said secure access control module is further to communicate with a host system through an interface module and a storage bus, said interface module to implement one of a Serial Advanced Technology Attachment (SATA) interface, a Serial Attached Small Computer System (SAS) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Flash Storage (UFS) interface or an embedded Multimedia Controller interface (eMMC).

11. A method for secure control of a storage device, said method comprising: receiving a request, from a user, to enable access controls of a non-volatile memory (NVM) of said storage device; enabling said access controls in response to said request; verifying a physical presence of said user; and allowing activation of self-encryption of said NVM in response to success of said verifying.

12. The method of claim 11, wherein said storage device implements Opal Storage Specification access controls.

13. The method of claim 11 or 12, wherein said enabling of said access controls further comprises generating a random number and updating a Security Identifier (SID) associated with said access controls to said random number.

14. The method of claim 11 or 12, wherein said verifying of said physical presence of said user further comprises receiving a Physical Security Identifier (PSID) from said user, said PSID associated with said storage device.

15. The method of claim 14, wherein said PSID is displayed on a housing of said storage device.

16. The method of claim 11 or 12, further comprising performing a revert operation of said storage device, in response to success of said verifying.

17. The method of claim 16, wherein said revert operation further comprises restoring said SID to a Manufacturer Security Identifier (MSID).

18. The method of claim 11 or 12, further comprising allowing configuration of said access controls of said NVM in response to success of said verifying.

19.
A mobile platform comprising: a processor; a display element coupled to said processor; and a solid state drive (SSD) storage device coupled to said processor, said SSD comprising: a non-volatile memory (NVM); and a secure access control module comprising: a command processor module to enable access controls of said NVM in response to a request from said processor; a verification module to verify a physical presence of a user; and an encryption module to allow encryption of at least a portion of said NVM in response to an indication of success from said verification module.

20. The mobile platform of claim 19, wherein said secure access control module implements Opal Storage Specification access controls.

21. The mobile platform of claim 19 or 20, wherein said verification of said physical presence of said user is based on receiving a Physical Security Identifier (PSID) from said user, said PSID associated with said storage device.

22. The mobile platform of claim 21, wherein said PSID is displayed on a housing of said storage device.

23. The mobile platform of claim 19 or 20, wherein said secure access control module is further to perform a revert operation of said storage device, if said verification of said physical presence is successful.

24. The mobile platform of claim 23, wherein said revert operation restores said SID to a Manufacturer Security Identifier (MSID).

25. The mobile platform of claim 19 or 20, wherein said secure access control module is further to allow configuration of said access controls of said NVM if said verification of said physical presence is successful.

26.
The mobile platform of claim 19 or 20, wherein said secure access control module is further to communicate with said processor through an interface module and a storage bus, said interface module to implement one of a Serial Advanced Technology Attachment (SATA) interface, a Serial Attached Small Computer System (SAS) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Flash Storage (UFS) interface or an embedded Multimedia Controller interface (eMMC).

27. The mobile platform of claim 19 or 20, wherein said mobile platform is a smart phone, smart tablet, notebook or laptop computer.
SECURE CONTROL OF SELF-ENCRYPTING STORAGE DEVICES

Inventors: Shankar Natarajan, Jason Cox, Charles B. Foster, Hinesh K. Shah

FIELD

The present disclosure relates to self-encrypting storage devices, and more particularly, to self-encrypting storage devices with secure control of access control enablement and activation.

BACKGROUND

Storage drives, for example solid state drives (SSDs) or hard disk drives (HDDs), are often configured to provide security features including self-encryption and access control. These security features are designed to prevent a data breach in the event of physical loss or theft of the storage drive or the device containing the drive.

BRIEF DESCRIPTION OF THE DRAWINGS

Features and advantages of embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals depict like parts, and in which:

Figure 1 illustrates a top level system diagram of an example embodiment consistent with the present disclosure;

Figure 2 illustrates a block diagram of one example embodiment consistent with the present disclosure;

Figure 3 illustrates a flowchart of operations of one example embodiment consistent with the present disclosure;

Figure 4 illustrates a flowchart of operations of another example embodiment consistent with the present disclosure; and

Figure 5 illustrates a system diagram of a platform of another example embodiment consistent with the present disclosure.

Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.

DETAILED DESCRIPTION

Security features provided by storage devices may typically be enabled or disabled by the manufacturer in a fixed manner.
It would generally be desirable, however, to provide a capability that allows the end user to enable or disable these types of security features, for example through a software-configurable device setting, without compromising the integrity of the drive. This would avoid the requirement for a user to purchase different devices depending on their security needs, and would simplify logistics for manufacturers and suppliers who would otherwise need to manage separate product lines. Providing a user enable/disable capability, however, may present a security threat, since a malicious attacker could potentially enable the security feature remotely and take ownership of the drive by setting new access control authentication credentials. This would lock out the legitimate user, who may not even be aware that security is enabled on the drive.

Generally, this disclosure provides systems, devices, methods and computer readable media for secure control of access control enablement and activation on self-encrypting storage devices. In one embodiment, the storage device may include a non-volatile memory (NVM) and a secure access control module. The secure access control module may be configured to process commands received from a user or host system, including a request to enable access controls of the NVM. The secure access control module may further be configured to verify a physical presence of the user. Physical presence of the user may be verified by requiring the user to provide the Physical Security Identifier (PSID) associated with the storage device, which can generally be obtained only in a limited manner, such as, for example, by reading a physical label on the storage device. The secure access control module may further be configured to allow the user to activate and provision access controls if the physical presence verification is successful and after a revert operation is performed.
The secure access control module may further include an encryption module configured to encrypt at least a portion of the NVM when access controls have been activated. The NVM may include, or otherwise be configured as, a Solid State Drive (SSD) or a magnetic disk in a Hard Disk Drive (HDD). Any suitable method of encryption may be used including, for example, the Advanced Encryption Standard (AES), the Data Encryption Standard (DES) and the International Data Encryption Algorithm (IDEA). In some embodiments, the enablement of access controls may be considered an initialization or set-up activity of the storage device, to be performed by the user/owner of the storage device during an initial phase of deployment.

As used herein, the terms "enablement," "activation," and "provisioning," with respect to access controls, are defined as follows. Regarding "enablement," the access control capabilities of the device may be supported (embedded in hardware or software of the device, by the manufacturer) but remain in a disabled or hidden state until enablement is performed. After a successful enablement, activation may be performed to turn on the access controls so that portions of the NVM are encrypted or otherwise locked for security. Activation may also be accompanied by provisioning, which is an operation to configure the access controls (e.g., provide additional authentication credentials for administrators and/or users, specify regions of the NVM for encryption, etc.).

Figure 1 illustrates a top level system diagram 100 of one example embodiment consistent with the present disclosure. A host system 104 is shown coupled to a self-encrypting storage device with secure control capability 110. The secure control capability of the storage device will be described in greater detail below.
In some embodiments, the host system 104 may be, for example, a desktop computer, workstation, laptop computer, convertible tablet, notebook, smart phone, smart tablet, personal digital assistant (PDA) or mobile Internet device (MID).

The host system 104 may be coupled to the storage device 110 through interface modules 108a, 108b and storage bus 130, which may be configured as a Serial Advanced Technology Attachment (SATA) interface, a Serial Attached Small Computer System (SAS) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Flash Storage (UFS) interface, an embedded Multimedia Controller (eMMC) interface or any other suitable type of interface. The SATA and SAS interfaces may comply with ANSI standards managed by the T13 (www.t13.org) and T10 (www.t10.org) technical committees. The PCIe interface may comply with the PCI-SIG standard (www.pcisig.com). The UFS and eMMC interfaces may comply with JEDEC standards (www.jedec.org). The storage device 110 described in this disclosure may be configured as a solid state drive (SSD). In some embodiments, the storage device 110 may include a hard disk drive (HDD).

An intended or legitimate user 102 may access the storage device 110 through the host system 104, interface 108 and bus 130. Similarly, a remote attacker or malicious user 106 may attempt to access the storage device 110 and attempt to enable access controls (and self-encryption of the device) to the detriment of the intended user 102. The secure control capability of the storage device 110 may be configured, however, to defeat such attempts, as will be described below.

Figure 2 illustrates a block diagram 200 of one example embodiment consistent with the present disclosure.
The storage device 110 is shown to include a secure access control module 204, a storage device side interface module 108b and an NVM 220. The storage device 110 and/or the secure access control module 204 may be configured to implement, comply with, or otherwise be compatible with the Opal Storage Specification: "TCG Storage Security Subsystem Class: Opal," Specification Version 1.00, February 4, 2010 of the Trusted Computing Group (TCG), including current, previous and future versions of that specification. The storage device 110 may also be referred to as a "Trusted Peripheral" in Opal terminology. Although operations will be described here in the context of Opal, it will be appreciated that these techniques may be applied to other similarly purposed storage device security systems.

The secure access control module 204 is shown to include a command processor module 212, a verification module 214, an encryption module 216, a random number generator 218 and storage for a Security Identifier (SID) 206, a PSID 208 and a Manufacturer Security Identifier (MSID) 210.

The command processor module 212 may be configured to receive requests from a user or host system including a request to enable or disable the secure access control features of the NVM 220. Any required encryption or decryption of one or more portions (e.g., address ranges) of the NVM 220 may be performed by encryption module 216 as appropriate. The command processor module 212 may also be configured to receive the associated verification credentials (SID, PSID, etc.) that may be required from the user for these operations. Verification module 214 may be configured to perform the verification operations, as will be described below, to verify the credentials and physical presence of the user.

In some embodiments, a software application is provided by the manufacturer or an independent software vendor to send the appropriate configuration commands, as specified in the TCG Opal specification, to the storage device.
In an embodiment, the software application issues a sequence of commands, called methods in the TCG specifications, to perform configuration and provisioning operations. Prior to initiating a session, the software application invokes the Level 0 discovery command and the Properties method to determine the capabilities of the secure access control module 204 (e.g., the Opal security subsystem).

The StartSession method is used by the software application to initiate a communications session between the host system 104 and the storage device 110. This method can also pass a credential, such as the PSID or SID, to the storage device for authentication. The storage device is configured to authenticate the credential and responds with success if the credential is successfully authenticated.

After successful authentication of the SID credential and initiation of a session, the software application invokes the Activate method, which is used to activate the locking and encryption management functionality supplied by the Opal subsystem in the storage device. The session is then ended by the software application.

Once locking and encryption management have been activated, the software application invokes StartSession to initiate a new session and authenticate an Admins credential, in order to satisfy the access control requirements necessary to perform configuration and provisioning operations, such as setting User passwords and access controls.

The software application invokes the Get method in a session in order to retrieve metadata from tables in the subsystem, which are data structures employed to store configurations and metadata. The software application invokes the Set method in a session to configure Users and Admins passwords, and to configure the device to lock when the device power cycles.

The MSID 210 is an identifier, for example an alphanumeric value, which is used as a default credential for the storage device.
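The host-side provisioning sequence described above can be summarized schematically as an ordered list of method invocations. The sketch below is an informal illustration only: `opal_invoke` is a hypothetical transport helper (the real TCG methods carry session identifiers and table references), and the table name `"LockingInfo"` is an assumed placeholder.

```python
# Schematic outline of the host-side Opal provisioning sequence.
# opal_invoke is a hypothetical helper that sends one TCG method
# invocation to the drive and returns its response.
def provision_drive(opal_invoke, sid_credential, admin_credential, user_password):
    # 1. Discover device capabilities before opening any session.
    caps = opal_invoke("Level0Discovery")
    opal_invoke("Properties")

    # 2. Authenticate the SID credential and activate locking/encryption.
    opal_invoke("StartSession", credential=sid_credential)
    opal_invoke("Activate")
    opal_invoke("EndSession")

    # 3. Re-open a session as Admin to perform provisioning.
    opal_invoke("StartSession", credential=admin_credential)
    opal_invoke("Get", table="LockingInfo")        # read metadata tables
    opal_invoke("Set", user_password=user_password,
                lock_on_power_cycle=True)          # configure Users, locking
    opal_invoke("EndSession")
    return caps
```

A host application would supply a real transport for `opal_invoke` (e.g., one that frames the method over the storage bus); the point here is only the ordering of discovery, activation, and provisioning.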
The MSID 210 is encoded or otherwise stored in a reserved location of non-volatile memory that is outside of the region of encrypted data of the non-volatile memory of the storage device 110. The MSID is accessed by a user/host system through the interface 108 using an appropriate set of commands. Generally, the MSID, once set by the manufacturer, cannot be changed by the user.

The SID 206 is a security identifier, for example an alphanumeric value, or credential that is associated with the owner or legitimate user of the storage device 110. The SID 206 is typically initialized by the manufacturer to a default value that is set to the MSID 210 and can subsequently be changed by the user to enforce access controls on the device 110. The SID 206 can also be stored in a non-volatile memory of the storage device, for example, outside of the region of encrypted data.

The PSID 208 is a physical security identifier, for example an alphanumeric value, or credential that is associated with and unique to the storage device 110. In an embodiment, the PSID 208 can be generated by the manufacturer and stored in a non-volatile memory of the storage device that is inaccessible through the interface 108. In other words, the PSID 208 cannot be read or otherwise discovered by any entity external to the device 110 through any electronic method. The PSID 208 can, however, be printed on a label attached to the device 110, or otherwise made available, for example through some visual method, to a user located in physical proximity to the device 110. In some embodiments, the PSID is printed or otherwise visually accessible on the housing of the storage device 110 or on the housing of a system within which the storage device 110 is incorporated. A remote attacker 106 can therefore be prevented from obtaining the PSID 208.
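The three credentials just described differ mainly in who sets them and how they can be read. The following compact summary is an informal restatement of the description above (the field names are illustrative, not part of any specification):

```python
# Informal summary of the three credential types described above.
# Field names are illustrative only.
CREDENTIALS = {
    "MSID": {"set_by": "manufacturer", "user_changeable": False,
             "readable_over_interface": True,    # default credential
             "visible_on_label": False},
    "SID":  {"set_by": "manufacturer (defaults to MSID)",
             "user_changeable": True,            # changed to take ownership
             "readable_over_interface": False,
             "visible_on_label": False},
    "PSID": {"set_by": "manufacturer", "user_changeable": False,
             "readable_over_interface": False,   # no electronic access at all
             "visible_on_label": True},          # proves physical presence
}
```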
The PSID is thus used to verify a physical presence of the device owner or legitimate user 102, for example prior to enablement of the self-encryption feature. It will be appreciated that the term "physical presence" does not necessarily require that the intended or legitimate user 102 need always be locally present in the proximity of the storage device 110. For example, physical presence may indicate a one-time presence by the user 102 to visually obtain the PSID, which may later be used during a verification process from a remote location.

Figure 3 illustrates a flowchart of operations 300 of another example embodiment consistent with the present disclosure. The operations provide a method for secure enablement and activation of access controls on a self-encrypting storage device. At operation 310, a request is received to enable self-encryption (e.g., as implemented through Opal). At operation 320, Opal is enabled for the device and a random number or string (generated, for example, by random number generator 218) is assigned to the SID for the device, which will no longer be the same as the MSID. This may prevent any further attempts to alter access control settings until the current operation is successfully completed (e.g., by the intended user 102). The random number generator 218 may implement a non-deterministic random number generation algorithm to reduce the probability that a remote attacker might predict the random number value.

At operation 330, a request is received for a revert operation, via the TCG Opal Revert method. The requester's physical presence is verified at operation 340, by supplying a valid PSID associated with the device, via a TCG method such as StartSession or Authenticate. Because access to the PSID is limited to visual observation of some portion of the device, such as a printed label as described previously, knowledge of the PSID may be used to verify the physical presence of the requester.
If the verification fails, then at operation 350 the Revert method invocation will subsequently be denied and the SID remains set to the random value. In some embodiments, an alert may be generated to log the event and/or notify the legitimate user (e.g., intended user 102) of a failed attempt to enable access controls (Opal). If the verification succeeds, however, then at operation 360 the revert operation is performed. At operation 370, as part of the revert, the SID is reset back to the MSID associated with the device and Opal is left in an enabled state. At this point the user may optionally activate and provision Opal, at operation 380, for example through the Activate method executed by the software application.

Figure 4 illustrates a flowchart of operations 400 of another example embodiment consistent with the present disclosure. The operations provide a method for secure control of access control enablement and activation on a self-encrypting storage device. At operation 410, a request is received to enable access controls of the storage device. The request is received from a user of a host system of the storage device, for example through a software application that requests the storage device to enable Opal security by sending an appropriate sequence of commands. The StartSession method is used to initiate a communications session and authenticate the SID credential. The Activate method is used to activate the locking functionality provided by the Opal subsystem implemented in the storage device. At operation 420, access controls (e.g., Opal security) are enabled in response to the request. At operation 430, the physical presence of the user is verified, for example by supplying a valid PSID associated with the device as printed on the storage device label. The software application may be configured to prompt the user to enter the PSID. The user may then enter the PSID through the software application.
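The enable, verify-physical-presence, revert, and activate sequence of Figure 3 can be sketched as a small state machine. This is a minimal illustration under assumed names (`SecureDrive` and its methods are hypothetical, not the TCG API); the key property it demonstrates is that the SID is randomized on enablement and only restored to the MSID after a PSID-verified revert.

```python
import secrets

class SecureDrive:
    """Minimal sketch of the Figure 3 enablement/revert flow (hypothetical API)."""

    def __init__(self, msid: str, psid: str):
        self.msid = msid        # manufacturer default credential
        self.psid = psid        # printed on the drive label only
        self.sid = msid         # SID initially equals the MSID
        self.enabled = False
        self.activated = False

    def enable_access_controls(self) -> None:
        # Operations 310/320: enabling sets the SID to a random value,
        # blocking remote takeover until the flow completes.
        self.enabled = True
        self.sid = secrets.token_hex(16)

    def revert(self, supplied_psid: str) -> bool:
        # Operations 330-370: revert requires proof of physical
        # presence (the PSID). On success the SID is reset to the MSID.
        if supplied_psid != self.psid:
            return False        # operation 350: denied, SID stays random
        self.sid = self.msid
        return True

    def activate(self, credential: str) -> bool:
        # Operation 380: activation authenticates against the SID.
        if self.enabled and credential == self.sid:
            self.activated = True
        return self.activated
```

After a successful revert, the owner can activate Opal using the (publicly readable) MSID and then provision their own credentials; a failed PSID check leaves the SID random and activation impossible.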
The software application may send the PSID to the storage device, for example by using the StartSession method or by using the Authenticate method in a session that has already been initiated. The storage device verifies the submitted PSID and responds with the verification result. Because access to the PSID is limited to visual observation of some portion of the device, such as a printed label as described previously, knowledge of the PSID may be used to verify the physical presence of the requester. At operation 440, if the physical presence verification succeeds, the software application invokes the Revert method, which resets the SID to the MSID, and activation of self-encryption of the storage device is then possible via execution of the Activate method. If the physical presence verification fails, access controls (e.g., Opal security) may remain in their existing state and the SID remains set to the random value.

Figure 5 illustrates a system diagram 500 of one example embodiment consistent with the present disclosure. The system 500 may be a mobile platform 510 or computing device such as, for example, a smart phone, smart tablet, personal digital assistant (PDA), mobile Internet device (MID), convertible tablet, notebook or laptop computer, or any other suitable device. It will be appreciated, however, that embodiments of the system described herein are not limited to mobile platforms, and in some embodiments, the system 500 may be a workstation or desktop computer. The device may generally present various interfaces to a user via a display element 560 such as, for example, a touch screen, liquid crystal display (LCD) or any other suitable display type.

The system 500 is shown to include a host system 104 that may further include any number of processors 520 and memory modules 530. In some embodiments, the processors 520 may be implemented as any number of processor cores.
The processor (or processor cores) may be any type of processor, such as, for example, a micro-processor, an embedded processor, a digital signal processor (DSP), a graphics processor (GPU), a network processor, a field programmable gate array or other device configured to execute code. The processors may be multithreaded cores in that they may include more than one hardware thread context (or "logical processor") per core. The memory 530 may be coupled to the processors. The memory 530 may be any of a wide variety of memories (including various layers of memory hierarchy and/or memory caches) as are known or otherwise available to those of skill in the art. It will be appreciated that the processors and memory may be configured to store, host and/or execute one or more user applications or other software modules. These applications may include, but not be limited to, for example, any type of computation, communication, data management, data storage and/or user interface task. In some embodiments, these applications may employ or interact with any other components of the mobile platform 510.

System 500 is also shown to include network interface module 540, which may include wireless communication capabilities such as, for example, cellular communications, Wireless Fidelity (WiFi), Bluetooth®, and/or Near Field Communication (NFC). The wireless communications may conform to or otherwise be compatible with any existing or yet to be developed communication standards, including past, current and future versions of Bluetooth®, Wi-Fi and mobile phone communication standards.

System 500 is also shown to include an input/output (IO) system or controller 550, which may be configured to enable or manage data communication between processor 520 and other elements of system 500 or elements (not shown) external to system 500.

System 500 is also shown to include a self-encrypting storage device with secure control 110, as described previously.
Storage device 110 may further include a secure access control module (e.g., Opal) and an NVM, as illustrated in Figure 2. Interface modules 108a, 108b may also be provided to couple the storage device 110 to the host system 104 over a storage bus.

It will be appreciated that in some embodiments the various components of the system 500 may be combined in a system-on-a-chip (SoC) architecture. In some embodiments, the components may be hardware components, firmware components, software components or any suitable combination of hardware, firmware or software.

Embodiments of the methods described herein may be implemented in a system that includes one or more storage mediums having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a system CPU (e.g., core processor) and/or programmable circuitry. Thus, it is intended that operations according to the methods described herein may be distributed across a plurality of physical devices, such as, for example, processing structures at several different physical locations. Also, it is intended that the method operations may be performed individually or in subcombination, as would be understood by one skilled in the art.
Thus, not all of the operations of each of the flow charts need to be performed, and the present disclosure expressly intends that all subcombinations of such operations are enabled, as would be understood by one of ordinary skill in the art.

The storage medium may include any type of tangible medium, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), digital versatile disks (DVDs) and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of media suitable for storing electronic instructions.

"Circuitry", as used in any embodiment herein, may include, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. An application or "app" may be embodied as code or instructions which may be executed on programmable circuitry such as a host processor or other programmable circuitry. A module, as used in any embodiment herein, may be embodied as circuitry. The circuitry may be embodied as an integrated circuit, such as an integrated circuit chip.

Thus, the present disclosure provides systems, devices, methods and computer readable media for secure control of access control enablement and activation on self-encrypting storage devices. The following examples pertain to further embodiments.

According to Example 1 there is provided a storage device. The device may include a non-volatile memory (NVM) and a secure access control module.
The secure access control module of this example may include a command processor module to receive a request to enable access controls of the NVM, from a user, and to enable the access controls; a verification module to verify a physical presence of the user; and an encryption module to allow encryption of at least a portion of the NVM in response to an indication of success from the verification module.

Example 2 may include the subject matter of Example 1, and the secure access control module implements Opal Storage Specification access controls.

Example 3 may include the subject matter of Examples 1 and 2, further including a random number generator to generate a random number and update a Security Identifier (SID) associated with the access controls to the random number.

Example 4 may include the subject matter of Examples 1-3, and the verification of the physical presence of the user is based on receiving a Physical Security Identifier (PSID) from the user, the PSID associated with the storage device.

Example 5 may include the subject matter of Examples 1-4, and the PSID is displayed on a housing of the storage device.

Example 6 may include the subject matter of Examples 1-5, and the PSID is provided in a visually observable manner in association with the storage device.

Example 7 may include the subject matter of Examples 1-6, and the secure access control module is further to perform a revert operation of the storage device, if the verification of the physical presence is successful.

Example 8 may include the subject matter of Examples 1-7, and the revert operation restores the SID to a Manufacturer Security Identifier (MSID).

Example 9 may include the subject matter of Examples 1-8, and the secure access control module is further to allow configuration of the access controls of the NVM if the verification of the physical presence is successful.

Example 10 may include the subject matter of Examples 1-9, and the NVM is a solid state drive (SSD).

Example 11 may include the subject matter of Examples 1-10, and the secure access control module is further to communicate with a host system through an interface module and a storage bus, the interface module to implement one of a Serial Advanced Technology Attachment (SATA) interface, a Serial Attached Small Computer System (SAS) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Flash Storage (UFS) interface and/or an embedded Multimedia Controller (eMMC) interface.

According to Example 12 there is provided a method for secure control of a storage device. The method may include receiving a request, from a user, to enable access controls of an NVM; enabling the access controls in response to the request; verifying a physical presence of the user; and allowing activation of self-encryption of the NVM in response to success of the verifying.

Example 13 may include the subject matter of Example 12, and the storage device implements Opal Storage Specification access controls.

Example 14 may include the subject matter of Examples 12 and 13, and the enabling of the access controls further includes generating a random number and updating a Security Identifier (SID) associated with the access controls to the random number.

Example 15 may include the subject matter of Examples 12-14, and the verifying of the physical presence of the user further includes receiving a Physical Security Identifier (PSID) from the user, the PSID associated with the storage device.

Example 16 may include the subject matter of Examples 12-15, and the PSID is displayed on a housing of the storage device.

Example 17 may include the subject matter of Examples 12-16, and the PSID is provided in a visually observable manner in association with the storage device.

Example 18 may include the subject matter of Examples 12-17, further including performing a revert operation of the storage device, in response to success of the verifying.

Example 19 may include the subject matter of Examples 12-18, and the revert operation further includes restoring the SID to a Manufacturer Security Identifier (MSID).

Example 20 may include the subject matter of Examples 12-19, further including allowing configuration of the access controls of the NVM in response to success of the verifying.

According to Example 21 there is provided a mobile platform. The mobile platform may include a processor; a display element coupled to the processor; and an SSD storage device coupled to the processor. The SSD of this example may include a non-volatile memory (NVM) and a secure access control module. The secure access control module of this example may include a command processor module to enable access controls of the NVM in response to a request from the processor; a verification module to verify a physical presence of a user; and an encryption module to allow encryption of at least a portion of the NVM in response to an indication of success from the verification module.

Example 22 may include the subject matter of Example 21, and the secure access control module implements Opal Storage Specification access controls.

Example 23 may include the subject matter of Examples 21-22, and the verification of the physical presence of the user is based on receiving a Physical Security Identifier (PSID) from the user, the PSID associated with the storage device.
Example 24 may include the subject matter of Examples 21-23, and the PSID is displayed on a housing of the storage device.

Example 25 may include the subject matter of Examples 21-24, and the secure access control module is further to perform a revert operation of the storage device, if the verification of the physical presence is successful.

Example 26 may include the subject matter of Examples 21-25, and the revert operation restores the SID to a Manufacturer Security Identifier (MSID).

Example 27 may include the subject matter of Examples 21-26, and the secure access control module is further to allow configuration of the access controls of the NVM if the verification of the physical presence is successful.

Example 28 may include the subject matter of Examples 21-27, and the secure access control module is further to communicate with a host system through an interface module and a storage bus, the interface module to implement one of a Serial Advanced Technology Attachment (SATA) interface, a Serial Attached Small Computer System (SAS) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Flash Storage (UFS) interface and/or an embedded Multimedia Controller (eMMC) interface.

Example 29 may include the subject matter of Examples 21-28, and the mobile platform is a smart phone, smart tablet, notebook or laptop computer.

According to Example 30 there is provided at least one computer-readable storage medium having instructions stored thereon which when executed by a processor result in the following operations for secure control of a storage device. The operations may include receiving a request, from a user, to enable access controls of an NVM; enabling the access controls in response to the request; verifying a physical presence of the user; and allowing activation of self-encryption of the NVM in response to success of the verifying.

Example 31 may include the subject matter of Example 30, and the storage device implements Opal Storage Specification access controls.

Example 32 may include the subject matter of Examples 30 and 31, and the enabling of the access controls further includes the operations of generating a random number and updating a Security Identifier (SID) associated with the access controls to the random number.

Example 33 may include the subject matter of Examples 30-32, and the verifying of the physical presence of the user further includes the operation of receiving a Physical Security Identifier (PSID) from the user, the PSID associated with the storage device.

Example 34 may include the subject matter of Examples 30-33, and the PSID is displayed on a housing of the storage device.

Example 35 may include the subject matter of Examples 30-34, and the PSID is provided in a visually observable manner in association with the storage device.

Example 36 may include the subject matter of Examples 30-35, further including the operation of performing a revert operation of the storage device, in response to success of the verifying.

Example 37 may include the subject matter of Examples 30-36, and the revert operation further includes the operation of restoring the SID to a Manufacturer Security Identifier (MSID).

Example 38 may include the subject matter of Examples 30-37, further including allowing configuration of the access controls of the NVM in response to success of the verifying.

According to Example 39 there is provided a system for secure control of a storage device.
The system may include means for receiving a request, from a user, to enable access controls of an NVM; means for enabling the access controls in response to the request; means for verifying a physical presence of the user; and means for allowing activation of self-encryption of the NVM in response to success of the verifying.

Example 40 may include the subject matter of Example 39, and the storage device implements Opal Storage Specification access controls.

Example 41 may include the subject matter of Examples 39 and 40, and the enabling of the access controls further includes means for generating a random number and updating a Security Identifier (SID) associated with the access controls to the random number.

Example 42 may include the subject matter of Examples 39-41, and the verifying of the physical presence of the user further includes means for receiving a Physical Security Identifier (PSID) from the user, the PSID associated with the storage device.

Example 43 may include the subject matter of Examples 39-42, and the PSID is displayed on a housing of the storage device.
Example 44 may include the subject matter of Examples 39-43, and the PSID is provided in a visually observable manner in association with the storage device.

Example 45 may include the subject matter of Examples 39-44, further including means for performing a revert operation of the storage device, in response to success of the verifying.

Example 46 may include the subject matter of Examples 39-45, and the revert operation further includes means for restoring the SID to a Manufacturer Security Identifier (MSID).

Example 47 may include the subject matter of Examples 39-46, further including means for allowing configuration of the access controls of the NVM in response to success of the verifying.

The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.
Systems, apparatuses, and methods for migrating memory pages are disclosed herein. In response to detecting that a migration of a first page between memory locations is being initiated, a first page table entry (PTE) corresponding to the first page is located and a migration pending indication is stored in the first PTE. In one embodiment, the migration pending indication is encoded in the first PTE by disabling read and write permissions. If a translation request targeting the first PTE is received by the memory management unit (MMU) and the translation request corresponds to a read request, a read operation is allowed to the first page. Otherwise, if the translation request corresponds to a write request, a write operation to the first page is blocked and a silent retry request is generated and conveyed to the requesting client.
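The migration-pending PTE handling just described can be sketched as follows. This is a minimal software illustration of the logic, not hardware MMU code; the class and function names (`PTE`, `begin_migration`, `handle_translation`, etc.) are hypothetical.

```python
# Minimal sketch of migration-pending PTE handling as described above.
class PTE:
    def __init__(self):
        self.readable = True
        self.writable = True

    @property
    def migration_pending(self) -> bool:
        # The pending indication is encoded as both permissions disabled.
        return not self.readable and not self.writable

def begin_migration(pte: PTE) -> None:
    # Store the migration pending indication in the PTE.
    pte.readable = False
    pte.writable = False

def complete_migration(pte: PTE, invalidate_cached_translations) -> None:
    # Clear the pending indication and invalidate stale cached
    # translations (e.g., TLB entries) for this PTE.
    pte.readable = True
    pte.writable = True
    invalidate_cached_translations(pte)

def handle_translation(pte: PTE, is_write: bool) -> str:
    """MMU response to a translation request targeting this PTE."""
    if pte.migration_pending:
        if is_write:
            return "silent_retry"   # client retries the write later
        return "allow"              # reads may proceed during migration
    return "allow"
```

The notable design point, mirrored here, is that reads remain serviceable while the page is in flight; only writes are deferred via the silent retry, so the migrating copy cannot be made stale.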
1. A system comprising:
a memory subsystem; and
a processor coupled to the memory subsystem;
wherein the system is configured to:
detect that a first page is being migrated from a first memory location to a second memory location in the memory subsystem;
locate a first page table entry (PTE) corresponding to the first page; and
store a migration pending indication in the first PTE.
2. The system of claim 1, wherein in response to detecting a translation request targeting the first PTE and detecting the migration pending indication in the first PTE, the system is configured to:
allow a read operation to be performed on the first page if the translation request corresponds to a read request targeting the first page; and
prevent a write operation from being performed on the first page and generate a silent retry request if the translation request corresponds to a write request targeting the first page.
3. The system of claim 2, wherein the system is configured to convey the silent retry request to a requesting client.
4. The system of claim 3, wherein the requesting client is configured to retry the write request at a later point in time.
5. The system of claim 1, wherein the migration pending indication is encoded in the first PTE by disabling read and write permissions in the first PTE.
6. The system of claim 1, wherein in response to completion of the migration of the first page from the first memory location to the second memory location, the system is configured to:
clear the migration pending indication; and
generate an invalidation request for any cached translations corresponding to the first PTE.
7. The system of claim 1, wherein:
the memory subsystem comprises a first memory and a second memory;
the first memory location is in the first memory; and
the second memory location is in the second memory.
8. A method comprising:
detecting, by a computing system, that a first page is being migrated from a first memory location to a second memory location;
locating a first page table entry (PTE) corresponding to the first page; and
storing a migration pending indication in the first PTE.
9. The method of claim 8, wherein in response to detecting a translation request targeting the first PTE and detecting the migration pending indication in the first PTE, the method further comprises:
allowing a read operation to be performed on the first page if the translation request corresponds to a read request targeting the first page; and
preventing a write operation from being performed on the first page and generating a silent retry request if the translation request corresponds to a write request targeting the first page.
10. The method of claim 9, wherein in response to detecting the migration pending indication in the first PTE, the method further comprises conveying the silent retry request to a requesting client.
11. The method of claim 10, further comprising the requesting client retrying the write request at a later point in time.
12. The method of claim 8, wherein the migration pending indication is encoded in the first PTE by disabling read and write permissions in the first PTE.
13. The method of claim 8, wherein in response to completion of the migration of the first page from the first memory location to the second memory location, the method further comprises:
clearing the migration pending indication; and
generating an invalidation request for any cached translations corresponding to the first PTE.
14. The method of claim 8, wherein the first memory location is in a first memory, and wherein the second memory location is in a second memory.
15. An apparatus comprising:
a memory subsystem; and
a memory management unit (MMU);
wherein the MMU is configured to:
detect that a first page is being migrated from a first memory location to a second memory location in the memory subsystem;
locate a first page table entry (PTE) corresponding to the first page; and
store a migration pending indication in the first PTE.
16. The apparatus of claim 15, wherein in response to detecting a translation request targeting the first PTE and detecting the migration pending indication in the first PTE, the MMU is configured to:
allow a read operation to be performed on the first page if the translation request corresponds to a read request targeting the first page; and
prevent a write operation from being performed on the first page and generate a silent retry request if the translation request corresponds to a write request targeting the first page.
17. The apparatus of claim 16, wherein in response to detecting the migration pending indication in the first PTE, the apparatus is configured to convey the silent retry request to a requesting client.
18. The apparatus of claim 17, wherein the requesting client is configured to retry the write request at a later point in time.
19. The apparatus of claim 15, wherein the migration pending indication is encoded in the first PTE by disabling read and write permissions in the first PTE.
20. The apparatus of claim 15, wherein in response to completion of the migration of the first page from the first memory location to the second memory location, the apparatus is configured to:
clear the migration pending indication; and
generate an invalidation request for any cached translations corresponding to the first PTE.
Silent active page migration faults

BACKGROUND
Description of the Related Art

Many computing devices use virtual memory techniques to handle software programs' accesses to data. A virtual memory page-translation mechanism enables system software to create a separate address space for each process or application. These address spaces are known as virtual address spaces. The system software uses a paging mechanism to selectively map individual pages of physical memory into the virtual address space using a set of hierarchical address-translation tables known collectively as page tables. Virtual memory can be implemented with any processor, including, but not limited to, a central processing unit (CPU), a graphics processing unit (GPU), and an accelerated processing unit (APU).

When data is accessed by a program, a block of memory of a given size (e.g., 4 kilobytes (KB)) that includes the data, called a "page" of memory, is copied from backing storage (e.g., a disk drive or semiconductor memory) to an available physical location in main memory in the computing device. Some systems have multiple different page sizes stored in memory. Rather than having programs manage the physical locations of the pages, a memory management unit in the computing device manages the physical locations of the pages. Instead of using addresses based on the physical locations of pages (or "physical addresses") for accessing memory, programs access memory using virtual addresses in virtual address spaces. From the perspective of a program, a virtual address indicates the actual physical address (i.e., the physical location) where the data is stored within a page in memory, and hence memory accesses are made by the program using virtual addresses. The virtual addresses, however, do not directly map to the physical addresses of the physical locations where data is stored.
Therefore, as part of managing the physical locations of pages, the memory management unit translates the virtual addresses used by programs into the physical addresses where the data is actually located. The translated physical addresses are then used to perform the programs' memory accesses. To perform these translations, the memory management unit uses page tables in memory, which include a set of translations from virtual addresses to physical addresses for the pages stored in memory.

From time to time, the system may migrate pages between memory locations, causing the virtual-to-physical address translations to change. In some cases, the system determines that a page is to be moved from a first memory to a second memory. Alternatively, the system may move a page within a single memory as part of a garbage-collection operation. However, migrating a page while a process is actively running (for example, while a graphics program is performing a rendering task) can be disruptive.

BRIEF DESCRIPTION OF THE DRAWINGS
The advantages of the methods and mechanisms described herein may be better understood by referring to the following description in conjunction with the accompanying drawings, in which:
FIG. 1 is a block diagram of one embodiment of a computing system.
FIG. 2 illustrates examples of page table entry (PTE) formats.
FIG. 3 is a block diagram of one embodiment of a system in which a page migration is in progress.
FIG. 4 is a block diagram of one embodiment of the system after the page migration is complete.
FIG. 5 is a generalized flow diagram illustrating one embodiment of a method for migrating a first page between memory locations.
FIG. 6 is a generalized flow diagram illustrating one embodiment of a method for processing a translation request that hits a PTE with a migration pending indication.
FIG.
7 is a generalized flow diagram illustrating one embodiment of a method for processing a translation request.

DETAILED DESCRIPTION
In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one having ordinary skill in the art should recognize that the various embodiments may be practiced without these specific details. In some instances, well-known structures, components, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements.

Systems, apparatuses, and methods for migrating pages between memory locations are disclosed herein. In one embodiment, a system includes at least one processor, a memory management unit (MMU), and a memory subsystem. In one embodiment, an indication is detected that a first page will be migrated from a first memory location to a second memory location in the memory subsystem. Before the first page is migrated, a first page table entry (PTE) corresponding to the first page is located. Then, a migration pending indication is stored in the first PTE. In one embodiment, the migration pending indication is encoded in the first PTE by disabling the read and write permissions of the first page. After the migration pending indication is stored in the first PTE, the migration of the first page may begin.

In one embodiment, the MMU receives a translation request targeting the first PTE while the migration pending indication is encoded in the first PTE. If the translation request corresponds to a read request, a read operation is allowed to be performed on the first page.
Otherwise, if the translation request corresponds to a write request targeting the first page, a write operation is prevented from being performed on the first page, and a silent retry request is generated and conveyed to the requesting client. In one embodiment, the silent retry is referred to as "silent" because it does not involve generating an interrupt or updating a status register. Accordingly, the requesting client is configured to retry the write request at a later point in time.

Referring now to FIG. 1, a block diagram of one embodiment of a computing system 100 is shown. In one embodiment, the computing system 100 includes a system on chip (SoC) 105 coupled to a system memory 150 via a central processing unit (CPU) chipset 140. The SoC 105 may also be referred to as an integrated circuit (IC). In one embodiment, the SoC 105 includes at least an input/output (I/O) interface 155, a fabric 120, a graphics processing unit (GPU) 130, and a local memory 110. The SoC 105 may also include other components not shown in FIG. 1 to avoid obscuring the figure. In another embodiment, the GPU 130 may be another type of processing unit (e.g., a central processing unit (CPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP)).

The GPU 130 includes at least a translation lookaside buffer (TLB) complex 135 and compute units 145A-N, which are representative of any number and type of compute units used for graphics or general-purpose processing. The GPU 130 is coupled to the local memory 110 via the fabric 120. In one embodiment, the local memory 110 is implemented using high-bandwidth memory (HBM). In one embodiment, the GPU 130 is configured to execute graphics pipeline operations such as draw commands, pixel operations, geometric computations, and other operations for rendering an image to a display.
In another embodiment, the GPU 130 is configured to execute operations unrelated to graphics. In a further embodiment, the GPU 130 is configured to execute both graphics operations and non-graphics-related operations.

In one embodiment, the GPU 130 uses TLBs to cache mappings of virtual addresses to physical addresses for the virtual addresses allocated to the different processes executing on it. These TLBs are shown as L1 TLBs 170A-N in the compute units 145A-N, respectively, and as L2 TLB 160 in the TLB complex 135. The TLB complex 135 also includes a table walker 165. Generally speaking, a memory management unit may include one or more TLBs, table-walking logic, fault handlers, and other circuitry, depending on the implementation. In some implementations, separate TLBs may be implemented within the GPU 130 for instructions and for data. For example, a relatively small and fast L1 TLB may be backed up by a larger L2 TLB that requires more cycles to perform a lookup. A lookup performed by the L2 TLB is still relatively fast compared to a walk of the page tables 125A-B. Depending on the implementation, the page tables 125A-B may be located in the local memory 110, in the system memory 150, or with portions in both the local memory 110 and the system memory 150. Some embodiments of the TLB complex include an instruction TLB (ITLB), a level-one data TLB (L1 DTLB), and a level-two data TLB (L2 DTLB). Other embodiments of the TLB complex may include other configurations and/or levels of TLBs.

An address translation for a load or store instruction in the GPU 130 may be performed by posting a request for a virtual address translation to the L1 TLB. The L1 TLB returns the physical address if the virtual address is found in one of its entries. If the request for the virtual address translation misses in the L1 TLB, then the request is posted to the L2 TLB. If the request for the virtual address misses in the L2 TLB, then a page table walk is performed for the request.
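The lookup cascade just described (L1 TLB, then L2 TLB, then a page table walk on a double miss) can be sketched in a few lines. This is an illustrative model only; the dictionaries, function name, and fixed 4 KB page size below are assumptions for illustration and do not correspond to the actual hardware.

```python
def translate(vaddr, l1_tlb, l2_tlb, page_table):
    """Return the physical address for vaddr, filling the TLBs along the way."""
    page = vaddr >> 12          # assume 4 KB pages, as in the description
    offset = vaddr & 0xFFF
    if page in l1_tlb:                      # L1 hit: fastest path
        frame = l1_tlb[page]
    elif page in l2_tlb:                    # L1 miss, L2 hit
        frame = l2_tlb[page]
        l1_tlb[page] = frame                # promote the translation into L1
    else:                                   # double miss: walk the page table
        frame = page_table[page]
        l2_tlb[page] = frame                # fill both TLB levels
        l1_tlb[page] = frame
    return (frame << 12) | offset

l1, l2 = {}, {}
pt = {0x1: 0x80}                            # one mapping: page 0x1 -> frame 0x80
pa = translate(0x1234, l1, l2, pt)          # double miss, then cached in both TLBs
```

A second translation to the same page would now hit in the L1 TLB without touching the page table.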
A page table walk may involve one or more lookups at each level of the page table hierarchy.

The process of moving a page from the system memory 150 to the local memory 110, or from the local memory 110 to the system memory 150, is referred to herein as "page migration". Additionally, moving a page within the system memory 150 or within the local memory 110 is also referred to herein as "page migration". The combination of the local memory 110 and the system memory 150 may be referred to herein as a "memory subsystem". Alternatively, either the local memory 110 or the system memory 150 alone may be referred to herein as a "memory subsystem". When a given page is to be moved between locations in the memory subsystem, the system 100 is configured to generate an indication that the given page is in a page migration state. This allows other operations to continue without interruption. In one embodiment, when a given page is in a page migration state, the system 100 is configured to modify the page table entry for the given page to turn off its read and write privileges. The meaning of this particular combination (both read and write privileges disabled) is repurposed to indicate that the given page is in a page migration state. In other embodiments, other ways of encoding that a given page is in a page migration state are possible and are contemplated.

As used herein, the term "page" is defined as a fixed-length contiguous block of virtual memory. A "page" is also defined as a unit of data utilized by the system 100 for memory management. The size of a page can vary from implementation to implementation, and multiple different page sizes may be utilized in a single implementation. It should be understood that the terms "memory page" and "page" are intended to represent memory regions of any size.

In one embodiment, in response to detecting that a migration of a first page between memory locations is being initiated, a first page table entry (PTE) corresponding to the first page is located and a migration pending indication is stored in the first PTE.
In one embodiment, the migration pending indication is encoded in the first PTE by disabling read and write permissions. If a translation request targeting the first PTE is received by the MMU while the migration pending indication is encoded in the first PTE, and the translation request corresponds to a read request, then a read operation is allowed to be performed on the first page. Otherwise, if the translation request corresponds to a write request, a write operation is prevented from being performed on the first page, and a silent retry request is generated and conveyed to the requesting client. The requesting client may then retry the write request at a later point in time. In another embodiment, if the translation request corresponds to a read request, the read request is blocked and a retry request is generated for the read operation.

The I/O interface 155 is coupled to the fabric 120 and the CPU chipset 140, and the I/O interface 155 is representative of any number and type of interfaces (e.g., peripheral component interconnect (PCI) bus, PCI-Extended (PCI-X), PCIE (PCI Express) bus, gigabit Ethernet (GBE) bus, universal serial bus (USB)). The SoC 105 is coupled to the memory 150 via the CPU chipset 140, with the memory 150 including one or more memory modules. Each of the memory modules includes one or more memory devices mounted thereon. In some embodiments, the memory 150 includes one or more memory devices mounted on a motherboard or other carrier upon which the SoC 105 is also mounted. In one embodiment, the memory 150 is used to implement a random access memory (RAM) for use with the SoC 105 during operation. The RAM implemented may be static RAM (SRAM), dynamic RAM (DRAM), resistive RAM (ReRAM), phase-change RAM (PCRAM), or any other volatile or non-volatile RAM.
The types of DRAM that may be used to implement the memory 150 include (but are not limited to) double data rate (DDR) DRAM, DDR2 DRAM, DDR3 DRAM, and so forth.

In various embodiments, the computing system 100 may be a computer, a laptop, a mobile device, a server, or any of various other types of computing systems or devices. It is noted that the number of components of the computing system 100 and/or the SoC 105 may vary from embodiment to embodiment. There may be more or fewer of each component/subcomponent than the number shown in FIG. 1. For example, in another embodiment, the SoC 105 may include multiple memory controllers coupled to multiple memories. It is also noted that the computing system 100 and/or the SoC 105 may include other components not shown in FIG. 1. For example, in another embodiment, the SoC 105 may also include a central processing unit (CPU) with one or more processor cores. Additionally, in other embodiments, the computing system 100 and the SoC 105 may be structured in other ways than shown in FIG. 1.

Turning now to FIG. 2, examples of page table entry (PTE) formats are shown. PTE format 205 at the top of FIG. 2 illustrates a PTE format in accordance with one embodiment. In one embodiment, the physical page address is stored in bits 39-12. In one embodiment, the size of the page pointed to by the physical page address of PTE format 205 is 4 KB. Accordingly, there is one PTE for each 4 KB logical page of addressable memory. In other embodiments, the page pointed to by the physical page address of PTE format 205 may be any of various other sizes.

A write permission field 210 and a read permission field 215 are shown in PTE format 205. In one embodiment, when both of these fields are set to "0", this indicates that the page pointed to by the entry is in a page migration state. For example, the page-migration-state PTE format 220 is shown in the middle of FIG.
2 to illustrate the values of the write permission field and the read permission field of an entry pointing to a page being migrated between memory locations. Entry 220 also includes an address 225 and the other fields shown in PTE format 205. It is noted that in other embodiments, other ways of encoding the page migration status indication within the PTE are possible and are contemplated.

An example of a PTE format in accordance with another embodiment is shown as PTE format 235 at the bottom of FIG. 2. In PTE format 235, a T field 240 is used to indicate whether the corresponding page has read permission. Using the T field 240 of PTE format 235 to encode read permission enables migrations during which only writes are blocked. In one embodiment, if the T field 240 is set to one, then the corresponding page has read permission. Otherwise, if the T field 240 is set to zero, the corresponding page does not have read permission. In one embodiment, when the R and W fields of a given page are equal to zero (i.e., the given page is in a page migration state), reads of the given page are allowed if the T field 240 is set to one. However, if the R and W fields of the given page are equal to zero and the T field 240 is set to zero, then reads of the given page are prevented. It is noted that in other embodiments, other suitable PTE formats besides those shown in FIG. 2 may be utilized.

Referring now to FIG. 3, a block diagram of one embodiment of a system 300 in which a page migration is in progress is shown. A page table entry 330 of a page table block 305 is shown, with the entry 330 including at least a page address 310, a write permission field 315, and a read permission field 320. As shown in entry 330, both the write permission field 315 and the read permission field 320 are set to zero. In one embodiment, this indicates that the page 345A pointed to by entry 330 is in a page migration state.
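The permission-bit encoding described for PTE formats 205, 220, and 235 can be modeled as follows. The exact bit positions chosen here for the W, R, and T fields are assumptions made for illustration; the description fixes only the physical page address at bits 39-12.

```python
# Illustrative (assumed) bit positions for the PTE permission fields.
W_BIT = 1 << 1   # write permission (field 210) - assumed position
R_BIT = 1 << 2   # read permission (field 215) - assumed position
T_BIT = 1 << 3   # read-during-migration override (field 240) - assumed position

def set_migration_pending(pte):
    """Encode the page migration state by clearing both R and W."""
    return pte & ~(W_BIT | R_BIT)

def is_migration_pending(pte):
    """R == 0 and W == 0 signals a page in the page migration state."""
    return (pte & (W_BIT | R_BIT)) == 0

def read_allowed(pte):
    """During migration, the T field decides whether reads may proceed."""
    if not is_migration_pending(pte):
        return bool(pte & R_BIT)
    return bool(pte & T_BIT)

pte = R_BIT | W_BIT | T_BIT          # normal page: readable and writable
pte = set_migration_pending(pte)     # migration begins: R and W cleared, T kept
```

With T set, the migrating page stays readable while writes are refused, matching the write-only blocking that PTE format 235 enables.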
In other embodiments, other ways of encoding that the corresponding page is in a page migration state may be utilized. Prior to the page migration, page 345A is stored in the local memory 340. For the purposes of this discussion, it is assumed that page 345A is being migrated to the system memory 350. This is shown as migrated page 345B in the system memory 350. In other embodiments, page 345A may be migrated to other locations. For example, page 345A may be migrated to another location within the local memory 340, or to another memory other than the system memory 350.

In one embodiment, when a translation request hits entry 330, the subsequent memory request for page address 310 will be allowed to proceed if the memory request is a read request. The read operation will then be performed to page 345A in the local memory 340. Otherwise, if the memory request is a write request, a silent retry request will be generated and sent to the requesting client. The write request will not be allowed to proceed at that point in time. The client can retry the write request at a later point in time, and if the page migration has completed by the time another translation request for the retried write request is processed, then the write request will be allowed to proceed.

Turning now to FIG. 4, a block diagram of one embodiment of a system 400 after a page migration is complete is shown. The page table block 405, local memory 440, and system memory 450 of system 400 are intended to represent the corresponding page table block 305, local memory 340, and system memory 350 of system 300 after the migration has completed, with the migrated page now located at address 410. Accordingly, both the write permission field 415 and the read permission field 420 of entry 430 are set to one to indicate that the page migration has completed and that write and read permissions have been restored for the migrated page 445. Alternatively, if the migrated page 445 has only write or only read privileges, then only one of these fields will be set after the page migration is complete.
If a translation request targeting the migrated page 445 is received by the page table block 405, the translation request and subsequent memory requests will be processed in the typical manner in response to detecting that the write or read permissions are enabled in entry 430.

Referring now to FIG. 5, one embodiment of a method 500 for migrating a first page between memory locations is shown. For purposes of discussion, the steps in this embodiment and those of FIGS. 6-7 are shown in sequential order. However, it is noted that in various embodiments of the described methods, one or more of the elements described are performed concurrently, in a different order than shown, or are omitted entirely. Other additional elements are also performed as desired. Any of the various systems or apparatuses described herein are configured to implement the method 500.

An indication that a first page is to be migrated from a first memory location to a second memory location is detected (block 505). In one embodiment, the first memory location is in a first memory (e.g., local memory) and the second memory location is in a second memory (e.g., global memory). In another embodiment, the first and second memory locations are both located within a single memory.

Next, a first page table entry (PTE) corresponding to the first page, along with any cached copies of the first PTE, is located (block 510). An indication that the first page is in a page migration state is stored in the first PTE and in any cached copies of the first PTE (block 515). In one embodiment, the indication is encoded in the PTE by disabling the read and write permissions for the first page. In other embodiments, other ways of encoding the migration pending indication in the first PTE may be utilized. Additionally, an invalidation request for the first page is sent to the TLB(s), and any pending writes to memory are flushed (block 520).
Once the pending writes to memory have been resolved, the page migration copy process for the first page may begin (block 522).

If the migration of the first page is complete (conditional block 525, "yes" branch), then the migration pending indication is cleared from the first PTE and from any cached copies of the first PTE (block 530). Also, the first PTE is modified to point to the second memory location to which the first page has been migrated (block 535). Additionally, an invalidation request is generated for any cached copies of the first PTE (block 540). The system then waits for acknowledgment that the invalidation has completed before reusing the first memory location (block 542). After block 542, the method 500 ends. If the migration of the first page is not complete (conditional block 525, "no" branch), then the system waits for the page migration to complete (block 545) before returning to conditional block 525.

Turning now to FIG. 6, one embodiment of a method 600 for processing a translation request that hits a PTE with a migration pending indication is shown. A processor generates a translation request for a given virtual address (block 605). In one embodiment, the processor is part of a system (e.g., system 100 of FIG. 1) that includes at least one processor, an MMU, and a memory subsystem. Depending on the embodiment, the system may also include any number of other components. The MMU detects that the PTE for the given virtual address includes a migration pending indication (block 610). If the memory request targeting the given virtual address is a read request (conditional block 615, "read" branch), then it is determined whether reads are allowed to the targeted physical page (conditional block 620). In one embodiment, whether reads are allowed to the targeted physical page may be programmable. If reads are allowed to the targeted physical page (conditional block 620, "yes" branch), then the read operation is allowed to be performed to the targeted physical page (block 635).
After block 635, the method 600 ends. If reads are not allowed to the targeted physical page (conditional block 620, "no" branch), then the read operation is prevented from being performed to the targeted physical page, and a silent retry fault is generated and conveyed to the requesting client (block 640). At a later point in time, the client retries the read request for the given virtual address (block 645). After block 645, the method 600 ends.

If the memory request targeting the given virtual address is a write request (conditional block 615, "write" branch), then the write operation is prevented from being performed to the targeted physical page, and a silent retry fault is generated and conveyed to the requesting client (block 625). In one embodiment, the silent retry fault is referred to as "silent" because the fault does not involve generating an interrupt or updating a status register. The silent retry fault indicates to the client that it should retry the write request at a later point in time. At a later point in time, the client will retry the write request for the given virtual address (block 630). Once the migration is complete, the write request is performed to the physical page at its new location. After block 630, the method 600 ends.

Referring now to FIG. 7, one embodiment of a method 700 for processing a translation request is shown. A hit on a page table entry (PTE) with read and write permissions disabled is detected (block 705). In one embodiment, when the read and write permissions of a PTE are disabled, this indicates that the corresponding physical page is currently being migrated between memory locations. If the subsequent memory request is a read request (conditional block 710, "read" branch), then the read request is allowed to be performed to the targeted physical page even though the read permission for the page is disabled (block 715). Also, a fault is prevented from being generated to system software for the memory request (block 720).
After block 720, the method 700 ends. If the subsequent memory request is a write request (conditional block 710, "write" branch), then a retry request is sent to the client and the write request is blocked (block 725). At a later point in time, the client may retry the write request (block 730). After block 730, the method 700 ends.

In various embodiments, program instructions of a software application are used to implement the methods and/or mechanisms previously described. The program instructions describe the behavior of hardware in a high-level programming language, such as C. Alternatively, a hardware design language (HDL) such as Verilog is used. The program instructions are stored on a non-transitory computer-readable storage medium. Numerous types of storage media are available. The storage medium is accessible by a computing system during use to provide the program instructions and accompanying data to the computing system for program execution. The computing system includes at least one or more memories and one or more processors configured to execute program instructions.

It should be emphasized that the above-described embodiments are only non-limiting examples of implementations. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
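The overall behavior of methods 500 and 600, namely setting the migration pending indication, allowing reads, silently rejecting writes, and restoring access once the migration completes, can be summarized in a short behavioral sketch. The class and field names below are illustrative assumptions, not part of the disclosure.

```python
class SilentRetry(Exception):
    """Models the 'silent' retry: no interrupt, no status-register update."""

class PageTable:
    def __init__(self):
        self.entries = {}    # page number -> {"addr": frame, "pending": bool}

    def start_migration(self, page):                 # blocks 505-515: mark pending
        self.entries[page]["pending"] = True

    def finish_migration(self, page, new_addr):      # blocks 530-540: repoint, clear
        self.entries[page]["addr"] = new_addr
        self.entries[page]["pending"] = False

    def access(self, page, is_write):                # method 600 decision
        entry = self.entries[page]
        if entry["pending"] and is_write:
            raise SilentRetry(page)                  # client must retry later
        return entry["addr"]                         # reads proceed as usual

pt = PageTable()
pt.entries[7] = {"addr": 0x100, "pending": False}
pt.start_migration(7)
addr_during = pt.access(7, is_write=False)           # read allowed mid-migration
try:
    pt.access(7, is_write=True)                      # write triggers a silent retry
    write_ok = True
except SilentRetry:
    write_ok = False
pt.finish_migration(7, 0x200)
addr_after = pt.access(7, is_write=True)             # retried write now succeeds
```

The sketch mirrors the key property of the disclosure: the client observes no fault state, only a request to retry, and the retried write lands at the page's new location.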
To provide an integrated circuit (IC) package substrate.
SOLUTION: An integrated circuit (IC) package substrate comprises a magnetic material embedded in a dielectric material. A first surface of the dielectric material is below the magnetic material, and a second surface of the dielectric material, opposite the first surface, is over the magnetic material. A metallization level comprising a first metal feature is embedded in the magnetic material. A second metal feature is at an interface between the magnetic material and the dielectric material. The second metal feature has a first sidewall in contact with the dielectric material, and a second sidewall in contact with the magnetic material.
SELECTED DRAWING: Figure 2L
An integrated circuit (IC) package substrate, comprising: a magnetic material embedded in a dielectric material, wherein a first layer of the dielectric material is beneath the magnetic material; and a metallization level having a first metal feature embedded in the magnetic material and a second metal feature at the interface between the magnetic material and the dielectric material, wherein the second metal feature has a first sidewall in contact with a second layer of the dielectric material and a second sidewall in contact with the magnetic material.
The IC package substrate according to claim 1, wherein the second metal feature completely surrounds the first metal feature and extends along the periphery of the magnetic material.
The metallization level has a multilayer material stack comprising a first metal on a second metal, wherein the second metal has a higher magnetic permeability than the first metal, the IC package substrate according to claim 1 or 2.
The metallization level further comprises a third metal feature embedded in the second layer of the dielectric material laterally adjacent to a sidewall of the magnetic material, wherein the sidewall of the third metal feature has less lateral undercut than the second metal feature, the IC package substrate according to claim 1 or 2.
The second metal feature is one of a plurality of second metal features embedded in the magnetic material, the third metal feature is one of a plurality of third metal features embedded in the second layer of the dielectric material, the second metal features have a first pitch, and the third metal features have a second pitch.
The IC package substrate according to claim 4, wherein the second pitch is smaller than the first pitch.The IC package substrate according to claim 4, wherein the second metal feature has a thickness larger than that of the third metal feature.The IC package substrate according to claim 1 or 2, wherein the first side wall has less lateral undercut than the second side wall.The IC package substrate according to claim 1 or 2, wherein the side wall of the magnetic material has an inclination of at least 45 ° from the plane of the metallized layer.The metallization level is an upper metallization level, the substrate further has a lower metallization level, and the lower metallization level is the bottom of the magnetic material and the dielectric material. Claim 1 or claim 1, wherein the lower metal feature has a lower metal feature between the first layers, and the lower metal feature has an area larger than the area of the magnetic material in contact with the lower metal feature. The IC package substrate described in 2.An integrated circuit (IC) package assembly with an IC die electrically coupled to the host circuit board via a power supply mounted on the host circuit board and an inductor embedded in the IC package board. The IC package substrate is a magnetic material embedded in a dielectric material, and the first layer of the dielectric material is a magnetic material underneath the magnetic material and said. A metallization level having an element of the inductor embedded in a magnetic material and a metal feature at the interface between the magnetic material and the dielectric material, wherein the metal feature is a second of the dielectric material. 
An integrated circuit (IC) package assembly having a metallization level, having a first side wall in contact with the layer and a second side wall in contact with the magnetic material.The integrated circuit (IC) package assembly according to claim 10, wherein the inductor has a flat structure having a meandering structure embedded in the magnetic material.The integrated circuit (IC) package assembly according to claim 10 or 11, wherein the metal feature completely surrounds the element of the inductor and extends along the perimeter of the magnetic material.The metallization level has a multilayer material stack comprising a first metal on a second metal, wherein the second metal has a higher magnetic permeability than the first metal. Or the integrated circuit (IC) package assembly described in 11.A method of manufacturing an integrated circuit (IC) package substrate, wherein one or more metallized layers are formed on a first layer of a dielectric material, and at least one of the metallized layers is formed. A step of patterning into the spare metal feature, a step of forming a second layer of the dielectric material on the spare metal feature, and an opening penetrating the second layer of the dielectric material. A step of exposing a part of the spare metal feature by the opening, a step of placing a dry film resist on the part of the spare metal feature, and the dry step. A step of patterning the preliminary metal feature into a first metal feature based on the pattern of the film resist, and a step of forming a magnetic material in the opening and on the first metal feature. A method comprising the steps of forming an additional dielectric material on top of the magnetic material.The step of forming an opening through the second layer of the dielectric material comprises laser drilling an opening in the second layer of the dielectric above the preliminary metal feature. Item 14. 
The method according to item 14.The step of forming one or more metallized layers embedded in the dielectric material comprises the step of forming the reserve metal feature by a semi-addition process (SAP), one of the reserve metal features. The method according to claim 14, wherein the two or more side walls have an inclination of 10 ° or less from the plane of the preliminary metal feature.The step of patterning the reserve metal feature into the first metal feature comprises subtractively removing the metal from the reserve metal feature, one or more of the first metal feature. The method according to any one of claims 14 to 16, wherein the side wall has an inclination between 45 ° and 85 ° from the plane of the spare metal feature.The first metal feature has a meandering trace having a plurality of parallel traces and a ring structure surrounding the meandering trace, and the meandering trace and the side wall of the ring structure adjacent to the meandering trace are described as described above. The method according to any one of claims 14 to 16, which is formed by subtractive removal of the metal of the preliminary metal feature in a wet metal etching bath according to the pattern of the dry film resist.18. The side wall of the plurality of parallel traces of the meandering trace, and the side wall of the ring structure adjacent to the meandering trace, have an inclination between 45 ° and 85 ° from the plane of the spare metal feature. The method described.The reserve metal feature is a first reserve metal feature at a first conductivity level that is flush with the bottom of the opening, and the second reserve metal feature is at the first conductivity level. At the second conductive level on the second dielectric material layer above, by subtractive removal of the metal from the second reserve metal feature, at the same time as the first metal feature. 19. 
The method of claim 19, wherein a plurality of second metallic features are formed.The first metal feature has a plurality of parallel traces, one of the plurality of parallel traces is separated by a first minimum pitch, and the plurality of second metal features are the first. The method of claim 20, wherein the first minimum pitch is substantially equal to the second minimum pitch, separated by a minimum pitch of two.
In-plane inductor in IC package

The present application relates to in-plane inductors in IC packages.

Integration of inductive structures within integrated circuit (IC) package substrate materials is important for improving power delivery in high-performance IC devices. Inductive structures including magnetic materials can be placed in any layer within the package, allowing multiple types of structures. For example, an in-plane inductor formed by patterning a conductive layer in a package substrate can be embedded in a magnetic core material integrated within a cavity formed in the package substrate.

However, the integration of magnetic materials within the package substrate entails process and structural challenges. One area where the integration of embedded inductive structures is a challenge is the coreless package, where traces with improved power delivery are completely enclosed by magnetic material. Completely enclosing the traces with magnetic material may require multiple depaneling operations, for example to access the back side of the package build-up, where supplementary magnetic material is placed over traces that were only partially embedded during the front-side build-up. In some situations, this supplementary magnetic material needs to have a different magnetic composition, which can introduce further complexity with respect to the variety of magnetic materials. Methods and structures that allow complete encapsulation without depaneling and/or with the use of a single magnetic material reduce process risk and add process flexibility.

The embodiments of the present disclosure will be more fully understood from the following detailed description and the accompanying drawings of the various embodiments of the present disclosure. However, they should not be construed as limiting this disclosure to any particular embodiment.
They are for illustration and understanding purposes only.

Figures referred to as "cross-sectional," "profile," "plan," and "isometric" correspond to orthogonal planes in a Cartesian coordinate system. Cross-sectional and profile views are taken in the x-z plane, plan views are taken in the x-y plane, and isometric views are taken in a 3D Cartesian coordinate system (x-y-z). Where appropriate, the drawings are labeled with axes that indicate the orientation of the drawing.

FIG. 1 is a process flowchart of an example method of fabricating a package-integrated in-plane inductor (shown in FIGS. 2A-2M), in accordance with some embodiments of the present disclosure.

FIGS. 2A-2M are cross-sectional and plan views of representative structures formed at different stages of an example process flow for forming a package substrate having an embedded inductor, in accordance with some embodiments of the present disclosure.

Further figures are cross-sectional views of representative structures formed at different stages of an example process flow for forming a package substrate having a package-embedded inductor, in accordance with some embodiments of the present disclosure.

Another figure is a cross-sectional view, in the x-z plane, of an example package assembly having a package substrate coupled to an IC chip and a host component, in accordance with an embodiment of the present disclosure.
Another figure is a block diagram of a computing device incorporating embodiments of the present disclosure.

In the specification, a reference to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment, but not necessarily in all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" do not necessarily all refer to the same embodiments. Where the specification states that a component, feature, structure, or characteristic "may," "might," or "could" be included, that particular component, feature, structure, or characteristic is not required to be included. Where the specification or claims refer to "a" or "an" element, that does not mean there is only one of the elements. Where the specification or claims refer to an "additional" element, that does not preclude there being more than one of the additional elements.

Here, the terms "circuit" or "module" may refer to one or more passive and/or active components arranged to cooperate with one another to provide a desired function. The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal.

The term "microprocessor" generally refers to an integrated circuit (IC) package comprising a central processing unit (CPU), a graphics processing unit (GPU), a field-programmable gate array (FPGA), or a microcontroller. A microprocessor package is referred to herein simply as a "microprocessor." A microprocessor socket receives the microprocessor and electrically couples it to a printed circuit board (PCB).

The meanings of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."
Vertical orientation is in the z direction, and descriptions such as "over," "under," "top," "upper," and "bottom" refer to relative positions in the z dimension with their usual meaning: "over," "upper," and "top" generally indicate a higher position in the z dimension, while "under," "lower," and "bottom" indicate a lower position. The term "on" as used in the present disclosure indicates that a feature or object is in direct contact with the feature or object below it. However, it is understood that the embodiments are not necessarily limited to the orientations or configurations described in the drawings.

The terms "substantially," "close," "approximately," "near," and "about" generally mean within ±10% of a target value (unless specifically noted). Unless otherwise specified, the use of ordinal adjectives such as "first," "second," and "third" to describe a common object merely indicates that different instances of like objects are being referred to; it is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.

For the purposes of this disclosure, the phrases "A and/or B" and "A or B" mean (A), (B), or (A and B). For the purposes of this disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).

The present application describes embodiments of package-integrated inductive structures and methods of fabricating them. As described herein, lithography steps and placements of the magnetic material may be minimized. Further, in the embodiments described herein, the inductor metallization can be completely enclosed in a single magnetic material without depaneling the in-process package substrate.

FIG. 1 shows a process flowchart of a method 100 for fabricating a package-integrated in-plane inductor, in accordance with some embodiments of the present disclosure. Method 100 may be practiced as part of a panelized IC package substrate fabrication process compatible with coreless package substrate architectures. Such a process may also be practiced as part of an IC package substrate fabrication process compatible with cored substrate technology.

Operation 101 comprises forming one or more metallization layers within a build-up material. The metallization layers may be formed between dielectric layers in a package substrate build-up stack. The stack may be formed, for example, by laminating sheets of dielectric material in a heated-roller or vacuum lamination process. Newly laminated sheets bond to the underlying dielectric to form a monolithic dielectric substrate. In some operations, a laminated sheet carries a copper film (e.g., 2 μm) that provides a plating seed surface. After a dielectric lamination cycle, a metal such as copper may be plated over the exposed surface of the dielectric to form a metallization layer. The plated metallization layer may be deposited as an unpatterned or "blanket" film and subsequently etched through a lithographically defined photomask to form preliminary metal features. Alternatively, the metallization layer may be selectively plated through a lithographically defined photomask to form the preliminary metal features. The preliminary metal features may be embedded by forming one or more additional layers of dielectric.

In operation 102, a subtractive process may form one or more openings through the dielectric, exposing portions of the preliminary metal features. In some embodiments, the openings are formed by a laser drilling operation; in this case, the preliminary metal features prevent the laser from penetrating into lower levels of the package substrate.
Openings may also be formed by other methods, such as dry and/or wet etching of the dielectric through a lithographically defined etch mask. In an etching method, the preliminary metal feature may serve as an etch stop.

In operation 103, a layer of photoresist is formed over the sidewalls and bottom of the opening, covering the exposed portion of the preliminary metal feature. The dielectric surrounding the opening may also be covered with photoresist. In some embodiments, a dry film resist (DFR) is laminated, for example by a heated vacuum roller/press or vacuum lamination process, so as to conformally cover the preliminary metal feature and the sidewalls of the opening.

In operation 104, at least the portion of the DFR over the exposed portion of the preliminary metal feature is patterned to define at least part of an inductor routing structure. The patterned features may include, for example, linear or serpentine planar inductor traces, or vertical interconnect vias to the planar inductor traces. The etch mask is then removed, leaving the inductor routing structure at the bottom of the opening in the package material stack.

In operation 105, a magnetic material is deposited over the inductor routing structure, at least partially filling the opening. A moldable paste or viscous matrix containing magnetic particles may be deposited into the opening, for example by inkjet printing or screen printing. The magnetic material may be selected to have a suitable magnetic permeability. The opening may be completely filled, covering the inductor trace features and the sidewalls of the opening. After deposition, a cure may harden the matrix into a solid magnetic material that partially encapsulates the inductor structure.
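The inductance of such a device is governed by the trace geometry together with the size and permeability of the cured magnetic core. A first-order, solenoid-style estimate illustrates the scaling; this is a sketch only, and the function name and all numerical values are illustrative assumptions, not taken from the disclosure:

```python
from math import pi

MU0 = 4 * pi * 1e-7  # vacuum permeability, H/m

def solenoid_inductance(n_turns, core_area_m2, path_len_m, mu_r):
    """First-order estimate L = mu0 * mu_r * N^2 * A / l.

    A crude solenoid-style approximation: the in-plane trace and its
    magnetic core are reduced to an effective turn count, a core
    cross-sectional area, and a magnetic path length.
    """
    return MU0 * mu_r * n_turns**2 * core_area_m2 / path_len_m

# Illustrative numbers only: a 4-turn structure, a 0.2 mm x 5 mm core
# cross-section, a 10 mm magnetic path, and a relative permeability of 20.
L = solenoid_inductance(4, 0.2e-3 * 5e-3, 10e-3, 20)
print(f"{L * 1e9:.1f} nH")  # → 40.2 nH
```

The quadratic dependence on the turn count and the linear dependence on permeability are what make both the trace pattern (operation 104) and the magnetic fill (operation 105) first-order levers on the achieved inductance.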
The resulting inductive device may have a particular inductance determined by the structure of the inductor trace as well as the size and permeability of the core comprising the magnetic material.

The inductor structure may instead be completely encapsulated by first forming a magnetic material underneath the preliminary metal features using the techniques described above. As shown further below, a base magnetic material may be formed, for example, through a preliminary iteration of operations 101, 102, and 105, so that the inductive device comprises an inductor routing structure embedded between magnetic material fills laminated over one another. In another example, the base magnetic material is formed as part of a composite foil applied as part of operation 101, so that the inductive device may comprise an inductor routing structure and a single magnetic material fill.

In operation 106, the magnetic material is capped with one or more layers of dielectric, completely embedding the inductive structure within the package substrate dielectric. Any number of additional build-up dielectric layers and/or metallization layers may be formed by any known technique to arrive at a final package substrate structure suitable for a given IC chip and/or application.

FIGS. 2A-2M show cross-sectional and plan views of representative structures formed at different stages of an example method of forming a package substrate 200 with an embedded inductor, in accordance with embodiments of method 100.

In FIG. 2A, an in-process IC package substrate stack 201 is received. The package substrate stack 201 comprises a dielectric 202. In some embodiments, the dielectric 202 comprises a material such as, but not limited to, an epoxy-phenolic resin or an epoxy cyanate ester resin, provided as a dielectric build-up film laminated on a package core, or on a carrier panel in coreless package substrate embodiments.
The epoxy resin laminate may have a thickness in the range of 10 to 100 μm, for example. The package substrate stack 201 may be formed by building up multiple layers of epoxy-based dielectric film, successively laminated onto the growing stack. The package substrate stack architecture may include, for example, a flip-chip package structure or a bumpless build-up layer (BBUL) package structure.

The metallization layers between the dielectric layers may comprise copper or another suitable conductive material, electrolytically plated or otherwise formed directly on the dielectric material after any given iteration of the lamination process. The conductive layers may be numbered as metallization levels within the package substrate stack 201. The highest metallization level may be the Nth or N+mth level, formed over multiple metallization layers N-1, N-2, etc., that are successively more deeply embedded in the package dielectric material within the package substrate stack 201, on or closest to the first (e.g., top) side of the package substrate. Usually, the lowest-level metallization (including, for example, die interconnects) is the metallization level closest to the second (e.g., bottom) side of the package substrate stack 201. As an example, a copper layer may be sputtered and plated, or a copper foil may be laminated, as metallization layer N-1 of the in-process package substrate stack 201. The copper layer may have a thickness of 5 to 50 μm and may be patterned to include metal features such as interconnects for attaching the package substrate stack 201 to an IC die or host component (not shown). Metallization level N-1 is patterned to include a metallization feature 203. In embodiments where metallization level N-1 is plated, the metallization feature 203 may be formed by a semi-additive process (SAP); for example, a plating mask may be used to form the metallization feature 203.
In embodiments where metallization level N-1 is laminated as a foil or plated without a mask, the metallization feature 203 may be formed by a subtractive process; for example, a masked etch (e.g., wet chemical etch) may be used to form the metallization feature 203. The dielectric 202 may be deposited over metallization level N-1, embedding the metallization feature 203 in the dielectric 202 at a depth h1 below the top surface 204. The metallization feature 203 may have any shape in the plane of metallization level N-1. The metallization feature 203 may have, for example, lateral dimensions (x- and y-dimensions) of about 500 μm to 20 mm and a thickness (e.g., z-dimension) in the range of 15 to 200 μm. Although not shown, coplanar metallization features within metallization level N-1 may be adjacent to the metallization feature 203.

In FIG. 2B, an opening 205 is formed in the overlying dielectric 202. In one embodiment, the opening 205 is laser-drilled to the depth h1 to expose at least a portion of the metallization feature 203. One or more openings 205 may be formed by laser ablation of the dielectric material, e.g., by the intense heat generated by the laser energy. For example, a CO2 or Nd:YAG laser may serve as the laser source. The metallization feature 203 can block the laser beam (e.g., as a laser stop layer), preventing the beam from penetrating into the dielectric material deeper within the package substrate stack 201. As a result of the laser ablation of the dielectric 202, the sidewall 206 may be tapered as shown. The tilt angle θ1 of the sidewall 206 may be in the range of 45° to 85° relative to the plane of metallization level N-1. As a result of the sloped sidewalls, the opening 205 may have a larger span at its mouth (e.g., at the intersection of the sidewall 206 and the surface 204); for example, the distance d2 may be larger than d1 by approximately 2·h1·tan(π/2-θ1).
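The taper geometry of a laser-drilled opening can be restated numerically: with sidewalls tilted at angle θ1 from the metal plane and a depth h1, each sidewall runs out laterally by h1·tan(π/2-θ1). The helper below is a hypothetical illustration with made-up dimensions.

```python
import math

def top_span(bottom_span, depth, sidewall_angle_deg):
    """Span at the mouth of a tapered opening whose sidewalls are
    tilted at the given angle from the underlying metal plane.
    Each sidewall runs out laterally by depth * tan(pi/2 - theta)."""
    runout = depth * math.tan(math.pi / 2 - math.radians(sidewall_angle_deg))
    return bottom_span + 2 * runout

# Illustrative values: 1 mm bottom span, 100 um deep, 75 deg sidewalls.
d_top = top_span(1000.0, 100.0, 75.0)  # all dimensions in micrometers
print(f"{d_top:.1f} um")
```

At 90° the run-out vanishes and the opening is straight-walled; shallower sidewall angles widen the mouth, which is one reason the stop-layer feature must extend an overlap margin beyond the opening.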
The laser may slightly ablate some metal from the surface of the metallization feature 203 itself, leaving scalloping or other damage artifacts indicative of the laser drilling process. For example, ablation of the metallization feature 203 may excavate the metal to a depth in the range of 100 nm to 2-3 μm. In addition, unablated residue of the overlying dielectric may remain on the surface of the laser stop layer in the form of small particles of inorganic filler material.

In some other embodiments, the opening 205 may be formed by a masked wet or dry etch process. In a dry etch process, the sidewall 206 may be substantially straight. In such embodiments, the metallization feature 203 may serve as an etch stop layer, since the rate at which the etchant attacks a metal such as copper is considerably lower than the rate at which it attacks the organic material; the metallization feature 203 may thereby protect the underlying dielectric material from the etch process.

The opening 205 may be formed, for example, to a depth h1 (e.g., z-height) of 15 to 200 μm. The opening 205 may have a length d1 in the range of 500 μm to 15 mm, measured, for example, at the bottom of the opening 205. The metallization feature 203 may be separated by a distance d2 at the bottom of the opening 205 and may extend laterally, for example, a distance d3 from the base of the sidewall 206. In the example shown, the metallization feature 203 may have a length of d1 + 2·d3 (e.g., in the x-dimension), where d3 is sufficient to ensure a safe overlap margin between the opening 205 and the metallization feature 203. The perimeter of the opening 205 may have any shape within the footprint of the metallization feature 203.
Since the opening 205 can have lateral dimensions many times larger than the beam width of the laser drill, the laser beam may be rastered over an arbitrary region to drill the opening within the length and width limits defined by the edges of the metallization feature 203.

In FIG. 2C, a magnetic material 207 is formed in the opening 205, covering the metallization feature 203 and the sidewall 206. Suitable magnetic materials may comprise non-conductive magnetic filler particles, such as ferrite or iron oxide powder, dispersed in a matrix material. In certain embodiments, an organic matrix comprises a cross-linking agent and a polymer precursor activated by heat and/or light; once deposited within the opening 205, the matrix cures to a solid in contact with the sidewall 206 and the metallization feature 203. The magnetic material 207 may be deposited in the opening 205 as a moldable paste or ink and then cured, for example, by thermal and/or light treatment. The deposition process may include screen printing or inkjet printing the material into the opening 205. During deposition, the magnetic material 207 may fill the opening, overflowing laterally and vertically (z) to extend above the surface 204. A polishing or grinding operation may then planarize the magnetic material 207 flush with the surface 204, substantially as shown in FIG. 2C.

In FIG. 2D, another metallization level N is formed over the dielectric 202 and the magnetic material 207. Metallization level N may comprise a copper foil, with a thickness between surfaces 204 and 208 in the range of 5 to 50 μm. In other embodiments, metallization level N may be formed by electroplating or sputtering copper or another suitable metal over the planarized surface 204 and magnetic material 207.
For electroplating, a thin conductive seed layer comprising copper, gold, silver, or another suitable metal may be formed as a cathode in a preliminary operation. A seed layer may also be formed to promote nucleation of metal precursors in a CVD environment. An adhesion layer comprising chromium may be formed before the seed layer.

In FIG. 2E, a preliminary metallization feature 209 is formed in metallization level N over the magnetic material 207, extending laterally a distance d3 beyond the top boundary of the magnetic material 207, as defined by the intersection of the magnetic material edge 210 and the surface 204. The preliminary metallization feature 209 is one of a plurality of features formed in metallization level N and is coplanar with adjacent features, such as interconnect traces, pads, and other structures not illustrated. In this example, the preliminary metallization feature 209 is patterned by a subtractive process, for example an etch process using an etch mask. In other embodiments, the preliminary metallization feature 209 is selectively formed by SAP; for example, the preliminary metallization feature 209 may be plated through a patterned plating mask (not shown).

The preliminary metallization feature 209 may have any suitable shape; because it extends laterally on the surface 204 beyond the top boundary of the magnetic material 207 (e.g., by the distance d3), it is suitable as a laser or etch stop layer during subsequent formation of openings above metallization level N. The preliminary metallization feature 209 may have, for example, a lateral dimension of 500 μm to 20 mm, or greater than the distance d2. The preliminary metallization feature 209 may have a thickness (e.g., z-height) between 2 and 15 μm. Depending on whether a subtractive or additive process is performed, the sidewall 211 may be substantially vertical and/or have a rounded top edge, or may have a curvature indicative of an isotropic etch process.

In FIG. 2F, a dielectric 212 is formed over the preliminary metallization feature 209 and the dielectric surface 204. In certain embodiments, the dielectric 212 is laminated as a dielectric sheet over the package substrate stack 201, for example in a hot-roller or vacuum lamination process. The dielectric 212 may be substantially the same as the dielectric 202. In one embodiment, the dielectric 212 has a thickness h2 of 10 to 50 μm between the preliminary metallization feature 209 and the surface 213.

In FIG. 2G, an opening 214 is formed through the dielectric 212 to a depth h2. In certain embodiments, the opening 214 is formed by a laser drilling process (e.g., similar to that used to form the opening 205) or by any suitable etch process. The opening 214 has a length d4 and exposes the preliminary metallization feature 209. The sidewall 215 may be tilted at an angle θ2 (e.g., between 45° and 85°) relative to the plane of the preliminary metallization feature 209, for example as a result of a laser drilling process. Portions of the preliminary metallization feature 209 not exposed by the formation of the opening 214 remain beneath the dielectric 212, extending laterally a distance d5 from the sidewall 215 of each opening, where d5 is approximately h2·tan(π/2-θ2).

In FIG. 2H, a photoresist 216 is deposited over the dielectric 212 and the opening 214, conformally coating the preliminary metallization feature 209 at the bottom of the opening 214 as well as the sidewalls 215. The photoresist 216 may be, for example, a dry film resist (DFR) with a thickness in the range of 10 μm to 100 μm. The DFR may be applied by vacuum or hot lamination, during which it may soften and/or mold to conform to the opening 214. In FIG.
2I, a photolithography process patterns the photoresist 216 (indicated by the dashed outline at top) to include strips 217 and openings 218 extending lengthwise in the y-dimension. In one example, the preliminary metallization feature 209 is etched according to the strip 217 and opening 218 pattern of the photoresist 216 to form an in-plane inductor routing structure 219 (indicated by the dashed silhouette below) comprising traces 220 (shown in cross-section). In the embodiment shown, each trace 220 has a line width of 5 to 50 μm, with a minimum spacing s1 of 10 to 100 μm between the traces 220. Because the opening 214 partially overlaps the edge of the underlying preliminary metallization feature 209, a peripheral "ring" structure 221 comprising the masked portion of the preliminary metallization feature 209 remains adjacent to the inductor traces 220 (also visible in the plan view of FIG. 2M). The ring structure 221 is electrically isolated from the inductor traces 220 and is indicative of the practice of method 100. In the embodiment shown, the traces 220 have substantially uniform line widths and spacings, but in other embodiments some traces 220 may have different line widths and spacings; for example, the spacing s2 between the ring structure 221 and the terminal trace 220 may differ from the minimum trace-to-trace spacing s1.

As shown in FIG. 2J, removal of the photoresist 216 exposes the etched inductor traces 220, with the exposed portion of the ring structure 221 extending into the opening 214. The inductor traces 220 and the ring structure 221 may have a thickness in the range of 5-50 μm. The sidewalls 222 of the inductor traces 220 have a sloped and/or curved profile 223 indicative of a wet (e.g., isotropic) chemical etch patterning process.
The sloped sidewalls 222 give the inductor traces 220 a substantially trapezoidal cross-sectional profile, for example as further shown in the inset. The trapezoidal profile of the inductor traces 220 may result from a subtractive isotropic wet etch process, in which lateral chemical etching proceeds simultaneously with vertical chemical etching in the exposed metal regions. As a result, the sidewalls 222 have a curved negative taper, with some minimum spacing s1 between the inductor traces 220.

In certain embodiments, the outer sidewall 224 of the ring structure 221 has a substantially vertical, straight profile indicative of, for example, semi-additive patterning of the preliminary metallization feature 209, in which the metallization feature is electroplated through a patterned plating mask. Openings in the plating mask may have straight sidewalls that are substantially vertical, or tilted less than 10° from vertical. The sidewall 224 may also have a rounded top edge 225, likewise indicative of semi-additive patterning of the preliminary metallization feature 209. As shown in the inset of FIG. 2J, the outer sidewall 224 of the ring structure 221 was not exposed to the subtractive etch, whereas the inner sidewall 226 of the ring structure 221 was exposed within the opening 214 and was therefore subjected to the subtractive etch process used to pattern the inductor traces 220. Thus, in addition to the presence of the ring structure 221 being indicative of method 100, the difference in profile between the inner and outer sidewalls 226, 224 is also indicative of the fabrication technique used.

As shown in FIG. 2K, a magnetic material 227 is deposited in the opening 214 (e.g., as shown in FIG. 2J), covering the inductor traces 220 and the exposed adjacent portion of the ring structure 221. The magnetic material may have, for example, lateral dimensions similar to those of the opening 214 (e.g., 500 μm to 15 mm) and a thickness (z-height) between 5 μm and 100 μm.
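A rough sense of how the trapezoidal (negatively tapered) cross-section affects DC resistance can be obtained from R = ρ·l/A. The copper resistivity and the trace dimensions below are illustrative assumptions, not values from this disclosure.

```python
def trapezoid_area(top_w, bottom_w, thickness):
    """Cross-sectional area of a trapezoidal trace, in um^2."""
    return 0.5 * (top_w + bottom_w) * thickness

def dc_resistance(length_um, area_um2, rho_ohm_m=1.7e-8):
    """R = rho * l / A, converting micrometer inputs to meters.
    Default rho is an illustrative bulk-copper value."""
    return rho_ohm_m * (length_um * 1e-6) / (area_um2 * 1e-12)

# Illustrative trace: 5 mm long, 30 um thick, 40 um wide at the base,
# necked to 20 um at the top by the lateral component of the wet etch.
a_trap = trapezoid_area(20.0, 40.0, 30.0)  # trapezoidal cross-section
a_rect = 40.0 * 30.0                       # rectangular reference
print(dc_resistance(5000.0, a_trap) / dc_resistance(5000.0, a_rect))
```

In this sketch the undercut removes a quarter of the conducting area, raising the winding's DC resistance by the same factor for a given length, which is why the etch-driven taper matters for inductor loss.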
In the embodiment shown, the ring structure 221 extends across the interface 228 at the boundary between the magnetic material 227 and the dielectric 212: the inner sidewall 226 is within the magnetic material 227, while the outer sidewall 224 is embedded in the dielectric 212.

In certain embodiments, the magnetic material 227 is substantially the same material as the magnetic material 207 described above. In other embodiments, the magnetic material 227 is a different material than the magnetic material 207. The magnetic material 227 may have, for example, a relative permeability of 5 to 10. The magnetic material 227 may be formed by printing the material into the opening 214. In certain embodiments, the magnetic material 227 has a relatively low initial viscosity and may be inkjet-printed directly into the opening 214. In other embodiments, the magnetic material 227 is part of a paste that is spread over the surface 228 of the dielectric 212 and fills the opening 214. The magnetic material 227 may contact the underlying magnetic material 207 through the openings 218 between the inductor traces 220, forming a continuous mass encapsulating the inductor traces 220. Excess material may be removed from the surface 229, the magnetic material 227 remaining in the opening 214 at the surface 228.

A thermal or photochemical curing process may subsequently be performed to cure the magnetic material 227. The inductor traces 220 are then completely embedded in magnetic material (e.g., the lower magnetic material 207 and the upper magnetic material 227). The combined magnetic materials may form a magnetic core enclosing the inductor traces 220. The in-plane inductor structure 219 is completely embedded in the dielectric of the package substrate 200.
The magnetic core in which the inductor traces 220 are completely embedded is formed from the combined magnetic materials 207, 227; the total thickness may be in the range of 10 to 200 μm, with a relative permeability between 5 and 10. The ring structure 221 extends across the interface between the magnetic material 227 and the adjacent dielectric 212, with the outer sidewall 224 embedded in the dielectric 212. The magnetic material 227 may be planarized flush with the surface 229 by a polishing and/or grinding process (e.g., chemical mechanical polishing, CMP).

As shown in FIG. 2L, fabrication of the in-plane inductor structure 219 is substantially complete. The magnetic material 227 is capped with dielectric material, for example by laminating a dielectric 230 over the surface 229, to complete the package substrate 200 or to receive subsequent metallization levels.

FIG. 2M shows a plan view of metallization level N in the x-y plane. The line A-A′ crossing the plan view indicates the position of the cross-section shown in FIG. 2L. As shown in FIG. 2M, the inductor traces 220 form a continuous serpentine trace structure 231 comprising a plurality of interconnected parallel segments. The inductor structure 219 crosses the A-A′ plane and is terminated by interconnect pads 232. The interconnect pads 232 may be via caps, vertically interconnected to higher-level (e.g., N+1) and/or lower-level (e.g., N-1) metallization within the package substrate 200. The inductor traces 220 may be interconnected through lower metallization levels to package land pads at the bottom of the package substrate 200 (not shown) for attachment to external circuitry (e.g., socketed or surface-mounted directly to a printed circuit board).

The ring structure 221 is shown as a rectangular perimeter feature surrounding the inductor traces 220. In the embodiment shown, the inductor structure 219 is bounded within the dielectric 212 by the outer sidewall 224 of the ring structure 221.
In some embodiments, the ring structure 221 is interconnected with other metallization features. For example, the ring structure 221 may be electrically grounded, coupled to a ground plane via vertical or lateral trace routing (not shown) within the package substrate, or within a printed circuit board electrically coupled to the package substrate. In certain other embodiments, the ring structure 221 may be electrically floating, or connected to any reference voltage source.

FIGS. 3A-3G show cross-sectional views of typical structures formed at different stages of an exemplary process flow for forming a package substrate 300 with a package-embedded inductor, according to another embodiment of method 100.

The process shown in FIG. 3A may be preceded by process operations similar to those shown in FIGS. 2A-2D. The package substrate stack 301 may be received, for example, as an in-process structure as shown in FIG. 2C. The description of the metal and dielectric structures shown in FIGS. 2A-2C therefore also applies to the package substrate stack 301.

In FIG. 3A, a preliminary metallization structure 302 and adjacent trace routing 303 are formed at level N by a semi-additive process (SAP). SAP may comprise, as an example, depositing a suitable metal (e.g., copper) into lithographically defined plating-mask openings formed in a previous operation over the surface 204 of the dielectric 202 and the magnetic material 207. The preliminary metallization structure 302 and the adjacent trace routing 303 may also be plated into lithographically defined openings in a photoresist deposition mask formed over the surface 204 in a previous operation (not shown). The electroplated structures may have a thickness h3 in the range of 2 to 50 μm.
The preliminary metallization feature 302 has lateral dimensions in the x-y plane in the range of 500 μm to 20 mm, covering the magnetic material 207 and extending beyond its top edge.

The inset of FIG. 3A shows an enlarged view of the preliminary metallization structure 302 and the trace routing 303. The sidewalls 304 may have a substantially vertical profile indicative of a semi-additive process (e.g., a metal electroplating process), as described above. The top edge 305 may have a rounded profile, as shown in the inset. The spacing between the sidewalls 304 is characterized by a minimum spacing s3, which can be scaled with the technology node and is usually smaller than the minimum spacing achievable by a subtractive process.

In FIG. 3B, a dielectric 306 is formed over the metallization structures and the exposed dielectric surface 204. The dielectric 306 may be formed as described above; it may be conformally laminated over the surface 204, for example by a hot-roller or hot vacuum lamination process, filling the spaces between the metallization structures. The dielectric 306 may have a thickness in the range of, for example, 5 to 100 μm.

In FIG. 3C, an opening 307 and a via opening 308 are formed in the dielectric 306 over the preliminary metallization feature 302 and a pad 309, respectively. The opening 307 may be formed by laser drilling, as described above, to a depth determined by the distance h4 between the surface 310 and the preliminary metallization feature 302. As noted above, the preliminary metallization feature 302 can prevent the laser from penetrating into the magnetic material 207. In other embodiments, the opening 307 may be formed by a chemical etch. The sidewalls 311 may be tilted between 45° and 85° from the plane of the preliminary metallization feature 302, again potentially indicative of a laser drilling process.
The opening 307 may have at least one lateral dimension d6 (measured at the bottom of the opening) in the range of 500 μm to 15 mm.

The via opening 308 may likewise be formed by a laser drilling operation, to a depth h3 (e.g., 100 μm) at which a portion of the pad 309 is exposed. In some other embodiments, the via opening 308 may instead be formed by a suitable etch process. The via opening 308 may have a circular cross-section in the x-y plane, although other suitable cross-sectional profiles are possible. In certain embodiments, the sidewall 312 of the via opening 308 may be tilted between 45° and 85°.

In FIG. 3D, a metallization layer 313 is conformally formed over the opening 307, the via opening 308, the exposed portion of the preliminary metallization feature 302, and the surface 310 of the dielectric 306. For example, electrolytic or electroless plating may deposit copper or another suitable metal, filling the via opening 308 to form a via 314 over the pad 309. The metallization layer 313 has a thickness h5 (e.g., up to 50 μm) over the surface 310 and the preliminary metallization feature 302. The deposited metallization layer 313 may increase the metal thickness of the exposed portion of the preliminary metallization feature 302 by approximately h5. This increase in metal thickness at the bottom of the opening 307 can be significant for lowering the resistance of the inductor traces.

In FIG. 3E, a photoresist 315 is formed over the metallization layer 313. In certain embodiments, the photoresist 315 is a dry film resist (DFR) laminated over the metallization layer 313, e.g., substantially as described above.

In FIG. 3F, a subtractive process (e.g., isotropic wet chemical etch) patterns the metallization layer 313, simultaneously defining features at metallization levels N and N+1.
The metallization layer 313 is patterned into a plurality of inductor traces 316 at level N, and into trace routing 317, including via pads 318, at level N+1, formed over the surface 310 of the dielectric 306. The inductor traces 316 are a plurality of interconnected parallel traces that may form one or more in-plane serpentine inductor windings. The sidewalls 319 have a trapezoidal profile in the x-z plane, which may be indicative of isotropic etching as described above. The inductor traces 316 have a thickness h6 (e.g., z-height) and a width w4 (e.g., 20-50 μm), and may be separated by a minimum spacing s4 (e.g., 20-50 μm). The z-height h6 may be approximately the sum of h3 and h5. The minimum spacing s4 of the inductor traces 316, formed by the subtractive isotropic etch process, may be significantly larger than the minimum spacing s3 of the SAP structures 303 within the same metallization level.

Metallization structures at level N+1, such as the trace routing 317, may have a thickness (z-height) h5, as shown in the example. The trace routing 317 has a minimum pitch expected to be somewhat larger than that of the features 303, with the minimum pitch of the inductor traces 316 larger still, as a result of the difference between the thicknesses h5 and h6. The greater thickness of the inductor traces 316 relative to the other metallization structures can reduce the resistance of the inductor winding.

At level N, a ring structure 320 is formed at the same time as the inductor traces 316. As shown in FIG. 3F, the ring structure 320 extends laterally beneath the dielectric 306 a distance d7 from the sidewall 311, where it is protected from chemical attack. The ring structure 320 is etched back toward the sidewall 311 of the opening 307. The inner sidewall 319 of the ring structure 320 may have a concave profile indicative of isotropic etching. The ring structure 320 may be fully etched back to the opening sidewall 311, or a portion of the ring structure 320 may remain within the opening 307.
The width w3 of the ring structure 320 may depend on the etch rate and time. The ring structure 320 has a thickness h3 < h6. The ring structure 320 surrounds the inductor traces 316 but may be electrically isolated from them. In certain embodiments, the ring structure 320 is interconnected with a ground plane or ground metallization, for example to provide a grounded guard ring around the inductor traces 316.

In FIG. 3G, a magnetic material 321 is formed in the opening 307, encapsulating the inductor traces 316. The magnetic material 321 may be planarized flush with the surface 310, as shown. In certain embodiments, the magnetic material 321 has substantially the same composition as the magnetic material 207; other suitable compositions are also possible. The magnetic material 321 may contact the magnetic material 207 below the inductor traces 316, forming a continuous magnetic core enclosing the inductor traces 316. In some embodiments, the inductor traces 316 are interconnected to form a serpentine structure similar to the inductor structure 231 shown in FIG. 2M. The ring structure 320 may extend substantially across the interface between the magnetic material 321 and the adjacent dielectric 306, as shown in FIG. 2M.

A dielectric 322 is formed on the surface 310, embedding the metallization structures 317 and 318 and capping the magnetic material 321 and the dielectric 306 to complete the package substrate 300. The dielectric 322 may be the same as or similar to the package dielectric materials described above, and may be laminated as described above.
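The resistance benefit of thickening the inductor metal (h6 being approximately h3 + h5, per the description above) can be sanity-checked with a sheet-resistance calculation; resistance per square scales inversely with film thickness. The thicknesses and resistivity below are hypothetical illustration values.

```python
def sheet_resistance(thickness_um, rho_ohm_m=1.7e-8):
    """Sheet resistance (ohms per square) of a metal film of the
    given thickness; the default resistivity is an illustrative
    room-temperature figure for bulk copper."""
    return rho_ohm_m / (thickness_um * 1e-6)

# Hypothetical thicknesses: a 15 um plated base (h3) plus 25 um
# added by a later conformal plating step (h5), 40 um total (h6).
h3, h5 = 15.0, 25.0
r_before = sheet_resistance(h3)
r_after = sheet_resistance(h3 + h5)
print(f"resistance reduced by {(1 - r_after / r_before) * 100:.0f}%")
```

Because resistance per square falls as 1/thickness, the added plating at the bottom of the opening directly lowers the winding's DC loss, consistent with the motivation given for the thicker inductor traces.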
In one embodiment, formation of the dielectric 322 substantially completes fabrication of the embedded inductor structure 323 (enclosed by the dashed outline).

FIGS. 4A-4G show cross-sectional views of typical structures formed at different stages of an exemplary process flow for forming a package substrate 400 with a package-embedded inductor, according to another embodiment of method 100.

The process shown in FIG. 4A may be preceded by operations similar to those shown in FIGS. 3A-3B. The package substrate stack 401 may be received, for example, as the in-process structure shown in FIG. 3B. The description of the resulting metal and dielectric structures shown in FIGS. 2A-2C therefore also applies to the package substrate stack 401.

In FIG. 4A, the package substrate stack 401 has substantially the same structure as the package substrate stack 301 of FIG. 3C, comprising a dielectric 202 and a magnetic material 207 embedded in the dielectric 202. The metallization feature 203 is at metallization level N-1, directly below the magnetic material 207. The trace routing 303 is coplanar with the preliminary metallization feature 302 at metallization level N, where the preliminary metallization feature 302 covers the magnetic material 207. A laser drilling process, for example, may form an opening 307, exposing a portion of the preliminary metallization feature 302 and forming sloped sidewalls 319, as described above. The preliminary metallization feature 302 and the adjacent coplanar trace routing 303 at level N may be formed by the semi-additive plating process described above. In the example shown, the preliminary metallization feature 302 and the trace routing 303 have substantially vertical sidewalls. Features at level N (e.g., the preliminary metallization feature 302 and the adjacent trace routing 303) may have a thickness h3 (e.g., a z-height of 5 to 50 μm). In FIG.
4B, a photoresist 402 is conformally deposited over the surface 310, covering the sidewalls 319 and the preliminary metallization feature 302. In certain embodiments, the photoresist 402 is a laminated DFR, as described above, having a thickness of, for example, 10 to 100 μm.

In FIG. 4C, the photoresist 402 is patterned into an etch mask with openings 403. Inductor traces 404 and a ring structure 405 may be formed by isotropic etching through the openings 403. The inductor traces 404 and the surrounding ring structure 405 have concave sidewalls 406 and 407, respectively. The inductor traces 404 are separated by a minimum spacing s6, as shown. The spacing s6 may be significantly greater than the minimum spacing s5 between adjacent trace routing structures 303.

In FIG. 4D, a magnetic material 408 is deposited in the opening 307, encapsulating the traces 404. The magnetic material 408 is shown overflowing the sidewalls 319, forming an overhang 409 extending over the adjacent region of the dielectric 306. The magnetic material 408 may comprise, for example, substantially the same material as the magnetic material 207. The magnetic material 408 extends between the inductor traces 404 and contacts the magnetic material 207 below them, forming a continuous magnetic core that completely encloses the inductor traces 404.

In FIG. 4E, a via opening 410 is formed in the dielectric 306 adjacent to the sidewall 315. The via opening 410 may be formed by a laser drilling process as described above. Prior to formation of the via opening 410, the magnetic material 408, along with the overhang 409, may be planarized.

In FIG. 4F, level N+1 trace routing 411, including a via pad 412 over a via 413, may be formed by a semi-additive plating process adjacent to the sidewall 311 and over the adjacent surface 310. Prior to the formation of the structures 411 and 412, the via opening 410 may be filled to form the via 413 by an electrolytic or electroless plating process.
As a result of the difference in feature resolution between the subtractive and SAP fabrication techniques, the trace routing 411 at level N+1 may have a minimum spacing that is the same as or similar to the spacing s5 of the trace routing 303 at level N.

In FIG. 4G, a dielectric 414 is laminated over metallization level N+1, the magnetic material 408, and the surface 310, substantially completing fabrication of the embedded inductor structure 415 (enclosed by the dashed outline). The inductor structure 415 comprises the magnetic materials 207 and 408 enclosing the inductor traces 404.

FIGS. 5A-5G show cross-sectional views of typical structures formed at different stages of an exemplary process flow for forming a package substrate 500 with a package-embedded inductor, according to another embodiment of method 100.

In FIG. 5A, the package substrate stack 501 has a metallization feature 502 embedded in a dielectric 503 at level N+1. The magnetic material is incorporated into the package substrate stack 501 as one layer of a thin multilayer sheet laminated over the dielectric 503. The multilayer sheet has a layer of high-permeability material 504, such as a nickel-iron (NiFe) or nickel-cobalt (NiCo) alloy, with, for example, a relative permeability in the range of 10,000 to 30,000. The magnetic material 504 may be bonded to a non-magnetic material 505 having a thickness in the range of, e.g., 5 to 500 μm. The non-magnetic material 505 may be copper or another conductive material suitable for metallization level N of the package substrate stack 501.

In FIG. 5B, a photoresist 506 is deposited over the multilayer sheet and patterned. The photoresist 506 may be a DFR or a liquid resist. A suitable etch process may etch through at least the non-magnetic material 505, and further through the magnetic material 504.
Here, the magnetic material may itself be highly conductive (eg, NiFe, NiCo, etc.). As shown in the figure, the patterned structure 507 has a preliminary metallization feature 508, etched from the non-magnetic material 505, above a magnetic thin-film feature 509 etched from the magnetic material 504. The preliminary metallization feature 508 and the magnetic thin-film feature 509 may have lateral dimensions ranging from, for example, 500 μm to 20 mm. The metals of both films may be attacked using a suitable etching process, such as a wet etching bath of acid and/or oxidant. In the example shown, isotropic wet etching forms undercut, concave side walls 510 and 511 of the preliminary metallization feature 508 and the magnetic thin-film feature 509, respectively. Because the different materials etch at different rates, the amount of recess in the illustrated side walls differs slightly between the two films, as shown by the lateral displacement of the side walls 510 and 511.

In FIG. 5C, the dielectric 512 is formed on top of the dielectric 503 and the preliminary metallization feature 508 by a laminating process, or by another suitable method as described above. The opening 513 is formed over the preliminary metallization feature 508 by a laser drilling or etching process as described above. The formation of the opening 513 may expose a portion of the preliminary metallization feature 508 at the bottom of the opening 513. The exposed portion of the preliminary metallization feature 508 may have, for example, a lateral dimension (eg, length) d8 in the range of 500 μm to 15 mm. The opening 513 has a side wall 514. In one embodiment, the side wall 514 is tilted at an angle θ3 (eg, 45° to 85°) with respect to the plane of the preliminary metallization feature 508.

In FIG. 5D, the photoresist 515 is conformally placed over the package substrate stack 501, covering the dielectric 512, the side wall 514 of the opening 513, and the preliminary metallization feature 508.
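The tilted side walls described above (eg, the side wall 514 at angle θ3 of 45° to 85°) make such an opening wider at its top than at its bottom by a simple trigonometric relation. The following is an illustrative sketch only; the helper name and the specific dimensions (a 500 μm bottom width, the low end of the d8 range, and an assumed 50 μm opening depth) are not taken from the disclosure.

```python
import math

def top_opening_width(bottom_width_um: float, depth_um: float, tilt_deg: float) -> float:
    """Width at the top of a drilled/etched opening whose side walls are
    tilted at `tilt_deg` from the plane of the underlying feature.
    Each side wall runs outward by depth / tan(tilt) from bottom to top."""
    run = depth_um / math.tan(math.radians(tilt_deg))
    return bottom_width_um + 2.0 * run

# Assumed example: 500 um bottom width, 50 um deep opening.
w45 = top_opening_width(500.0, 50.0, 45.0)  # shallow 45 deg walls -> 600 um
w85 = top_opening_width(500.0, 50.0, 85.0)  # near-vertical 85 deg walls -> ~508.7 um
```

The comparison illustrates why a steeper side-wall angle wastes less lateral area around a laser-drilled cavity.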
The photoresist 515 may be, for example, a laminated DFR. The photoresist 515 may be patterned to form an etching mask over the preliminary metallization feature 508 and the magnetic thin-film feature 509.

In FIG. 5E, the stripes and openings of the photoresist 515 (not shown) formed by lithography are transferred into the preliminary metallization feature 508 and the magnetic thin-film feature 509. Both structures are patterned simultaneously, for example by isotropic wet etching, forming the concave side walls 516 and 517 of multiple etched copper inductor traces 518 and of the magnetic strips 519 below them, respectively. The side walls 516 and 517 may have substantially the same profiles as the side walls 510 and 511, respectively, obtained in the process shown in FIG. 5B. The inductor traces 518 may be separated by a minimum spacing s7 (eg, 5 to 50 μm). Openings 520 are etched into the magnetic thin-film feature 509 to form the plurality of magnetic strips 519 below the inductor traces 518, exposing portions of the dielectric 503 between the magnetic strips 519. The openings 520 (drawn with dashed outlines) are aligned with the spaces 521 between the inductor traces 518.

A ring structure 522 is formed from the copper film around, and separated from, the inductor traces 518 by the patterning process. A portion of the ring structure 522 and the underlying magnetic material 509 extends from the side wall 514 through the dielectric 512. The ring structure 522 has an outer side wall 510, formed during the etching process shown in FIG. 5B (eg, before the formation of the dielectric 512), and an inner side wall 516, obtained during the formation of the inductor traces 518. In certain embodiments, the ring structure 522 may be interconnected with the inductor traces 518. In one embodiment, the ring structure 522 is electrically isolated from the inductor traces 518.
In the embodiment shown, the ring structure 522 extends asymmetrically from, for example, the opening side wall 514 to provide a land pad for a vertical interconnect, as shown below.

In FIG. 5F, the magnetic material 523 is formed in the opening 513. In certain embodiments, the magnetic material 523 may have a composition similar to, or substantially the same as, the magnetic materials described above (eg, magnetic material 207). The magnetic material 523 has a different composition than the magnetic strips 519. As described above, the magnetic material 523 may be formed by, for example, an inkjet or screen-printing method. In the embodiment shown, the magnetic material 523 has a surface 524 that is flat and flush with the surrounding dielectric 512. After deposition and curing of the magnetic material 523, polishing or grinding operations (not shown) may be performed to planarize it with the surrounding dielectric 512.

The magnetic material 523 extends through the openings 520 and the spaces 521 to the floor of the (former) opening 513 and contacts the top surface of the dielectric 503, sealing the inductor traces 518 in magnetic material and electrically isolating the high-permeability magnetic strips 519. The inductor traces 518 are completely sealed within magnetic material by the magnetic strips 519 (below) and the magnetic material 523 (above and adjacent to the side walls).

In a subsequent operation, a metallization feature 525 is formed at level N+1 on the surface 524. The metallized feature 525 may be formed by a semi-additive process, for example by plating copper or another suitable metal through a plating mask, as described above. The side walls 526 of the metallization feature 525 are substantially vertical (tilted 10° or less) and may have rounded upper ends, characteristic of semi-additive formation of a metal structure by deposition within a plating mask.
The metallization feature 525 may have a minimum spacing s8 smaller than the minimum spacing s7 between the inductor traces 518 at level N.

In FIG. 5G, a dielectric 527 is formed on top of the package substrate stack 501, covering the dielectric 512 and the metallized feature 525. The magnetic material 523 is capped by the dielectric 527 and is completely embedded in the package substrate stack 501. Subsequent operations form a level N+2 metallized feature 528 on top of the dielectric 527. The metallization feature 528 may be formed by a semi-additive plating process (or a subtractive metal etching process) and may be, for example, an upper-level interconnect pad capable of receiving solder bumps or other interconnects. A via 529 may be formed by plating an opening, formed through both dielectrics 527 and 512, that exposes the ring structure 522. The via 529 interconnects the ring structure 522 (at level N) with the level N+2 metallization. A via cap 530 may be formed at the same time as the metallization feature 528. In certain embodiments, the formation of the level N+2 metallized features substantially completes the fabrication of the package substrate 500. In the example shown, the via 529 interconnects the ring structure 522 with the upper metallization level N+2. The ring structure 522 may be interconnected with an external grounded circuit (eg, on a printed circuit board) through the via 529.

As shown in the illustrated embodiment, the magnetic strips 519 have a significantly smaller z-height than the magnetic material 523 and may occupy a relatively small portion of the magnetic core comprising both the magnetic material 523 and the magnetic strips 519. In the embodiment shown, however, the high permeability of the magnetic strips 519 dominates the overall permeability of the magnetic core of the inductor structure 531 (eg, magnetic material 523 and magnetic strips 519, shown as a structure enclosed within a dashed outline).
The permeability of the composite core may be several thousand times higher than that of the magnetic material 523 alone. The significantly increased permeability can reduce the z-height requirement of the composite core structure and thus the overall z-height of the package substrate 500.

FIGS. 6A-6R show cross-sectional views of exemplary structures formed at different stages of an example process flow for forming a package substrate 600 with a package-embedded inductor, according to another embodiment of method 100.

In FIG. 6A, the package substrate stack 601 has a dielectric 602 and an embedded metallized feature 603 (level N-1). A preliminary metallization feature 604 and adjacent metal features (eg, via pad 605) are formed at level N above the dielectric 602 in previous metallization operations, which may include the semi-additive or subtractive metallization processes described above. Vias 606 and 607 interconnect the level N metallized structures, including the preliminary metallization feature 604, to the level N-1 metallized structure 603.

In FIG. 6B, a dielectric 608 is formed on top of the dielectric 602, covering the preliminary metallization feature 604 and the adjacent metallization (eg, pad 605). As shown in FIG. 6C, an opening 609 is formed in the dielectric 608 by the laser drilling or etching operations described above. The opening 609 is formed to a depth h7 in the range of, for example, 5 to 100 μm, exposing a portion of the preliminary metallization feature 604. In the embodiment shown, the side wall 610 has a tilt angle θ4, eg, in the range between 45° and 85°, resulting from, for example, the laser drilling process.

In FIG. 6D, a conformal layer 611 of seed metal is formed over the dielectric 608, the side wall 610, and the preliminary metallization feature 604.
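The composite-core behavior noted above, in which a thin high-permeability strip dominates the permeability of a core otherwise made of moderate-permeability paste, can be sketched with a simple thickness-weighted model for flux flowing parallel to the layers. This model and all numeric values (a 5 μm strip at relative permeability 20,000, representative of the NiFe range given earlier, over 60 μm of paste at relative permeability 8, within the 5-10 range given later) are illustrative assumptions, not figures from the disclosure; real effective permeability also depends on geometry and the actual flux path.

```python
def composite_mu_parallel(layers):
    """Effective relative permeability of stacked magnetic layers for flux
    flowing parallel to the layers: the thickness-weighted average of mu_r.
    `layers` is a list of (thickness_um, mu_r) tuples."""
    total_t = sum(t for t, _ in layers)
    return sum(t * mu for t, mu in layers) / total_t

# Paste alone vs. paste plus a thin high-permeability strip (assumed values):
mu_paste_only = composite_mu_parallel([(60.0, 8.0)])                 # 8.0
mu_composite = composite_mu_parallel([(5.0, 20000.0), (60.0, 8.0)])  # ~1546
ratio = mu_composite / mu_paste_only                                  # ~193x
```

Even in this crude model, a strip occupying under 10% of the core thickness raises the effective permeability by two orders of magnitude, which is the qualitative point the passage makes about reducing core z-height.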
The metal layer 611 may be formed to a thickness h9 in the range of 5 to 50 μm over the dielectric 608 by sputtering or electroless deposition of a suitable metal such as copper. The metal layer 611 also forms over the preliminary metallization feature 604, increasing the overall thickness of the structure approximately to h10, the sum of h7 and h9.

In FIG. 6E, the photoresist 612 is placed across the package substrate stack 601. The photoresist 612 may be, for example, a laminated DFR. The photoresist 612 may have a thickness h11 (eg, in the range of 15 to 150 μm) greater than the depth of the opening 609. The photoresist 612 may be patterned in a lithography operation to form an opening 613, as shown in FIG. 6F.

In FIG. 6G, a pillar 615 is formed in the opening 613. The pillar 615 may be formed by electrolytic (or electroless) deposition of copper or another suitable metal in the opening 613. As shown in the figure, the pillar 615 may fill the opening 613 and extend above the photoresist 612. For example, the photoresist 612 may have a thickness of 15 μm while the pillar 615 grows to a z-height of 20 μm, extending above the photoresist by, for example, 5 μm. In FIG. 6H, the seed metal layer 611 and the preliminary metallization feature 604 are etched to expose the underlying dielectric (eg, dielectrics 602 and 608). A ring structure 616 may remain from this etch.

In FIG. 6I, the magnetic material 617 is formed in the opening 609, embedding the pillar 615. The magnetic material 617 may be any of the magnetic pastes or inks described above in the present disclosure (eg, the same as magnetic material 207). The magnetic material 617 may overfill the opening 609 and spread over the dielectric 608, forming an overhang 618.

In FIG. 6J, the magnetic material 617 and the pillar 615 are planarized together with the dielectric 608. In FIG. 6K, metallization level N+1 is formed on the package substrate stack 601, above the dielectric 608 and the magnetic material 617.
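The thickness bookkeeping in the steps above (h10 as the sum of h7 and h9, and the pillar mushrooming past the resist surface) can be written out as a minimal arithmetic sketch. The helper name and the reuse of the example values (15 μm resist, 20 μm pillar, and assumed h7 = 50 μm, h9 = 20 μm within the stated ranges) are illustrative only.

```python
def pillar_overgrowth_um(resist_thickness_um: float, pillar_height_um: float) -> float:
    """How far a plated pillar extends above the patterned resist when
    plating continues past the resist surface (zero if it stays below)."""
    return max(0.0, pillar_height_um - resist_thickness_um)

# Values from the example above: 15 um resist, pillar plated to 20 um z-height.
over = pillar_overgrowth_um(15.0, 20.0)  # 5 um of overgrowth above the resist

# Stack bookkeeping: opening depth h7 plus seed thickness h9 gives h10.
h7, h9 = 50.0, 20.0   # assumed values within the 5-100 um and 5-50 um ranges
h10 = h7 + h9         # overall structure thickness, 70 um
```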
The preliminary metallization feature 619 and the adjacent metallization (eg, via pad 620) may be formed as SAP metal structures, in which case they have the substantially vertical, linear side walls shown in the figure, or by a subtractive etching process, in which case they have concave side walls. A via 621 interconnecting metallization level N and metallization level N+1 may be formed by the electrodeposition processes described above. The pillar 615 interconnects the preliminary metallization feature 619 (later patterned into the inductor traces) with the lower metallization (eg, metallization structure 603). The preliminary metallization feature 619 may have at least the same lateral dimensions (eg, up to 20 mm) as the magnetic material 617.

In FIG. 6L, a dielectric 622 is formed over the N+1 metallization, covering the preliminary metallization feature 619. In FIG. 6M, an opening 623 is formed in the dielectric 622 (eg, by the laser drilling or etching methods described above). The opening 623 may be the same size (eg, width, depth) as the opening 609 formed by the operation shown in FIG. 6C, or may be smaller. In the operation shown in FIG. 6N, photoresist 624 is conformally placed (eg, laminated or spin-coated) across the package substrate stack 601 to cover the side walls of the opening 623 and the preliminary metallization feature 619. The photoresist 624 provides an etching mask, patterned by lithography, from which an inductor structure may be formed out of the preliminary metallization feature 619.

In FIG. 6O, a subtractive etching process (described above) forms multiple inductor traces 625 at level N+1, transferring the lithography pattern formed in the previous operation into the preliminary metallization feature 619. The inductor traces 625 may have the concave side walls and trapezoidal profiles characteristic of an isotropic etching process, as described above.
The etching process simultaneously forms a pad 626 on top of the pillar 615, so that the inductor traces 625 may be coupled to the pillar 615 through a vertical interconnect path. The ring structure 627 extends from the inside of the opening 623, over the side wall 628, onto the dielectric 622.

In FIG. 6P, a magnetic material 629 is formed in the opening 623, covering the inductor traces 625. The magnetic material 629 may have the same or a similar composition and magnetic properties as the magnetic material 617 (eg, a relative permeability of 5 to 10) and may extend between the inductor traces 625 to contact the magnetic material 617. The inductor traces are thereby fully sealed within a continuous magnetic inductor core, forming the inductor structure 630 (drawn with a dashed outline).

In FIG. 6Q, a dielectric 631 is formed on the package substrate stack 601, fully embedding the inductor structure 630 (dashed outline) in the package dielectric. Metallization N+2 is formed over the dielectric 631 to complete the package 600. A via 632 is formed, extending from the via pad 633 at the upper level (eg, level N+2) to the via pad 620 at level N+1. The via 632 may be part of a power-delivery path within the package 600.

FIG. 6R shows a plan view of the metallization level N+1 in the x-y plane. The line A-A' crossing the plan view marks the position of the cross section shown in FIG. 6Q. As shown in FIG. 6R, the inductor traces 625 are arranged in a continuous serpentine layout, configured to form a single inductor trace 634 (dashed outline). The pad 626 terminates the inductor trace 634 and, in certain embodiments, may be interconnected to a higher level of metallization by forming a via on the pad 626. As shown in FIG. 6Q, the via pad 620 is interconnected with the top-level metallized pad 633. The ring structure 627 is shown electrically isolated from the inductor trace 634.

FIG. 7 shows a cross-sectional view, in the x-z plane, of an example component-mounted assembly 700 having the package substrate 600 incorporated in an IC package 701, according to an embodiment of the present disclosure. The upper-level package metallization feature 633 is interfaced with the host component 702, at host component pads 704, by second-level interconnects 703 (eg, solder). The IC die 705 is interfaced to the opposite side of the package substrate 600. First-level interconnects 706 (eg, solder) couple the bottom-level package metallization features 707 to interconnect pads 708 on the IC die 705. In the example shown, power may be delivered from power source 709 through the PCB 702 to the die 705 via the inductor structure 630.

The inductor structure 630 may be used as part of a fully integrated voltage regulator (FIVR) circuit; embedding the entire inductor structure 630 in the package substrate allows a larger inductor and/or a larger magnetic core than can be made on a die. As a result, the buck conversion circuitry on the die can operate at lower switching frequencies, relaxing power-delivery design rules on both the die and the package substrate. In other examples, the inductor structure may be part of an RF oscillator tank circuit or an RF filter circuit.

FIG. 8 shows a block diagram of a computing device 800, part of a system-on-chip (SoC) package implementing a package-integrated inductor according to an embodiment of the present disclosure. In certain embodiments, the computing device 800 represents, but is not limited to, a server, a desktop workstation, a laptop computer, a computing tablet, a cell phone or smartphone, a wireless-enabled e-reader, or another wireless mobile device. In certain embodiments, the computing device 800 has wireless connectivity (eg, Bluetooth®, WiFi, and 5G networks).
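The trade-off noted above, where a larger package-embedded inductor permits a lower switching frequency, follows from the standard peak-to-peak ripple-current relation of an ideal buck converter in continuous conduction, ΔI = (Vin − Vout)·D / (L·fsw) with D = Vout/Vin. The operating point below (1.8 V in, 0.9 V out, 10 nH, 100 MHz) is an assumed FIVR-like example for illustration, not a value from the disclosure.

```python
def buck_ripple_current(v_in: float, v_out: float, l_h: float, f_sw_hz: float) -> float:
    """Peak-to-peak inductor ripple current of an ideal buck converter in
    continuous conduction mode: dI = (Vin - Vout) * D / (L * fsw), D = Vout/Vin."""
    duty = v_out / v_in
    return (v_in - v_out) * duty / (l_h * f_sw_hz)

# Assumed operating point: 1.8 V in, 0.9 V out, 10 nH inductor, 100 MHz switching.
ripple_small_l = buck_ripple_current(1.8, 0.9, 10e-9, 100e6)      # 0.45 A p-p

# Quadrupling L (feasible with a package-embedded core) allows fsw to drop
# by 4x while holding the same ripple current:
ripple_big_l_low_f = buck_ripple_current(1.8, 0.9, 40e-9, 25e6)   # also 0.45 A p-p
```

Because ripple scales as 1/(L·fsw), any inductance gained by moving the inductor off-die into the package can be traded directly for switching frequency, which is the relaxation of the power-delivery design rules described above.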
It is understood that certain components are shown generally, and that not all components of such a device are shown in the computing device 800.

Various embodiments of the present disclosure may also have a network interface 870, such as a wireless interface, so that system embodiments may be incorporated into a wireless device such as a mobile phone or personal digital assistant. The wireless interface includes a millimeter-wave generator and an antenna array. The millimeter-wave generator may be part of a monolithic microwave integrated circuit.

In certain embodiments, processor 810 represents a CPU or GPU and may include one or more physical devices such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing means. Processor 810 may include any one of the package substrates having an embedded inductor structure as disclosed (eg, any one of the package substrates 200, 300, 400, 500, or 600). The processing operations performed by processor 810 include running the operating platform or operating system on which application and/or device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or other devices, operations related to power management, and/or operations related to connecting the computing device 800 to other devices. The processing operations may further include operations related to audio I/O and/or display I/O.

In certain embodiments, the computing device 800 has an audio subsystem 820, which represents hardware (eg, audio hardware and audio circuits) and software (eg, drivers, codecs) components that provide audio functions to the computing device. Audio functions may include speaker and/or headphone output, as well as microphone input. Devices for such functions may be integrated into, or connected to, the computing device 800.
In certain embodiments, a user interacts with the computing device 800 by providing audio commands that are received and processed by the processor 810.

The display subsystem 830 represents hardware (eg, display device) and software (eg, driver) components that provide a visual and/or tactile display for a user to interact with the computing device 800. The display subsystem 830 has a display interface 832, which includes the particular screen or hardware device used to provide a display to the user. In certain embodiments, the display interface 832 includes logic separate from the processor 810 that performs at least some display-related processing. In certain embodiments, the display subsystem 830 includes a touch-screen (or touch-pad) device that provides both output and input to the user.

The I/O controller 840 represents hardware devices and software components associated with user interaction. The I/O controller 840 operates to manage hardware that is part of the audio subsystem 820 and/or the display subsystem 830. Further, the I/O controller 840 represents a connection point for additional devices that connect to the computing device 800, through which a user can interact with the system. For example, devices that can be attached to the computing device 800 may include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices used with specific applications, such as card readers or other devices.

As mentioned above, the I/O controller 840 can interact with the audio subsystem 820 and/or the display subsystem 830. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of the computing device 800. An audio output can also be provided in place of, or in addition to, a display output.
In another example, if the display subsystem 830 has a touch screen, the display can also act as an input device, managed at least in part by the I/O controller 840. The computing device 800 may also have additional buttons or switches providing I/O functions managed by the I/O controller 840.

In certain embodiments, the I/O controller 840 manages devices such as accelerometers, cameras, optical sensors or other environmental sensors, or other hardware that may be included in the computing device 800. The input can be part of direct user interaction and can also provide environmental input to the system to influence its operation (eg, filtering noise, adjusting a display for brightness detection, applying a flash for a camera, or other features).

In one embodiment, the computing device 800 has power management 850, which manages features related to battery power usage, battery charging, and power-saving operation. The memory subsystem 860 has memory devices for storing information in the computing device 800. The memory can include non-volatile memory devices (whose state does not change when power to the memory device is interrupted) and/or volatile memory devices (whose state is indeterminate when power to the memory device is interrupted). The memory subsystem 860 can store application data, user data, music, photos, documents, or other data (long-term or temporary), as well as system data related to the execution of the applications and functions of the computing device 800.

Elements of the embodiments may also be provided as a machine-readable medium (eg, memory 860) for storing computer-executable instructions. The machine-readable medium (eg, memory 860) may include, but is not limited to, flash memory, optical discs, CD-ROMs, DVD-ROMs, RAM, EPROM, EEPROM, magnetic or optical cards, phase-change memory (PCM), or other types of machine-readable media suitable for storing electronic or computer-executable instructions.
For example, embodiments of the present disclosure may be downloaded as a computer program (eg, BIOS) transferred from a remote computer (eg, a server) to a requesting computer (eg, a client) by way of data signals over a communication link (eg, a modem or network connection).

Connectivity via the network interface 870 includes hardware devices (eg, wireless and/or wired connectors and communication hardware), as well as software components (eg, drivers, protocol stacks), through which the computing device 800 can communicate with external devices. The computing device 800 may communicate with separate devices such as other computing devices, wireless access points, or base stations, as well as peripherals such as headsets, printers, or other devices.

The network interface 870 may have several different types of connectivity. For generality, the computing device 800 is shown with cellular connectivity 872 and wireless connectivity 874. Cellular connectivity 872 generally represents cellular network connectivity provided by a wireless carrier, such as via GSM (Global System for Mobile Communications) or variants or derivatives, CDMA (Code Division Multiple Access) or variants or derivatives, TDM (Time Division Multiplexing) or variants or derivatives, or other cellular service standards. Wireless connectivity (or wireless interface) 874 represents non-cellular wireless connectivity and may include personal area networks (such as Bluetooth® and near field), local area networks (such as Wi-Fi), and/or wide area networks (such as WiMax), or other wireless communication.

Peripheral connections 880 include hardware interfaces and connectors, as well as software components (eg, drivers, protocol stacks), for forming peripheral connections. It is understood that the computing device 800 may be a peripheral device ("to" 882) to another computing device, and may also have peripheral devices ("from" 884) connected to it.
Typically, the computing device 800 has a "docking" connector for connecting to other computing devices for purposes such as managing content on the computing device 800 (eg, downloading and/or uploading, modifying, synchronizing). The docking connector can also connect the computing device 800 to certain peripherals, allowing the computing device 800 to control content output, for example, to audiovisual or other systems.

In addition to a proprietary docking connector or other proprietary connection hardware, the computing device 800 can form a peripheral connection 880 via common or standards-based connectors. Common types can include a universal serial bus (USB) connector (which can include any of a number of different hardware interfaces), a DisplayPort including Mini DisplayPort (MDP), a High-Definition Multimedia Interface (HDMI®), FireWire, or other types.

In one or more embodiments, specific features, structures, functions, or characteristics may be combined in any suitable manner. For example, a first embodiment can be combined with a second embodiment wherever the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive.

Although the present disclosure has been described in conjunction with specific embodiments, many alternatives, modifications, and variations of such embodiments will be apparent to those of ordinary skill in the art. The embodiments of the present disclosure are intended to embrace all such alternatives, modifications, and variations as fall within the broad scope of the appended claims.

In addition, for simplicity of illustration and description, and in order not to obscure the disclosure, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown in the presented drawings.
Further, arrangements may be shown in block diagram form in order to avoid obscuring the disclosure, and also in view of the fact that specifics with respect to the implementation of such block diagram arrangements are highly dependent upon the platform within which the present disclosure is to be implemented (ie, such specifics are well within the purview of one skilled in the art). Where specific details (eg, circuits) are set forth in order to describe example embodiments of the disclosure, it will be apparent to one skilled in the art that the disclosure can be practiced without, or with variations of, these specific details. The description is thus to be regarded as illustrative rather than restrictive.

The following examples pertain to further embodiments. The specifics in the examples may be used anywhere in one or more embodiments. All optional features of the apparatus described herein may also be implemented with respect to a method or process.

Example 1 is an integrated circuit (IC) package substrate, comprising: a magnetic material embedded within a dielectric material, wherein a first surface of the dielectric material is below the magnetic material and a second surface of the dielectric material, opposite the first surface, is above the magnetic material; a first metal feature embedded within the magnetic material; and a metallization level having a second metal feature at an interface between the magnetic material and the dielectric material, wherein the second metal feature has a first side wall in contact with the dielectric material and a second side wall in contact with the magnetic material.

Example 2 includes all the features of Example 1, wherein the second metal feature completely surrounds the first metal feature and extends along a perimeter of the magnetic material.

Example 3 includes all the features of Example 1 or 2, wherein the metallization level has a multi-layer material stack comprising a first metal on a second metal, wherein the second metal has a higher magnetic permeability than the first metal.

Example 4 includes all the features of any one of Examples 1 to 3, wherein the metallization level further has a third metal feature embedded in a portion of the dielectric material laterally adjacent to a side wall of the magnetic material, and a side wall of the third metal feature has less lateral undercut than the second metal feature.

Example 5 includes all the features of Example 4, wherein the second metal feature is one of a plurality of second metal features embedded within the magnetic material, the third metal feature is one of a plurality of third metal features embedded within the portion of the dielectric material, the second metal features have a first pitch, the third metal features have a second pitch, and the second pitch is smaller than the first pitch.

Example 6 includes all the features of Example 4 or 5, wherein the first side wall has less lateral undercut than the second side wall.

Example 7 includes all the features of any one of Examples 4 to 6, wherein the second metal feature has a larger thickness than the third metal feature.

Example 8 includes all the features of any one of Examples 1 to 7, wherein a side wall of the magnetic material has a tilt of at least 45° from a plane of the metallization level.

Example 9 includes all the features of any one of Examples 1 to 8, wherein the metallization level is an upper metallization level and the substrate further has a lower metallization level, the lower metallization level having a lower metal feature between a bottom of the magnetic material and the first surface of the dielectric material, wherein the lower metal feature has a larger lateral dimension than the portion of the magnetic material in contact with the lower metal feature.

Example 10 is an integrated circuit (IC) package assembly, comprising: a power supply mounted on a host circuit board; and an IC package substrate electrically coupled to the host circuit board through an inductor embedded within the IC package substrate, the IC package substrate comprising: a magnetic material embedded within a dielectric material, wherein a first surface of the dielectric material is below the magnetic material and a second surface of the dielectric material, opposite the first surface, is above the magnetic material; an element of the inductor embedded within the magnetic material; and a metallization level having a metal feature at an interface between the magnetic material and the dielectric material, wherein the metal feature has a first side wall in contact with the dielectric material and a second side wall in contact with the magnetic material.

Example 11 includes all the features of Example 10, wherein the inductor has a planar structure with a serpentine layout embedded within the magnetic material.

Example 12 includes all the features of Example 10 or 11, wherein the metal feature completely surrounds the element of the inductor and extends along a perimeter of the magnetic material.

Example 13 includes all the features of any one of Examples 10 to 12, wherein the metallization level has a multi-layer stack comprising a first metal on a second metal, the second metal having a higher magnetic permeability than the first metal.

Example 14 is a method of fabricating an integrated circuit (IC) package substrate, comprising: forming one or more metallization layers embedded within a dielectric material, wherein at least one of the metallization layers is patterned into a preliminary metal feature; forming an opening through the dielectric material, the opening exposing a portion of the preliminary metal feature; placing a dry-film resist over the portion of the preliminary metal feature; patterning the preliminary metal feature into a first metal feature based on the pattern of the dry-film resist; forming a magnetic material within the opening and over the first metal feature; and forming a dielectric material over the magnetic material.

Example 15 includes all the features of Example 14, wherein forming the opening through the dielectric material comprises laser-drilling the opening in the dielectric above the preliminary metal feature.

Example 16 includes all the features of Example 14 or 15, wherein forming the one or more metallization layers embedded within the dielectric material comprises a semi-additive process for the preliminary metal feature, and one or more side walls of the preliminary metal feature have a tilt of 10° or less from a plane of the preliminary metal feature.

Example 17 includes all the features of any one of Examples 14 to 16, wherein patterning the preliminary metal feature into the first metal feature comprises subtractive removal of metal from the preliminary metal feature, and one or more side walls of the first metal feature have a tilt between 45° and 85° from the plane of the preliminary metal feature.

Example 18 includes all the features of any one of Examples 14 to 17, wherein the first metal feature has a serpentine trace comprising a plurality of parallel traces and a ring structure surrounding the serpentine trace, and the serpentine trace and the side walls of the ring structure adjacent to the serpentine trace are formed by subtractive removal of metal from the preliminary metal feature in a wet metal etching bath according to the pattern of the dry-film resist.

Example 19 includes all the features of any one of Examples 14 to 18, wherein the side walls of the plurality of parallel traces of the serpentine trace, and the side walls of the ring structure adjacent to the serpentine trace, have a tilt between 45° and 85° from the plane of the preliminary metal feature.

Example 20 includes all the features of any one of Examples 14 to 19, wherein the preliminary metal feature is a first preliminary metal feature at a first conductivity level flush with the bottom of the opening, a second preliminary metal feature is at a second conductivity level on the dielectric material above the first conductivity level, and a plurality of second metal features are formed at the same time as the first metal feature by subtractive removal of metal from the second preliminary metal feature.

Example 21 includes all the features of any one of Examples 14 to 20, wherein the first metal feature has a plurality of parallel traces, and one of the plurality of parallel traces
Separated by a minimum pitch, the plurality of second metal features are separated by a second minimum pitch, the first minimum pitch being substantially equal to the second minimum pitch.Example 22 includes all of the features of any one of Examples 14-18, and further, by a subtraction-addition process, a plurality of second metallic features on top of the dielectric material adjacent to the magnetic material. It has a step of forming an object, and the plurality of second metal features are formed by forming a metal on a photoresist-patterned opening or a dielectric material.Example 23 includes all of the features of any one of Examples 14-22, and further, the sidewalls of the plurality of parallel traces are separated by a first minimum pitch and the plurality of second metal features. The side walls of the object are separated by a second minimum pitch, the first minimum pitch being greater than the second minimum pitch.Example 24 includes all of the features of any one of Examples 14 to 23, and the step of forming the magnetic material in the opening includes the step of laminating the magnetic foil on the first dielectric and the step of laminating the magnetic foil. It has a step of forming a second dielectric on the magnetic foil and a step of forming the opening in the second dielectric over the magnetic foil to expose a part of the magnetic foil.Example 25 includes all of the features of any one of Examples 14 to 23, wherein there is a copper foil on the magnetic foil, which is joined to the magnetic foil, wherein the magnetic foil is a first magnetic material. The copper foil and the magnetic foil are simultaneously patterned and have a first layer having the first magnetic material and a second layer above the first layer having copper. , A serpentine having a plurality of parallel traces is formed, a second magnetic material is formed on the plurality of parallel traces, and the copper in the second layer is the first in the first layer. 
It is sealed with a magnetic material and a second magnetic material on top of the copper.A summary is submitted and it is understood that it is not used to limit the scope or meaning of the claims. The following claims are incorporated into the detailed description, and each claim exists as a separate embodiment in itself.201 Board Stack 202 Dielectric 203 Metallization Features 204 Top Surface 205 Opening 206 Side Sides 207 Magnetic Material 209 Pre-Metallic Features 212 Dielectric 214 Openings 218 Openings 220 Inductor Trace 221 Ring Structure 227 Magnetic Material 230 Dielectric 231 Serpentine Trace Structure
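As a back-of-the-envelope illustration of the sidewall geometry recited in the examples above (45°–85° inclinations and relative lateral undercut), the following sketch models an idealized straight side wall. The function name and the thickness values are illustrative assumptions, not figures taken from the examples:

```python
import math

def lateral_undercut(thickness_um: float, inclination_deg: float) -> float:
    """Horizontal setback of an idealized straight side wall.

    Assumes the wall is a plane inclined at `inclination_deg` from the
    feature plane; a steeper wall (closer to 90 deg) sets back less.
    """
    return thickness_um / math.tan(math.radians(inclination_deg))

# A 45 deg wall on a 15 um thick feature sets back as far as it is thick,
# while an 85 deg wall on the same feature sets back only about 1.3 um --
# consistent with the examples' preference for steeper, less-undercut walls.
print(round(lateral_undercut(15.0, 45.0), 2))  # -> 15.0
print(round(lateral_undercut(15.0, 85.0), 2))  # -> 1.31
```

Under this idealization, a thicker feature etched at the same wall angle undercuts more, which matches the pairing of Examples 4 and 7, where the thicker second metal feature also shows more lateral undercut.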
The present technology relates to systems and methods for cleaning a compression molding system. In particular, a system including a first roller carrying a cleaning tape is provided. The system further includes a second roller configured to dispense the cleaning tape along a compression molding structure. The tape is removably adhered to the structure and subsequently removed, thereby removing foreign debris such as dust and/or other particles from the compression molding structure.
1. A system for cleaning a compression-molded structure, the system comprising:
a loading roller configured to carry and dispense cleaning tape;
an unloading roller configured to attach to a first region of the cleaning tape and to move in a first direction from a first position adjacent a first end portion of the compression-molded structure to a second position adjacent a second end portion of the compression-molded structure, wherein moving from the first position to the second position dispenses the cleaning tape between the first end portion and the second end portion;
a vacuum support tube configured to releasably secure a second region of the dispensed cleaning tape adjacent the first end portion; and
an attachment roller configured to apply pressure to at least a segment of the dispensed cleaning tape to removably adhere the segment to the compression-molded structure.
2. The system of claim 1, wherein the unloading roller is further configured to move from the second position to a third position closer to the compression-molded structure, the first region of the cleaning tape being releasably secured in the third position.
3. The system of claim 2, wherein when the unloading roller is in the third position, the loading roller is tensioned to prevent further dispensing of the cleaning tape.
4. The system of claim 1, wherein the attachment roller is configured to removably adhere the segment to the compression-molded structure by pressing an adhesive surface of the dispensed cleaning tape against at least a portion of the compression-molded structure.
5. The system of claim 1, wherein the unloading roller is further configured to move in a second direction, and wherein moving in the second direction removes the dispensed cleaning tape removably adhered to the compression-molded structure.
6. The system of claim 5, wherein the unloading roller is configured to rotate as it moves in the second direction such that at least a portion of the dispensed cleaning tape wraps around the unloading roller.
7. The system of claim 5, wherein removing the dispensed cleaning tape from the compression-molded structure removes debris attached to the compression-molded structure.
8. The system of claim 5, wherein the unloading roller is further configured to return to the first position and the vacuum support tube is further configured to release the releasably secured second region of the cleaning tape.
9. The system of claim 1, wherein the loading roller has a first tensiometer and the unloading roller has a second tensiometer, and wherein the first and second tensiometers are configured to detect anomalies during operation of the system.
10. The system of claim 1, further comprising a backup roller configured to position the dispensed cleaning tape adjacent the first end portion.
11. A method for cleaning a compression-molded structure, the method comprising:
providing a loading roller carrying cleaning tape;
moving an unloading roller attached to a first region of the cleaning tape from a first position adjacent a first end portion of the compression-molded structure to a second position adjacent a second end portion of the compression-molded structure, wherein moving the unloading roller from the first position to the second position dispenses a segment of the cleaning tape along the length of the compression-molded structure;
removably adhering the segment of the cleaning tape to the compression-molded structure by applying pressure to the segment with an attachment roller; and
removing the segment of the cleaning tape removably adhered to the compression-molded structure by moving the unloading roller from the second position toward the first position.
12. The method of claim 11, wherein the first region of the cleaning tape is removably adhered to the unloading roller as the unloading roller moves from the first position to the second position of the compression-molded structure.
13. The method of claim 11, wherein the attachment roller moves along a first surface of the segment of cleaning tape such that the attachment roller presses a second, adhesive surface of the cleaning tape against the compression-molded structure.
14. The method of claim 11, wherein moving the unloading roller from the second position toward the first position comprises rotating the unloading roller such that at least a portion of the segment of cleaning tape removed from the compression-molded structure is wound around the unloading roller.
15. A method for cleaning a compression-molded structure, the method comprising:
providing a loading roller carrying cleaning tape;
moving an unloading roller attached to a first region of the cleaning tape from a first position adjacent a first end portion of the compression-molded structure to a second position adjacent a second end portion of the compression-molded structure, wherein moving the unloading roller from the first position to the second position places a segment of the cleaning tape along the length of the compression-molded structure;
releasably securing a second region of the cleaning tape adjacent the first end portion;
releasably securing the first region of the cleaning tape adjacent the second end portion;
removably adhering the segment of the cleaning tape to the compression-molded structure by applying pressure to the segment with an attachment roller; and
removing the segment of the cleaning tape removably adhered to the compression-molded structure by moving the unloading roller from the second position toward the first position.
16. The method of claim 15, wherein the second region of the cleaning tape is releasably secured near the first end portion of the compression-molded structure by a vacuum support tube, and the first region of the cleaning tape is releasably secured near the second end portion of the compression-molded structure by the unloading roller.
17. The method of claim 16, further comprising releasing the second region of the cleaning tape releasably secured by the vacuum support tube.
18. The method of claim 15, wherein the attachment roller removably adheres the segment of the cleaning tape to the compression-molded structure.
19. The method of claim 15, wherein removing the segment of the cleaning tape removably adhered to the compression-molded structure removes foreign debris attached to the compression-molded structure.
Foreign Body Cleaning System for Compression Molding

Technical Field

The present technology relates to systems and methods for cleaning compression-molded structures.

Background

Many packaged microelectronic devices have a substrate, a microelectronic die attached to the substrate, interconnects (e.g., wires) between the die and the substrate, and a protective covering or encapsulant surrounding the die and its interconnects. The protective covering is typically a plastic or epoxy compound that can be molded to form a housing over the die and its interconnects. The microelectronic die may be a memory device, a microprocessor, or another type of microelectronic assembly with an integrated circuit system. Many types of packaged devices also include bond pads on the substrate that are coupled to the integrated circuit system of the die. The bond pads may in turn be coupled to pins or other types of terminals exposed on the outside of the microelectronic device for connecting the die to buses, circuits, and/or other microelectronic assemblies.

A notably limiting process in the manufacture of packaged microelectronic devices is the encapsulation of the die with a protective covering. The dies and interconnects are sensitive components that should be protected from physical contact and potentially harmful environmental conditions to avoid damage. A protective casing enclosing the dies and interconnects should therefore shield them from the external environment and protect them from electrical and mechanical shocks, and should not have any voids through which contaminants or other harmful agents could contact and potentially damage the dies and interconnects.

One conventional technique for encapsulating dies and interconnects is compression molding.
During the compression molding process, the substrate and die are loaded onto the upper block of the compression molded structure, and the molding compound is loaded onto the lower block of the compression molded structure. The lower block is moved upward toward the upper block so that the die and interconnects are immersed in the molding compound. Once the die and interconnects are fully encapsulated, the upper block is separated from the lower block, and the encapsulated die is removed.However, foreign debris, such as dust or other particles, may accumulate on one or more surfaces of the compression-molded structure. This build-up may affect the viability of the compression molding process. For example, foreign objects may cause substrate indentation and/or die crack failures during the molding process.Conventional methods of cleaning compression-molded structures have a number of disadvantages. For example, one conventional cleaning method uses a vacuum cleaning system that includes a vacuum tube and brushes. However, such vacuum cleaning systems may introduce dust and/or particles into the compression-molded structure through the brush and/or may otherwise damage the compression-molded structure. Another method is to use a hardened sheet to manually scrape along the surface of the compression-molded structure. However, this approach may damage the compression-molded structure through physical deformation and does not capture dust and/or other particles removed from the surface. 
Accordingly, there is a need for improved systems and methods for cleaning compression-molded structures.

SUMMARY OF THE INVENTION

In one aspect, the present application provides a system for cleaning a compression-molded structure, the system comprising: a loading roller configured to carry and dispense cleaning tape; an unloading roller configured to attach to an end region of the cleaning tape and to move in a first direction from a first position adjacent a first end portion of the compression-molded structure to a second position adjacent a second end portion of the compression-molded structure, wherein moving from the first position to the second position dispenses the cleaning tape between the first end portion and the second end portion; a vacuum support tube configured to releasably secure an area of the dispensed cleaning tape proximate the first end portion; and an attachment roller configured to apply pressure to at least one segment of the dispensed cleaning tape to removably adhere the segment to the compression-molded structure.

In another aspect, the present application further provides a method for cleaning a compression-molded structure, the method comprising: providing a loading roller carrying cleaning tape; moving an unloading roller attached to an end region of the cleaning tape from a first position adjacent a first end portion of the compression-molded structure to a second position adjacent a second end portion of the compression-molded structure, wherein moving the unloading roller from the first position to the second position dispenses a segment of the cleaning tape along the length of the compression-molded structure; removably adhering the segment of the cleaning tape to the compression-molded structure; and removing the segment of cleaning tape removably adhered to the compression-molded structure by moving the unloading roller from the second position toward the first position.

In yet another aspect, the present application further provides a method for cleaning a compression-molded structure, the method comprising: providing a loading roller carrying cleaning tape; moving an unloading roller attached to an end region of the cleaning tape from a first position adjacent a first end portion of the compression-molded structure to a second position adjacent a second end portion of the compression-molded structure, wherein moving the unloading roller from the first position to the second position places a segment of the cleaning tape along the length of the compression-molded structure; releasably securing a first region of the cleaning tape adjacent the first end portion; releasably securing the end region of the cleaning tape adjacent the second end portion; removably adhering the segment of the cleaning tape to the compression-molded structure with an attachment roller; and removing the segment of the cleaning tape removably adhered to the compression-molded structure by moving the unloading roller from the second position toward the first position.

Brief Description of the Drawings

FIGS. 1A-1G are partial schematic cross-sectional side views of a compression molding apparatus for encapsulating a microelectronic device using a molding process.

FIGS. 2A-2E are cross-sectional side views of a cleaning system in accordance with one embodiment of the present technology.

FIG. 3 is a flow diagram of a method of cleaning a compression-molded structure in accordance with one embodiment of the present technology.

FIG. 4 is a flow diagram of a method of cleaning a compression-molded structure in accordance with another embodiment of the present technology.

Detailed Description

Specific details of various embodiments of systems, methods, and apparatus for cleaning compression-molded structures are described herein. FIGS. 1A-1G illustrate a compression molding structure for encapsulating a semiconductor die. FIG. 
1A shows a compression-molded structure 100 that includes a top locking block 102 and a bottom cavity 104. In FIG. 1B, a strip 106 containing the substrate and semiconductor die has been loaded into the compression-molded structure 100 such that the strip 106 is juxtaposed with the surface of the top locking block 102. As shown in FIG. 1C, the molding compound can then be loaded into the compression molding structure 100 such that the molding compound occupies the space adjacent the bottom cavity 104. Once the molding compound 108 has been added to the compression-molded structure 100, the molding compound 108 can be heated and the top locking block 102 and/or the bottom cavity 104 can be moved toward each other, as shown in FIG. 1D. As further shown in FIG. 1E, the space between the top locking block 102 and the bottom cavity 104 may be reduced until the strip 106 is immersed in the heated molding compound 108. Finally, as shown in FIG. 1F, the top locking block 102 and the bottom cavity 104 can be moved away from each other and the strip 106 encapsulated by the molding compound 108 can be removed from the compression-molded structure 100.

However, foreign debris, such as dust and/or other particles, may accumulate on one or more surfaces of the compression-molded structure. FIG. 1G illustrates one possibility in which foreign debris 110 collects on the surface of the top locking block 102. As recognized in the art, the accumulation of foreign debris on the top locking block can cause substrate indentation and possibly die crack failures during a compression molding cycle.

FIGS. 2A-2E illustrate one embodiment of a cleaning system 200 and a process for cleaning a surface 207 of a compression-molded structure 202. As will be discussed in further detail herein, the cleaning system 200 includes a loading roller 204 that is configured to carry and dispense cleaning tape 214. 
The cleaning system 200 further includes an unloading roller 206 configured to attach to the end region of the cleaning tape 214 . The unload roll 206 is configured to move from a first position adjacent the first end portion 203 of the compression-molded structure 202 to a second position adjacent the second end portion 205 of the compression-molded structure 202 . By moving from the first position to the second position, the unloading roller 206 dispenses the cleaning tape 214 along the surface 207 of the compression-molded structure 202 . In some embodiments, the system 200 may also include an attachment roll 208 , a vacuum support tube 210 , a support roll 212 , a first gauge 216 and a second gauge 218 .The cleaning tape 214 may have a first surface and a second surface. In some embodiments, the first surface may be an adhesive surface and the second surface may be a non-adhesive surface. In such embodiments, the adhesive surface can be removably adhered to another surface, such as surface 207 of compression-molded structure 202 . In other embodiments, the first surface and the second surface may be adhesive. Cleaning tape 214 may be any tape suitable for cleaning compression molded structures. A non-limiting example of a suitable tape is a high temperature tape, such as 3M Polyimide Film Tape 5413.Figure 2A illustrates cleaning system 200 during a first stage of one embodiment of a cleaning process. In some embodiments, the configuration of cleaning system 200 in FIG. 2A is the starting position of the cleaning system. As shown in FIG. 2A , both the loading roll 204 and the unloading roll 206 may be positioned adjacent the first end portion 203 of the compression-molded structure 202 . The first end region of the cleaning tape 214 may be attached to the unloading roll 206 such that the cleaning tape 214 extends between the loading roll 204 and the unloading roll 206 . 
The support roller 212 may optionally be positioned between the loading roller 204 and the unloading roller 206 such that the cleaning tape 214 extending between the loading roller 204 and the unloading roller contacts the support roller 212. As shown in FIG. 2A, the support roller 212 may guide the cleaning tape 214 dispensed from the loading roller 204 toward the first end portion 203 of the compression-molded structure 202. However, in some embodiments, the support roller 212 is not included, and the loading roller 204 and unloading roller 206 are configured such that tape dispensed from the loading roller 204 is directed toward the first end portion 203 of the compression-molded structure 202.

As further shown in FIG. 2A, the attachment roller 208 may be positioned spaced apart from the loading roller 204 and the compression-molded structure 202. As will be discussed in more detail below, the attachment roller 208 may be configured to apply pressure on at least a segment of the cleaning tape 214 dispensed between the first end portion 203 and the second end portion 205 of the compression-molded structure 202. By applying pressure on the cleaning tape 214, the attachment roller 208 removably adheres the adhesive side of the cleaning tape 214 to the surface 207 of the compression-molded structure 202.

As further shown in FIG. 2A, a vacuum support tube 210 may be positioned between the unloading roller 206 and the support roller 212. In embodiments without the support roller 212, the vacuum support tube 210 may be positioned between the loading roller 204 and the unloading roller 206. In either embodiment, the vacuum support tube 210 may be adjacent to the cleaning tape 214 and configured to releasably secure an area of the dispensed cleaning tape 214 by suction.

Also, as shown in FIG. 2A, the system may include one or more gauges 216 and 218 (e.g., sensors). These gauges can be configured to monitor the cleaning process. 
If there is a problem with the process (e.g., roller failure, tape tear, etc.), the gauges can terminate the cleaning process to prevent damage to the compression-molded structure. For example, in the embodiment depicted in FIGS. 2A-2E, the loading roller may include a first tensiometer 216 configured to detect anomalies during system operation, and the unloading roller may include a second tensiometer 218 configured to detect anomalies during system operation. In other embodiments, the sensor or gauge may be positioned at some other location in the system.

Figure 2B illustrates the system 200 during the second stage of one embodiment of the cleaning process. In FIG. 2B, the unloading roller 206 moves in a first direction from a first position adjacent the first end portion 203 of the compression-molded structure 202 toward a second position adjacent the second end portion 205 of the compression-molded structure 202. In some embodiments, the unloading roller 206 does not rotate while moving in the first direction. Thus, because the unloading roller 206 is attached to the first end region of the cleaning tape 214, as the unloading roller 206 moves toward the second position, the unloading roller unwinds a segment of the cleaning tape 214 from the loading roller 204, thereby increasing the length of cleaning tape 214 dispensed from the loading roller 204. In other embodiments, a portion of the unloading roller 206 may rotate as the unloading roller moves in the first direction. However, the portion of the unloading roller 206 that is attached to the cleaning tape 214 does not rotate, so a segment of the cleaning tape 214 is still unwound from the loading roller 204 as the unloading roller moves toward the second position. 
In yet other embodiments in which the unloading roller 206 rotates as it moves in the first direction, the unloading roller may carry the cleaning tape 214 and dispense the cleaning tape 214 as it moves from the first position to the second position.

Figure 2C illustrates the system 200 during the third stage of one embodiment of the cleaning process. In FIG. 2C, the unloading roller 206 has reached a second position adjacent the second end portion 205 of the compression-molded structure 202. In some embodiments, the unloading roller 206 may optionally be moved to a third position closer to the compression-molded structure 202. By moving to the third position, the unloading roller 206 forces the dispensed section of the cleaning tape 214 extending between the unloading roller 206 and the loading roller 204 closer to the compression-molded structure 202. In other embodiments, the unloading roller 206 is not moved to the third position, and the second position is close enough to the compression-molded structure 202 to place the dispensed section of the cleaning tape 214 extending between the unloading roller 206 and the loading roller 204 close against the compression-molded structure 202. When the unloading roller 206 is in its second or third position, the loading roller 204 may be tensioned to prevent further dispensing of the cleaning tape 214.

In some embodiments, the vacuum support tube 210 releasably secures the first region of the cleaning tape 214 adjacent the first end portion 203 once the unloading roller 206 has reached its second or third position. To this end, the vacuum support tube 210 may apply suction or another force suitable for securing the first region of the cleaning tape 214. Alternatively, the first region of the cleaning tape 214 may be releasably secured by another structure, such as the support roller 212 or the loading roller 204. 
Additionally, once the unloading roller 206 is in its second or third position, the unloading roller may also releasably secure a second region (e.g., an end region) of the cleaning tape 214. Releasably securing the first region of cleaning tape 214 near the first end portion 203 and releasably securing the second region of cleaning tape 214 near the second end portion 205 helps keep the tape in place near the surface 207.

Once a length of cleaning tape 214 spans from the first end portion 203 of the compression-molded structure 202 to the second end portion 205 of the compression-molded structure 202, the length of cleaning tape 214 is removably adhered to the surface 207. In some embodiments, the attachment roller 208 removably adheres the cleaning tape 214 to the surface 207. For example, the attachment roller 208 may move toward the segment of the cleaning tape 214 and the compression-molded structure 202 and apply pressure on the cleaning tape 214 such that the adhesive surface of the cleaning tape 214 presses against the surface 207 of the compression-molded structure 202, thereby removably adhering the cleaning tape 214 to the surface 207. The attachment roller 208 may further move or roll along the surface 207 of the compression-molded structure 202 to removably adhere to the surface 207 a larger segment of the cleaning tape 214 positioned between the attachment roller and the compression-molded structure 202. For example, the attachment roller may be moved from a first position adjacent the second end portion 205 of the compression-molded structure 202 to a second position adjacent the first end portion 203 of the compression-molded structure 202. In some embodiments, the attachment roller can be moved back and forth between the first and second positions to ensure that the cleaning tape 214 adheres smoothly and/or properly to the surface 207.

However, in some embodiments, the system 200 does not include an attachment roller. 
In such embodiments, the cleaning tape is adhered to the surface 207 by another mechanism. For example, the cleaning tape may be removably adhered to the surface 207 as the unloading roller moves from its first position adjacent the first end portion 203 to its second position adjacent the second end portion 205. In other embodiments, once the unloading roller is in its second position, the unloading roller and the loading roller can each be moved toward the compression-molded structure 202, thereby pressing the segment of the cleaning tape 214 that extends between the unloading roller and the loading roller against the surface 207.

Figure 2D illustrates the system 200 during the fourth stage of one embodiment of the cleaning process. In FIG. 2D, the attachment roller 208 has moved back toward its initial position spaced from the compression-molded structure 202. The unloading roller 206 is then moved back from its second or third position toward its first position. For example, the unloading roller 206 moves from the second end portion 205 toward the first end portion 203. As the unloading roller moves toward the first end portion 203, the unloading roller removes the cleaning tape 214 that is removably adhered to the surface 207. The unloading roller 206 may remove the cleaning tape 214 by rotating as the unloading roller moves toward the first end portion 203, thereby pulling the cleaning tape from the surface 207 and wrapping the removed section of the cleaning tape 214 around the rotating unloading roller 206. In other embodiments, the unloading roller 206 may move toward the first end portion 203 without rotating, but will still remove the cleaning tape 214 adhering to the surface 207.

Figure 2E illustrates the system 200 during the fifth stage of one embodiment of the cleaning process. In FIG. 2E, the unloading roller 206 has returned to a fourth position at or near its first position adjacent the first end portion 203 of the compression-molded structure 202.
Thus, the cleaning tape 214 has been removed from the surface 207 and wrapped around the unloading roller 206. The vacuum support tube 210, if present, can release its attraction to the cleaning tape 214, thereby returning the system 200 to a configuration substantially similar to its starting position shown in Figure 2A.

The process of adhering cleaning tape to a surface of a compression-molded structure and subsequently removing the cleaning tape from the surface as described herein may remove foreign debris, such as dust and other particles, from the surface. Thus, performing the cleaning process outlined in FIGS. 2A-2E using the system 200 prior to loading the strip containing the substrate and semiconductor die into the compression-molded structure can prevent and/or reduce the likelihood of failures such as substrate indentation and die cracking during the compression molding cycle.

In some embodiments, the system 200 is capable of repeating the cleaning process outlined in Figures 2A-2E without user intervention or assistance. For example, the system 200 may be capable of repeating the cleaning process one, two, three, four, five, or more times before requiring user intervention or assistance. In some embodiments, the limiting factor for the number of times the cleaning process can be repeated without user intervention is the amount of cleaning tape that the loading and/or unloading rollers can carry. For example, if the loading roller 204 runs out of cleaning tape 214, the user may replace the fully dispensed roll of cleaning tape with a new roll of unused cleaning tape. The user can also remove previously dispensed cleaning tape wrapped around the unloading roller 206.
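As a simple illustration of the tape-budget limit described above, the number of cycles a roll supports can be estimated by dividing the roll length by the tape consumed per cycle. The function and all numbers below are hypothetical assumptions, not values from this disclosure:

```python
# Hypothetical estimate of cleaning cycles per roll of tape; none of
# these numbers come from the disclosure.
def cleaning_cycles_per_roll(roll_length_m, structure_length_m, overhead_m):
    """Each cycle consumes one structure-length of tape plus an overhead
    for securing/wrapping the end regions."""
    per_cycle = structure_length_m + overhead_m
    if per_cycle <= 0:
        raise ValueError("per-cycle tape consumption must be positive")
    return int(roll_length_m // per_cycle)

# e.g., a 33 m roll, a 0.75 m structure, 0.25 m of overhead per cycle
print(cleaning_cycles_per_roll(33.0, 0.75, 0.25))  # → 33
```

Under these assumed figures, the loading roller would need replacement after a few dozen cycles, which is consistent with tape supply being the limiting factor for unattended operation.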
Furthermore, in some embodiments, the system 200 and the cleaning process may be automated such that cleaning the compression-molded structure 202 does not require user assistance.

FIG. 3 is a flow diagram of a method 300 for cleaning a compression-molded structure in accordance with selected embodiments of the present technology. The method 300 may include providing a loading roller that carries the cleaning tape (process portion 302). As previously mentioned, the cleaning tape can be any tape suitable for use in the environment of the compression-molded structure. For example, the cleaning tape may be a high-temperature tape (e.g., 3M Polyimide Film Tape 5413). The method 300 further includes moving the unloading roller attached to the end region of the cleaning tape from a first position adjacent the first end portion of the compression-molded structure to a second position adjacent the second end portion of the compression-molded structure. This movement dispenses a segment of the cleaning tape along the length of the compression-molded structure (process portion 304).

The method 300 continues by removably adhering the segment of the cleaning tape dispensed along the length of the compression-molded structure to the compression-molded structure (process portion 306). As previously discussed, adhesion can occur by various mechanisms. For example, an attachment roller may removably adhere the cleaning tape to the compression-molded structure by pressing the cleaning tape against the compression-molded structure along the length of the structure. Additionally or alternatively, the unloading roller may removably adhere the cleaning tape to the compression-molded structure when moved from the first position to the second position.
In yet another example, the cleaning tape may be removably adhered to the compression-molded structure by securing the end region of the cleaning tape to the unloading roller at the second end portion of the compression-molded structure and by securing a first region of the cleaning tape near the first end portion of the compression-molded structure. The two secured regions of the cleaning tape can then be moved toward the compression-molded structure until the segment of the tape located between the secured regions is juxtaposed with the compression-molded structure.

After removably adhering the cleaning tape to the compression-molded structure, the method 300 continues by removing the segment of the cleaning tape from the compression-molded structure by moving the unloading roller from the second position toward the first position (process portion 308). As previously mentioned, the unloading roller may remove the cleaning tape by rotating as it moves toward the first position, thereby pulling the cleaning tape from the compression-molded structure and wrapping the removed section of the cleaning tape around the rotating unloading roller. However, in other embodiments, the unloading roller may be moved toward the first position without rotation, but will still remove the cleaning tape adhering to the compression-molded structure.

FIG. 4 is a flowchart of a method 400 for cleaning a compression-molded structure in accordance with selected embodiments of the present technology. The method 400 may begin by providing a loading roller that carries the cleaning tape (process portion 402). The method 400 further includes moving the unloading roller attached to the end region of the cleaning tape from a first position adjacent the first end portion of the compression-molded structure to a second position adjacent the second end portion of the compression-molded structure.
This movement dispenses a segment of the cleaning tape along the length of the compression-molded structure (process portion 404).

The method 400 continues by releasably securing an area of the cleaning tape near the first end portion of the compression-molded structure (process portion 406). This area of the cleaning tape can be secured by, for example, a vacuum support tube. Additionally or alternatively, this region may be releasably secured by another structure, such as a support roller or the loading roller. The method 400 further includes releasably securing the end region of the cleaning tape adjacent the second end portion of the compression-molded structure (process portion 408). For example, the end region can be releasably secured by the unloading roller.

The method 400 continues by removably adhering the length of the cleaning tape to the compression-molded structure with an attachment roller (process portion 410). As previously discussed, the attachment roller may apply pressure on the cleaning tape such that the adhesive surface of the cleaning tape is pressed against the compression-molded structure, thereby removably adhering the cleaning tape to the compression-molded structure. The attachment roller may also be moved or rolled along the compression-molded structure to removably adhere to the structure a larger section of the cleaning tape positioned between the attachment roller and the compression-molded structure. For example, the attachment roller may be moved from a first position adjacent the second end portion of the compression-molded structure to a second position adjacent the first end portion of the compression-molded structure.
In some embodiments, the attachment roller can be moved back and forth between the first and second positions to ensure that the cleaning tape adheres smoothly and/or properly to the surface.

After removably adhering the cleaning tape to the compression-molded structure, the method 400 continues by removing the segment of the cleaning tape from the compression-molded structure by moving the unloading roller from the second position toward the first position (process portion 412). As previously mentioned, the unloading roller may remove the cleaning tape by rotating as it moves toward the first position, thereby pulling the cleaning tape from the compression-molded structure and wrapping the removed section of the cleaning tape around the rotating unloading roller. However, in other embodiments, the unloading roller may be moved toward the first position without rotation, but will still remove the cleaning tape adhering to the compression-molded structure.

As used herein and as can be understood from the foregoing discussion, the term "roller" refers to any device capable of performing the described function and does not limit the device to conventional tape rollers. In general, the rollers described herein will be capable of linear translation, rotational translation, or both linear and rotational translation. However, in some embodiments, the rollers may be completely stationary. For example, the loading roller may be any device capable of carrying and dispensing cleaning tape in accordance with the present techniques. In some embodiments, the loading roller may be configured to rotate about a central axis while remaining in a stationary position.
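The shared skeleton of methods 300 and 400 can be sketched as a short control sequence. The class, step names, and positions below are illustrative assumptions, not an implementation prescribed by the disclosure:

```python
# Illustrative control sequence for methods 300/400; actuator names,
# positions, and step labels are assumptions, not a prescribed design.
FIRST_END, SECOND_END = "first_end", "second_end"

class Roller:
    """Anything that can translate linearly and optionally rotate."""
    def __init__(self, name, position):
        self.name = name
        self.position = position
        self.rotating = False

    def move_to(self, position, rotate=False):
        self.position = position
        self.rotating = rotate

def clean_cycle():
    """One pass of the cleaning process; returns the ordered steps."""
    unloading = Roller("unloading", FIRST_END)
    steps = ["provide_loading_roller"]          # portion 302/402
    unloading.move_to(SECOND_END)               # portion 304/404: dispense tape
    steps.append("dispense_tape")
    steps.append("secure_first_region")         # portion 406: e.g., vacuum support tube
    steps.append("secure_end_region")           # portion 408: unloading roller holds tape end
    steps.append("adhere_tape")                 # portion 306/410: attachment roller presses tape
    unloading.move_to(FIRST_END, rotate=True)   # portion 308/412: peel and rewind tape
    steps.append("remove_tape")
    assert unloading.position == FIRST_END and unloading.rotating
    return steps

print(clean_cycle())
```

The `rotate=True` flag on the return stroke mirrors the text's point that the unloading roller rotates only when peeling tape off, so the removed section wraps around it.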
In other examples, the loading roller is capable of both rotational and linear translation.

Furthermore, the unloading roller may be any device capable of moving from a first position adjacent the first end portion of the compression-molded structure to a second position adjacent the second end portion of the compression-molded structure in accordance with the present techniques. The unloading roller may be capable of both rotational and linear translation. For example, in some embodiments, the unloading roller may be configured to move from the first position to the second position without rotation (i.e., linear translation), and to move from the second position to the third position while rotating about a central axis (i.e., both linear and rotational translation). In some embodiments, a portion of the unloading roller may rotate while moving from the first position to the second position, while a second portion attached to the cleaning tape does not rotate. This ensures that the cleaning tape will be dispensed along the length of the compression-molded structure. However, when the unloading roller removes the cleaning tape from the compression-molded structure, the unloading roller will rotate so that the removed cleaning tape wraps around the unloading roller.

Furthermore, as can be appreciated by those skilled in the art in light of this disclosure, the techniques of the present invention may be capable of cleaning more than one surface of a compression-molded structure. For example, the techniques of the present invention may be used to clean the top locking block of a compression-molded structure. However, the techniques of the present invention may also be used to clean the bottom cavity and/or any other surface of a compression-molded structure.

This disclosure is not intended to be exhaustive or to limit the technology to the precise form disclosed herein.
Although specific embodiments are disclosed herein for illustrative purposes, various equivalent modifications are possible without departing from the present technology, as those skilled in the art will recognize. In some instances, well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of embodiments of the present technology. Although the steps of a method may be presented herein in a particular order, alternative embodiments may perform the steps in a different order. Similarly, certain aspects of the technology disclosed in the context of particular embodiments may be combined or eliminated in other embodiments. Furthermore, while the advantages associated with certain embodiments of the present technology may have been disclosed in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages or other advantages disclosed herein in order to fall within the scope of the present technology. Accordingly, the present disclosure and related technology may encompass other embodiments not expressly shown or described herein.

Throughout this disclosure, the singular terms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Similarly, when referring to a list of two or more items, the use of "or" in such a list is to be interpreted as including (a) any single item in the list, (b) all of the items in the list, or (c) any combination of the items in the list, unless the word "or" is expressly limited to mean only the single item and no other items.
Furthermore, the term "comprising" is used throughout to mean including at least the stated feature(s), such that any greater number of the same feature and/or additional types of other features are not precluded. Reference herein to "one embodiment," "an embodiment," or similar expressions means that a particular feature, structure, operation, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present technology. Thus, appearances of such phrases or expressions herein are not necessarily all referring to the same embodiment. Furthermore, the various specific features, structures, operations, or characteristics may be combined in any suitable manner in one or more embodiments.
Aspects of the disclosure are directed to a bandpass filter including four BAW resonators in a full lattice arrangement with two matching inductors at the input and output terminals. Further aspects of the disclosure are directed to combining BAW filter chips with embedded 3D inductors. The 3D inductors are embedded in either a mold (850) or a carrier substrate (1610) by combining front- and back-side RDLs (960, 970, 980; 1667, 1675) with through-mold vias (TMVs) (930) or through-substrate vias (1630). The BAW chips are either in the same mold as the TMV inductors or on the carrier substrate comprising the TSV inductors.
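The full-lattice topology described above can be sanity-checked numerically. The sketch below is illustrative only: it models each BAW resonator with the standard Butterworth-Van Dyke (BVD) equivalent circuit and the textbook symmetric-lattice z-parameters; all component values, the 50-ohm terminations, and the function names are assumptions, not values from the disclosure.

```python
# Illustrative frequency response of a full-lattice BAW bandpass filter.
# Each resonator uses the Butterworth-Van Dyke (BVD) model; component
# values and 50-ohm terminations are assumptions, not disclosure values.
import math

def bvd_impedance(f, c0, cm, lm, rm=0.5):
    """One BAW resonator: motional Rm-Lm-Cm branch in parallel with plate C0."""
    w = 2 * math.pi * f
    z_motional = rm + 1j * w * lm + 1 / (1j * w * cm)
    z_plate = 1 / (1j * w * c0)
    return z_motional * z_plate / (z_motional + z_plate)

def lattice_s21(f, series_params, cross_params, r0=50.0):
    """|S21| of a symmetric lattice with series arms Za and cross arms Zb,
    using z11 = z22 = (Za + Zb)/2, z12 = z21 = (Zb - Za)/2, matched in r0."""
    za = bvd_impedance(f, *series_params)
    zb = bvd_impedance(f, *cross_params)
    z11 = (za + zb) / 2
    z21 = (zb - za) / 2
    s21 = 2 * z21 * r0 / ((z11 + r0) ** 2 - z21 ** 2)
    return abs(s21)

# Detuning the two arms opens a passband near 1.9-2.0 GHz with these values.
series_arm = (1e-12, 0.08e-12, 80e-9)   # (C0, Cm, Lm), illustrative
cross_arm = (1e-12, 0.08e-12, 85e-9)
for f in (0.5e9, 1.93e9, 3.5e9):
    print(f"{f / 1e9:.2f} GHz: |S21| = {lattice_s21(f, series_arm, cross_arm):.3f}")
```

Far from resonance the two arms have nearly equal (capacitive) impedances, so z21 vanishes and the lattice blocks the signal; near the resonators' series/parallel resonances the arms diverge and the lattice transmits, which is why detuning the series and cross arms sets the passband.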
CLAIMS

What is claimed is:

1. A method for forming one or more individual bandpass filters on an integrated circuit (IC), the method comprising: positioning a first redistribution layer (RDL) in a wafer layer on the integrated circuit (IC); placing one or more vertical conductive pillars above the wafer layer; forming a plurality of inductors by coating a first passivation layer onto the wafer layer; plating a second redistribution layer (RDL) over the first passivation layer; and coating a second passivation layer above the second redistribution layer (RDL).
2. The method of claim 1, wherein the wafer layer is a molded wafer layer.
3. The method of claim 1, wherein the one or more vertical conductive pillars are either copper (Cu) pillars or aluminum (Al) pillars.
4. The method of claim 1, wherein the wafer layer is a high-resistivity silicon (HRS) wafer, a gallium arsenide (GaAs) wafer or a glass wafer.
5. The method of claim 1, further comprising assembling a plurality of resonator chips onto the wafer layer.
6. The method of claim 5, wherein one of the plurality of resonator chips is a bulk acoustic wave (BAW) resonator.
7. The method of claim 5, further comprising covering the wafer layer with a molding material to form a molded wafer layer.
8. The method of claim 7, wherein the molding material is an epoxy.
9. The method of claim 7, further comprising using a transfer-molding process or a compression molding process for covering the wafer layer with the molding material.
10. The method of claim 7, further comprising back-grinding the molded wafer layer to expose the one or more vertical conductive pillars.
11. The method of claim 7, further comprising forming an interconnection layer above the second passivation layer.
12. The method of claim 11, wherein the interconnection layer includes one or more of a solder ball or a conductive pad.
13.
The method of claim 11, further comprising dicing the integrated circuit (IC) to obtain the one or more individual bandpass filters.
14. A method for forming one or more individual bandpass filters on an integrated circuit (IC), the method comprising: forming a through glass via (TGV) within a wafer layer on the integrated circuit (IC); coating a first passivation layer on top of the wafer layer; placing a first redistribution layer (RDL) above the first passivation layer, wherein the first RDL is placed over one or more vertical conductive pillars; flipping the integrated circuit (IC); coating the wafer layer with a second passivation layer; and placing a second redistribution layer (RDL) above the second passivation layer to form a plurality of inductors.
15. The method of claim 14, wherein the wafer layer is a high-resistivity silicon (HRS) wafer, a gallium arsenide (GaAs) wafer or a glass wafer.
16. The method of claim 14, further comprising filling the through glass via (TGV) through metallic plating to form the one or more vertical conductive pillars.
17. The method of claim 16, wherein the metallic plating is copper plating.
18. The method of claim 14, further comprising forming the one or more vertical conductive pillars through either a laser drilling process or an etching process.
19. The method of claim 18, further comprising forming the one or more vertical conductive pillars through either a copper plating process or a conductive paste filling process.
20. The method of claim 14, further comprising coating a third passivation layer above the first redistribution layer (RDL) and exposing a portion of the third passivation layer for assembling one or more resonator chips.
21. The method of claim 20, further comprising using a plating process to place one or more interconnection pads above the third passivation layer.
22. The method of claim 21, wherein the one or more resonator chips are assembled on top of the one or more interconnection pads.
23.
The method of claim 22, wherein the one or more resonator chips is a plurality of bulk acoustic wave (BAW) resonators.
24. The method of claim 22, further comprising covering the third passivation layer and the one or more resonator chips with a molding material.
25. The method of claim 24, wherein the molding material is an epoxy.
26. The method of claim 24, further comprising using a transfer-molding process or a compression molding process for covering the third passivation layer and the one or more resonator chips with the molding material.
27. The method of claim 24, further comprising: coating a fourth passivation layer above the second RDL; and creating an interconnection layer above the fourth passivation layer.
28. The method of claim 27, further comprising adding one or more conductive pads or solder balls for creating the interconnection layer.
29. The method of claim 27, further comprising dicing the integrated circuit (IC) to obtain the one or more individual bandpass filters.
30.
A bandpass filter in an integrated circuit (IC) comprising:
a plurality of resonators including a first resonator, a second resonator, a third resonator and a fourth resonator, wherein the second resonator and the third resonator are in parallel,
wherein the first resonator includes a first terminal and a second terminal,
wherein the second resonator includes a second resonator top terminal and a second resonator bottom terminal,
wherein the third resonator includes a third resonator top terminal and a third resonator bottom terminal,
wherein the fourth resonator includes a third terminal and a fourth terminal,
wherein the first terminal is coupled to the second resonator top terminal,
wherein the second terminal is coupled to the third resonator top terminal,
wherein the third terminal is coupled to the third resonator bottom terminal, and
wherein the fourth terminal is coupled to the second resonator bottom terminal;
a first inductor coupled to the first terminal and the third terminal; and
a second inductor coupled to the second terminal and the fourth terminal.
31.
A computer-readable medium storing computer executable code, operable on a device comprising at least one processor and at least one memory coupled to the at least one processor, wherein the at least one processor is configured to implement one or more individual bandpass filters on an integrated circuit (IC), the computer executable code comprising: instructions for causing a computer to position a first redistribution layer (RDL) in a wafer layer on the integrated circuit (IC); instructions for causing the computer to place one or more vertical conductive pillars above the wafer layer; instructions for causing the computer to assemble a plurality of resonator chips onto the wafer layer; instructions for causing the computer to cover the wafer layer with a molding material to form a molded wafer layer; instructions for causing the computer to form a plurality of inductors by coating a first passivation layer onto the molded wafer layer, by plating a second redistribution layer (RDL) over the first passivation layer and by coating a second passivation layer above the second RDL; instructions for causing the computer to form an interconnection layer above the second passivation layer; and instructions for causing the computer to dice the integrated circuit (IC) to obtain one or more individual bandpass filters.
32.
A computer-readable medium storing computer executable code, operable on a device comprising at least one processor and at least one memory coupled to the at least one processor, wherein the at least one processor is configured to implement one or more individual bandpass filters on an integrated circuit (IC), the computer executable code comprising: instructions for causing a computer to form a through glass via (TGV) within a wafer layer on the integrated circuit (IC); instructions for causing the computer to coat a first passivation layer on top of the wafer layer and to place a first redistribution layer (RDL) above the first passivation layer, wherein the first RDL is placed over one or more vertical conductive pillars; instructions for causing the computer to coat a second passivation layer above the first RDL and to expose a portion of the second passivation layer for assembling one or more resonator chips; instructions for causing the computer to use a plating process to place one or more interconnection pads above the second passivation layer; instructions for causing the computer to cover the second passivation layer and the one or more resonator chips with a molding material; instructions for causing the computer to flip the integrated circuit (IC), to coat the wafer layer with a third passivation layer and to place a second RDL above the third passivation layer to form a plurality of inductors; instructions for causing the computer to coat a fourth passivation layer above the second RDL and to create an interconnection layer above the fourth passivation layer; and instructions for causing the computer to dice the integrated circuit (IC) to obtain one or more individual bandpass filters.
CO-PACKAGING OF BAW FILTERS AND 3D INDUCTORS REALIZED WITH THROUGH-SUBSTRATE OR THROUGH-MOLD VIAS AND RDLS

CLAIM OF PRIORITY

[0001] The present Application for Patent claims priority to Application No. 16/279,902 entitled "WIDEBAND FILTER WITH RESONATORS AND INDUCTORS" filed February 19, 2019, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.

TECHNICAL FIELD

[0002] This disclosure relates generally to the field of wideband filtering, and, in particular, to a wideband filter with resonator(s) and inductor(s).

BACKGROUND

[0003] Bandpass filters are circuit elements used for selective signal transmission. One type of bandpass filter used at microwave frequencies is the bulk acoustic wave (BAW) filter. Some implementations of BAW filters have limited passband widths, typically less than 100 MHz. BAW filter implementations with much wider passband widths (e.g., up to 400 MHz) are needed for wideband applications, such as Fifth Generation (5G) wireless communication systems.

SUMMARY

[0004] The following presents a simplified summary of one or more aspects of the present disclosure, in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated features of the disclosure, and is intended neither to identify key or critical elements of all aspects of the disclosure nor to delineate the scope of any or all aspects of the disclosure. Its sole purpose is to present some concepts of one or more aspects of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.

[0005] In one aspect, the disclosure provides a wideband filter with resonator(s) and inductor(s).
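For a rough sense of scale for 3D inductors built from through-mold or through-substrate vias and front/back RDL traces, the first-order long-solenoid formula L = μ0·N²·A/l can be applied. All dimensions below are illustrative assumptions, not values from the disclosure, and the formula ignores fringing and the rectangular geometry of via/RDL turns.

```python
# First-order estimate of a 3D solenoid inductor formed by vertical vias
# (TMVs/TSVs) and front/back RDL traces. Dimensions are illustrative
# assumptions; the long-solenoid formula ignores fringing effects.
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def solenoid_inductance(n_turns, loop_area_m2, length_m):
    """L = mu0 * N^2 * A / l for a long solenoid."""
    return MU0 * n_turns ** 2 * loop_area_m2 / length_m

# e.g., 5 turns, loop cross-section 300 um (mold depth) x 200 um (via span),
# wound along a 1 mm footprint
L = solenoid_inductance(5, 300e-6 * 200e-6, 1e-3)
print(f"L ~ {L * 1e9:.2f} nH")  # → L ~ 1.88 nH
```

Even under these crude assumptions the result lands in the nanohenry range, which is the order of magnitude typically needed for the matching inductors of a microwave bandpass filter.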
Accordingly, a method for forming one or more individual bandpass filters on an integrated circuit (IC), the method including positioning a first redistribution layer (RDL) in a wafer layer on the integrated circuit (IC); placing one or more vertical conductive pillars above the wafer layer; forming a plurality of inductors by coating a first passivation layer onto the wafer layer; plating a second redistribution layer (RDL) over the first passivation layer; and coating a second passivation layer above the second redistribution layer (RDL).[0006] In one example, the wafer layer is a molded wafer layer. In one example, the one or more vertical conductive pillars are either copper (Cu) pillars or aluminum (Al) pillars. In one example, the wafer layer is a high-resistivity silicon (HRS) wafer, a gallium arsenide (GaAs) wafer or a glass wafer.[0007] In one example, the method further includes assembling a plurality of resonator chips onto the wafer layer. In one example, one of the plurality of resonator chips is a bulk acoustic wave (BAW) resonator. In one example, the method further includes covering the wafer layer with a molding material to form a molded wafer layer. In one example, the molding material is an epoxy.[0008] In one example, the method further includes using a transfer-molding process or a compression molding process for covering the wafer layer with the molding material. In one example, the method further includes back-grinding the molded wafer layer to expose the one or more vertical conductive pillars. In one example, the method further includes forming an interconnection layer above the second passivation layer. In one example, the interconnection layer includes solder balls or conductive pads. 
In one example, the method further includes dicing the integrated circuit (IC) to obtain the one or more individual bandpass filters.[0009] Another aspect of the disclosure provides a method for forming one or more individual bandpass filters on an integrated circuit (IC), the method including forming a through glass via (TGV) within a wafer layer on the integrated circuit (IC); coating a first passivation layer on top of the wafer layer; placing a first redistribution layer (RDL) above the first passivation layer, wherein the first RDL is placed over one or more vertical conductive pillars; flipping the integrated circuit (IC); coating the wafer layer with a second passivation layer; and placing a second redistribution layer (RDL) above the second passivation layer to form a plurality of inductors.[0010] In one example, the wafer layer is a high-resistivity silicon (HRS) wafer, a gallium arsenide (GaAs) wafer or a glass wafer. In one example, the method further includes filling the through glass via (TGV) through metallic plating to form the one or more vertical conductive pillars. In one example, the metallic plating is copper plating.[0011] In one example, the method further includes forming the one or more vertical conductive pillars through either a laser drilling process or an etching process. In one example, the method further includes forming the one or more vertical conductive pillars through either a copper plating process or a conductive paste filling process. In one example, the method further includes coating a third passivation layer above the first redistribution layer (RDL) and exposing a portion of the third passivation layer for assembling one or more resonator chips.[0012] In one example, the method further includes using a plating process to place one or more interconnection pads above the third passivation layer. In one example, the one or more resonator chips are assembled on top of the one or more interconnection pads.
In one example, the one or more resonator chips is a plurality of bulk acoustic wave (BAW) resonators. In one example, the method further includes covering the third passivation layer and the one or more resonator chips with a molding material. In one example, the molding material is an epoxy.[0013] In one example, the method further includes using a transfer-molding process or a compression molding process for covering the third passivation layer and the one or more resonator chips with the molding material. In one example, the method further includes coating a fourth passivation layer above the second RDL; and creating an interconnection layer above the fourth passivation layer. In one example, the method further includes adding one or more conductive pads or solder balls for creating the interconnection layer. In one example, the method further includes dicing the integrated circuit (IC) to obtain the one or more individual bandpass filters.[0014] Another aspect of the disclosure provides a bandpass filter in an integrated circuit (IC) including a plurality of resonators including a first resonator, a second resonator, a third resonator and a fourth resonator, and wherein the second resonator and the third resonator are in parallel, and wherein the first resonator includes a first terminal and a second terminal, wherein the second resonator includes a second resonator top terminal and a second resonator bottom terminal; wherein the third resonator includes a third resonator top terminal and a third resonator bottom terminal, wherein the fourth resonator includes a third terminal and a fourth terminal, and wherein the first terminal is coupled to the second resonator top terminal, wherein the second terminal is coupled to the third resonator top terminal, wherein the third terminal is coupled to the third resonator bottom terminal, wherein the fourth terminal is coupled to the second resonator bottom terminal; and a first inductor coupled to the first terminal
and the third terminal; and a second inductor coupled to the second terminal and the fourth terminal.[0015] Another aspect of the disclosure provides a computer-readable medium storing computer executable code, operable on a device including at least one processor and at least one memory coupled to the at least one processor, wherein the at least one processor is configured to implement one or more individual bandpass filters on an integrated circuit (IC), the computer executable code including instructions for causing a computer to position a first redistribution layer (RDL) in a wafer layer on the integrated circuit (IC); instructions for causing the computer to place one or more vertical conductive pillars above the wafer layer; instructions for causing the computer to assemble a plurality of resonator chips onto the wafer layer; instructions for causing the computer to cover the wafer layer with a molding material to form a molded wafer layer; instructions for causing the computer to form a plurality of inductors by coating a first passivation layer onto the molded wafer layer, by plating a second redistribution layer (RDL) over the first passivation layer and by coating a second passivation layer above the second RDL; instructions for causing the computer to form an interconnection layer above the second passivation layer; and instructions for causing the computer to dice the integrated circuit (IC) to obtain one or more individual bandpass filters.[0016] Another aspect of the disclosure provides a computer-readable medium storing computer executable code, operable on a device including at least one processor and at least one memory coupled to the at least one processor, wherein the at least one processor is configured to implement one or more individual bandpass filters on an integrated circuit (IC), the computer executable code including instructions for causing a computer to form a through glass via (TGV) within a wafer layer on the integrated circuit (IC); 
instructions for causing the computer to coat a first passivation layer on top of the wafer layer and to place a first redistribution layer (RDL) above the first passivation layer, wherein the first RDL is placed over one or more vertical conductive pillars; instructions for causing the computer to coat a second passivation layer above the first RDL and to expose a portion of the second passivation layer for assembling one or more resonator chips; instructions for causing the computer to use a plating process to place one or more interconnection pads above the second passivation layer; instructions for causing the computer to cover the second passivation layer and the one or more resonator chips with a molding material; instructions for causing the computer to flip the integrated circuit (IC), to coat the wafer layer with a third passivation layer and to place a second RDL above the third passivation layer to form a plurality of inductors; instructions for causing the computer to coat a fourth passivation layer above the second RDL and to create an interconnection layer above the fourth passivation layer; and instructions for causing the computer to dice the integrated circuit (IC) to obtain one or more individual bandpass filters.[0017] These and other aspects of the disclosure will become more fully understood upon a review of the detailed description, which follows. Other aspects, features, and implementations of the present disclosure will become apparent to those of ordinary skill in the art, upon reviewing the following description of specific, exemplary implementations of the present invention in conjunction with the accompanying figures. While features of the present invention may be discussed relative to certain implementations and figures below, all implementations of the present invention can include one or more of the advantageous features discussed herein. 
In other words, while one or more implementations may be discussed as having certain advantageous features, one or more of such features may also be used in accordance with the various implementations of the invention discussed herein. In similar fashion, while exemplary implementations may be discussed below as device, system, or method implementations, it should be understood that such exemplary implementations can be implemented in various devices, systems, and methods.BRIEF DESCRIPTION OF THE DRAWINGS[0018] FIG. 1 illustrates an example graph of a filter transfer function for a bulk acoustic wave (BAW) filter.[0019] FIG. 2 illustrates an example of a bandpass filter with a combination of bulk acoustic wave (BAW) resonators and inductors.[0020] FIG. 3 illustrates an example of an electrical schematic diagram of a bandpass filter with a combination of resonators and inductors.[0021] FIG. 4 illustrates an example filter transfer function for a bandpass filter with a combination of resonators and inductors.[0022] FIG. 5 illustrates an example implementation of a bandpass filter with a combination of resonators on a chip and inductors.[0023] FIG. 6 illustrates an example first step for a first integrated circuit (IC) process for a bandpass filter with a combination of resonators on a chip and inductors.[0024] FIG. 7 illustrates an example second step for the first integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors.[0025] FIG. 8 illustrates an example third step for the first integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors.[0026] FIG. 9 illustrates an example fourth step for the first integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors.[0027] FIG. 
10 illustrates an example top view of the first integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors.[0028] FIG. 11 illustrates an example fifth step for the first integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors.[0029] FIG. 12 illustrates an example first step for a second integrated circuit (IC) process for a bandpass filter with a combination of resonators on a chip and inductors.[0030] FIG. 13 illustrates an example second step for the second integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors.[0031] FIG. 14 illustrates an example third step for the second integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors.[0032] FIG. 15 illustrates an example fourth step for the second integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors.[0033] FIG. 16 illustrates an example fifth step for the second integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors.[0034] FIG. 17 illustrates an example sixth step for the second integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors.[0035] FIG. 18 illustrates an example of a first integrated circuit (IC) process flow for manufacturing a bandpass filter with a combination of resonators on a chip and inductors within an integrated circuit (IC).[0036] FIG. 
19 illustrates an example of a second integrated circuit (IC) process flow for manufacturing a bandpass filter with a combination of resonators on a chip and inductors within an integrated circuit (IC).DETAILED DESCRIPTION[0037] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts.[0038] While for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more aspects, occur in different orders and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with one or more aspects.[0039] Electrical circuits which use passive and active circuit elements are widely used to implement a variety of signal processing functions. In one example, signal processing functions may be described in a time domain (i.e., as a function of time) or in a frequency domain (i.e., as a function of frequency). 
In the frequency domain, for example, signals may be described by a frequency spectrum, e.g., an amplitude response and a phase response over frequency. A filter is a circuit element which relies on frequency domain properties such as a filter transfer function to transform an input frequency spectrum of an input signal into an output frequency spectrum of an output signal. There are many different examples of filters such as low-pass filters, high-pass filters, bandpass filters, bandstop filters, etc.[0040] In one example, electrical circuits include radio frequency front-end (RFFE) modules which may have power amplifiers, low noise amplifiers, switches, filters, and/or transformers, etc. In one example, bandpass filters are circuit elements in electrical circuits which may be used to selectively transmit or reject a signal depending on the frequency spectrum of the signal. For example, the signal may have a frequency spectrum which has significant energy distribution over a range of frequencies from a low frequency fLOW to a high frequency fHIGH. A first key characteristic of a bandpass filter is its passband, i.e., a first range of frequencies which is transmitted through the bandpass filter. For example, the passband may be specified by frequency values with a half-power response, e.g., -3 dB amplitude response points.[0041] A second key characteristic of a bandpass filter is its stopband. The stopband is a second range of frequencies which is rejected by the bandpass filter. A third key characteristic of a bandpass filter is its rolloff. The rolloff is the attenuation slope (e.g., dB/MHz) in transitioning from its passband to its stopband. A fourth key characteristic of a bandpass filter is its insertion loss. The insertion loss is the amount of attenuation over its passband. In one example, rolloff (attenuation slope) and insertion loss may be trade parameters in the bandpass filter design. 
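The passband and insertion-loss definitions above lend themselves to a short numerical illustration. The sketch below is illustrative only and not part of the disclosed implementations; the sampled amplitude-response values and the `passband_metrics` helper are hypothetical:

```python
# Illustrative only: extracting the -3 dB passband edges, passband width,
# and insertion loss from a sampled amplitude response (hypothetical data).

def passband_metrics(freqs_mhz, amp_db):
    """Return (f_low, f_high, passband width, insertion loss) at the -3 dB points."""
    peak = max(amp_db)
    threshold = peak - 3.0  # half-power (-3 dB) amplitude response points
    in_band = [f for f, a in zip(freqs_mhz, amp_db) if a >= threshold]
    f_low, f_high = min(in_band), max(in_band)
    insertion_loss_db = -peak  # attenuation over the passband relative to 0 dB
    return f_low, f_high, f_high - f_low, insertion_loss_db

# Hypothetical sampled response of a narrowband filter near 1745 MHz.
freqs = [1660, 1690, 1710, 1730, 1745, 1760, 1780, 1800, 1830]
amps = [-40.0, -10.0, -2.5, -1.0, -0.7, -1.0, -2.5, -10.0, -40.0]

f_low, f_high, width_mhz, il_db = passband_metrics(freqs, amps)
relative_bandwidth = width_mhz / ((f_low + f_high) / 2)  # width over center frequency
print(f_low, f_high, width_mhz, il_db, round(relative_bandwidth * 100, 1))
```

With this hypothetical data the -3 dB passband spans 1710-1780 MHz (70 MHz wide), a relative bandwidth of about 4%, which falls in the narrowband (under 5%) regime discussed below.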
For example, a cascade (i.e., series connection) of individual bandpass filter devices may allow a trade between higher rolloff versus lower insertion loss.[0042] In one example, bandpass filter implementations in the microwave frequency region (e.g., around 1-10 GHz) may include surface acoustic wave (SAW) filters and bulk acoustic wave (BAW) filters. For example, a bandpass filter may be implemented with a plurality of resonators. Resonators are devices which exhibit frequency resonance. In one example, a SAW filter may be implemented using a plurality of SAW resonators. In one example, a BAW filter may be implemented with a plurality of BAW resonators. These SAW/BAW filters may provide sharp rolloff, but relatively narrowband bandpass filtering, for example, a passband of around 100 MHz over a center frequency of around 3 to 6 GHz. One skilled in the art would understand that the passband and center frequency stated herein are mere examples, and that the present disclosure is not limited to the example disclosed herein.[0043] Alternatively, narrowband may be defined as a passband which is less than 5% of the center frequency. However, in some cases, e.g., 5G wireless applications, bandpass filters with relatively wideband bandpass filtering (e.g., a passband of up to 400 MHz) are desired. Alternatively, wideband may be defined as a passband which is greater than 5% of the center frequency. In some examples, SAW/BAW filters are not capable of providing such wideband performance. The present disclosure provides bandpass filter implementations for wideband bandpass filtering in the microwave frequency region with wideband performances, for example, greater than 5% of the center frequency.[0044] FIG. 1 illustrates an example graph 100 of a filter transfer function for a bulk acoustic wave (BAW) filter. In the example graph 100, the vertical axis shows an amplitude response in decibels (dB) and the horizontal axis shows a frequency range in MHz. In FIG. 
1, the amplitude response for the BAW filter is shown in the range of -70 dB to 0 dB over a frequency range of 1560 MHz to 1935 MHz. In the example graph 100, a passband width (e.g., between -3 dB amplitude response points) is relatively narrowband (e.g., less than 100 MHz wide).[0045] In one example, the passband may be specified as a relative bandwidth. The relative bandwidth may be defined as a ratio of passband width to center frequency. In the example graph 100, the relative bandwidth is less than 5% in this case (i.e., 75 MHz passband width over 1745 MHz center frequency is about 4.3% relative bandwidth). FIG. 1 shows that the filter transfer function includes a sharp rolloff from the passband to the stopband without achieving a wide passband (e.g., greater than 100 MHz bandwidth). That is, the filter transfer function of the BAW filter includes a sharp rolloff from the passband to the stopband with a passband bandwidth of less than 100 MHz.[0046] FIG. 2 illustrates an example of a bandpass filter 200 with a combination of bulk acoustic wave (BAW) resonators and inductors. In one example, the inductors are three-dimensional (3D) inductors. In the bandpass filter 200, a low-loss substrate (e.g., glass wafer) may be used to implement high-Q inductors through a metal plating process. In one example, a high-Q (i.e., high quality) inductor is an inductor with highly resonant behavior. For example, a BAW resonator process cannot be used to implement an inductor, such as a high-Q inductor. In one example, a packaging approach to integrate BAW resonators and inductors forms a wideband bandpass filter with sharp rolloff.[0047] FIG. 3 illustrates an example of an electrical schematic diagram of a bandpass filter 300 with a combination of resonators and inductors. In one example, the bandpass filter 300 includes four resonators: a first resonator 310, a second resonator 320, a third resonator 330 and a fourth resonator 340. 
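The relative-bandwidth figure quoted in paragraph [0045] for the example graph 100 can be checked with one line of arithmetic; a minimal illustrative sketch, not part of the disclosure:

```python
# Relative bandwidth = passband width / center frequency (paragraph [0045]).
passband_width_mhz = 75.0
center_freq_mhz = 1745.0
relative_bandwidth_pct = 100.0 * passband_width_mhz / center_freq_mhz
print(round(relative_bandwidth_pct, 1))  # about 4.3, i.e. narrowband (under 5%)
```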
One skilled in the art would understand that although four resonators are shown, other quantities of resonators may be used within the scope and spirit of the present disclosure.[0048] In one example, the first resonator 310 is a first BAW resonator, the second resonator 320 is a second BAW resonator, the third resonator is a third BAW resonator, and the fourth resonator is a fourth BAW resonator. Although BAW resonators are disclosed herein, in some examples, other types of resonators, such as, but not limited to surface acoustic wave (SAW) resonators, may be used.[0049] In one example, the third resonator 330 includes a first terminal 331 and a second terminal 332. And, the fourth resonator 340 includes a first terminal 341 and a second terminal 342. In one example, the first resonator 310 is connected to the first terminal 331 of the third resonator 330 and the second resonator 320 is connected to the second terminal 332 of the third resonator 330 as shown in FIG. 3. In one example, the first resonator 310 is also connected to the second terminal 342 of the fourth resonator 340 and the second resonator 320 is also connected to the first terminal 341 of the fourth resonator 340, as shown in FIG. 3.[0050] In one example, the bandpass filter 300 includes two inductors: a first inductor 350 and a second inductor 360. In one example, the first inductor 350 is a first 3-D inductor and the second inductor 360 is a second 3-D inductor. In one example, the first inductor 350 is connected to the first terminal 331 of the third resonator 330 and to the first terminal 341 of the fourth resonator 340. And, the second inductor 360 is connected to the second terminal 332 of the third resonator 330 and to the second terminal 342 of the fourth resonator 340. One skilled in the art would understand that although two inductors are shown in FIG. 
3, other quantities of inductors may be used within the scope and spirit of the present disclosure.[0051] In one example, the bandpass filter 300 includes two resistors: a first resistor 370 and a second resistor 380. In one example, the first resistor 370 is connected in parallel to the first inductor 350 and the second resistor 380 is connected in parallel to the second inductor 360. In one example, the impedance of the first resistor 370 and of the second resistor 380 is 50 ohms. One skilled in the art would understand that other values of the first resistor 370 and the second resistor 380 may be used within the scope and spirit of the present disclosure. One skilled in the art would understand that although two resistors are shown in FIG. 3, other quantities of resistors may be used within the scope and spirit of the present disclosure.[0052] FIG. 4 illustrates an example filter transfer function 400 for a bandpass filter with a combination of resonators and inductors. Regarding FIG. 4, amplitude response is shown on the vertical axis and frequency range is shown on the horizontal axis. In the example shown in FIG. 4, the amplitude response in decibels (dB) for the bandpass filter is shown over a frequency range between 1 GHz and 8 GHz. In one example, the amplitude response from a filter input to a filter output is labeled as having two components: S(1,2) which is a transfer function from filter input to filter output and S(1,1) which is a reflection function at the filter input. In this example, a passband width (e.g., between -3 dB amplitude response points) is relatively wide, e.g., greater than 400 MHz wide. For example, at a frequency of 3.460 GHz, an amplitude of -0.725 dB is shown and at a frequency of 3.860 GHz, an amplitude response of -0.666 dB is shown. In one example, the relative bandwidth is greater than 10% in this case. In one example, a sharp rolloff of the bandpass filter is attained.[0053] FIG. 
5 illustrates an example implementation of a bandpass filter 500 with a combination of resonators on a chip and inductors. For example, a chip is a monolithic integrated circuit. In one example, the bandpass filter 500 includes four resonators: a first resonator 510, a second resonator 520, a third resonator 530 and a fourth resonator 540. In one example, the first resonator 510, the second resonator 520, the third resonator 530 and the fourth resonator 540 are bulk acoustic wave (BAW) resonators embedded in the chip. In another example, one or more of the four resonators is a surface acoustic wave (SAW) resonator. In one example, the bandpass filter 500 includes two inductors: a first inductor 550 and a second inductor 560. In one example, the first inductor 550 and the second inductor 560 are 3-D inductors. In one example, the bandpass filter 500 includes module pads 570 (e.g., electrical connectors), a passivation layer 580, a molding layer 590 and a glass layer 595.[0054] FIG. 6 illustrates an example first step 600 for a first integrated circuit (IC) process for a bandpass filter with a combination of resonators on a chip and inductors. For example, the first IC process is a through mold via (TMV) process. In one example, a wafer layer 610 is used as a substrate for subsequent wafer-level processing. In one example, the wafer layer is a glass wafer. In another example, the wafer layer is a silicon (Si) wafer (e.g., a high resistivity silicon wafer) or a gallium arsenide (GaAs) wafer. In one example, a bottom redistribution layer (RDL) 620 is plated with a wafer plating process and is positioned in the wafer layer 610. The bottom RDL 620 may serve as a bottom trace of inductors (e.g., 3-D inductors). In one example, vertical conductive pillars 630 are placed above the wafer layer. For example, vertical conductive pillars may be copper (Cu) pillars, aluminum (Al) pillars, or other metallic pillars. 
For example, vertical conductive pillars may be made through lithographic and wafer-plating processes. In one example, the processes include photoresist (PR), exposure, developing, copper plating, photoresist stripping, etc. For example, the height of the vertical conductive pillars may be 150-200 micrometers (µm), although other dimensions are also within the scope and spirit of the present disclosure.[0055] FIG. 7 illustrates an example second step 700 for the first integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors. In one example, a plurality of resonator chips 740 are assembled onto the wafer layer 710. In one example, the wafer layer is a glass wafer. For example, the plurality of resonator chips 740 may be a plurality of bulk acoustic wave (BAW) resonators. In another example, the plurality of resonator chips 740 may be a plurality of surface acoustic wave (SAW) resonators. In one example, there is no limitation on the spatial separation between a resonator chip 740 and a vertical conductive pillar 730. In one example, a bottom redistribution layer (RDL) 720 is positioned in the wafer layer 710.[0056] FIG. 8 illustrates an example third step 800 for the first integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors. In one example, a wafer layer 810 is covered by a molding material 850 (e.g., epoxy) to create a molded-covered wafer using a molding process such as transfer molding or compression molding. In one example, the wafer layer is a glass wafer. In one example, the molded-covered wafer may be back-grinded to expose (i.e., remove molding material from) vertical conductive pillars 830 for subsequent interconnection processing.[0057] FIG. 9 illustrates an example fourth step 900 for the first integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors. 
In one example, a first passivation layer 960 is coated on top of a molded-covered wafer. In one example, a lithographic process may be used to plate a top RDL 970 and a via connection 980 above the first passivation layer 960 simultaneously. In one example, a second passivation layer 990 may be coated on top of the top RDL 970. In one example, inductors are formed from a combination of a bottom RDL 920, vertical conductive pillars 930 and top RDL 970. Also indicated in FIG. 9, as an example, is a region wherein a 3-D inductor is formed. In one example, the first passivation layer 960 is made of polyimide. In one example, the second passivation layer 990 is made of polyimide.[0058] FIG. 10 illustrates an example top view 1000 of the first integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors. Shown in FIG. 10 is a 3-D inductor formed by a bottom RDL 1075, vertical conductive pillars 1030 and a top RDL 1070. Also indicated in FIG. 10, as an example, is a region wherein a 3-D inductor is formed.[0059] FIG. 11 illustrates an example fifth step 1100 for the first integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors. In one example, an interconnection layer 1195 is formed above a second passivation layer 1190. For example, the interconnection layer 1195 may include solder balls or pads or other interconnection elements using a plating process or a ball drop process.[0060] In one example, individual bandpass filter devices may be obtained from the IC through a dicing process. Also, for example, individual bandpass filter devices may be connected in cascade to obtain increased rolloff with higher insertion loss. That is, a cascade (i.e., series connection) of individual bandpass filter devices may allow a trade between higher rolloff vs. lower insertion loss.[0061] FIG. 
12 illustrates an example first step 1200 for a second integrated circuit (IC) process for a bandpass filter with a combination of resonators on a chip and inductors. For example, the second IC process is a through glass via (TGV) process. In one example, a through glass via (TGV) 1220 is formed within a wafer layer 1210. For example, the wafer layer 1210 may be made of a glass layer or other materials (e.g., high-resistivity silicon (HRS), gallium arsenide (GaAs), etc.). In one example, the through glass via (TGV) 1220 is filled through metallic plating, e.g., copper plating, to form vertical conductive pillars 1230. For example, the vertical conductive pillars 1230 may be made through a laser drill or etching process along with a copper plating or conductive paste filling process. In one example, the vertical conductive pillars 1230 are vertical copper pillars.[0062] FIG. 13 illustrates an example second step 1300 for the second integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors. In one example, a wafer layer 1310 is coated with a first passivation layer 1360 (e.g., a first dielectric layer). In one example, the first passivation layer 1360 is made of polyimide. In one example, a first redistribution layer (RDL) 1370 is placed above the first passivation layer 1360 over vertical conductive pillars 1330. For example, the first RDL 1370 may be placed using lithographic and plating processes.[0063] FIG. 14 illustrates an example third step 1400 for the second integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors. In one example, a second passivation layer 1465 (e.g., dielectric material) is coated above the first RDL 1470 and the second passivation layer 1465 is exposed for resonator assembly. In one example, the second passivation layer 1465 is made of polyimide. 
In one example, interconnection pads 1440 are placed above the second passivation layer 1465 using a plating process. In one example, a plurality of resonator chips 1480 are assembled on top of the interconnection pads 1440. For example, the plurality of resonator chips 1480 may be a plurality of bulk acoustic wave (BAW) resonators.[0064] FIG. 15 illustrates an example fourth step 1500 for the second integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors. In one example, a second passivation layer 1565 and a plurality of resonator chips 1580 are covered with a molding material 1585 (e.g., epoxy). In one example, the plurality of resonator chips 1580 may be a plurality of bulk acoustic wave (BAW) resonators. In one example, the covering is performed using a molding process, e.g., transfer molding or compression molding.[0065] FIG. 16 illustrates an example fifth step 1600 for the second integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors. In one example, the IC is flipped and a wafer layer 1610 is coated with a third passivation layer 1667. In one example, a second RDL 1675 is placed above the third passivation layer 1667 using lithographic and plating processes. In one example, inductors are formed from a combination of the first RDL (shown in FIG. 13 as 1370), vertical conductive pillars 1630 and a second RDL 1675 as part of a wafer packaging process.[0066] FIG. 17 illustrates an example sixth step 1700 for the second integrated circuit (IC) process for the bandpass filter with the combination of resonators on the chip and inductors. In one example, a fourth passivation layer 1769 is coated above a second RDL and an interconnection layer 1799 is created above the fourth passivation layer 1769. 
In one example, the interconnection layer 1799 includes package pads or drop balls using lithographic and plating processes.[0067] In one example, individual bandpass filter devices may be obtained from the IC through a dicing process. Also, for example, individual bandpass filter devices may be connected in cascade to obtain increased rolloff with higher insertion loss. That is, a cascade (i.e., series connection) of individual bandpass filter devices may allow a trade between higher rolloff vs. lower insertion loss.[0068] FIG. 18 illustrates an example of a first integrated circuit (IC) process flow 1800 for manufacturing a bandpass filter with a combination of resonators on a chip and inductors within an integrated circuit (IC). In one example, the first IC process flow may include a through mold via (TMV). In one example, Through-Mold-Via (TMV) is a vertical interconnection where patterns on both sides of a molding material can be connected. The TMV may be made through a regular plating process using photo-resist (PR) to define vias. After the vias (e.g., copper material) are formed, molding material may be coated on top and cured. A grinding process may be used to remove some molding material and to expose the vias for a subsequent interconnection process.[0069] In block 1810, position a first redistribution layer (RDL) in a wafer layer on an integrated circuit (IC). In one example, the wafer layer is a glass wafer. In another example, the wafer layer is a high-resistivity silicon (HRS) wafer or a gallium arsenide (GaAs) wafer. For example, the first RDL may be plated using a wafer-plating process and may serve as a bottom trace of inductors.[0070] In block 1820, place one or more vertical conductive pillars above the wafer layer. In one example, the vertical conductive pillars may be copper (Cu) pillars, aluminum (Al) pillars, or other metallic pillars. 
For example, a height of the vertical conductive pillars may be 150-200 micrometers (µm).[0071] In block 1830, assemble a plurality of resonator chips onto the wafer layer. In one example, the resonator chips are bulk acoustic wave (BAW) resonators. In one example, a spatial separation between the resonator chips and the vertical conductive pillars is not limited except for assembly design rules.[0072] In block 1840, cover the wafer layer with a molding material to form a molded wafer layer. In one example, the molding material is epoxy. In one example, covering the wafer layer is achieved by using a molding process such as transfer-molding or compression molding. In one example, the molded wafer layer may be back-grinded to expose the vertical conductive pillars for subsequent interconnection processing.[0073] In block 1850, form a plurality of inductors by coating a first passivation layer onto the molded wafer layer, plating a second redistribution layer (RDL) over the first passivation layer and coating a second passivation layer above the second RDL. In one example, the first passivation layer and the second passivation layer are made of polyimide. In one example, the plating of the second redistribution layer (RDL) also plates a via connection using a lithographic process. In one example, one or more inductors are formed from the first RDL, the vertical conductive pillars and/or the second redistribution layer (RDL).[0074] In block 1860, form an interconnection layer above the second passivation layer. In one example, the interconnection layer may include one or more of: solder balls, conductive pads and/or other interconnection elements. In one example, the forming of the interconnection layer may use a plating process or a ball drop process.[0075] In block 1870, dice the integrated circuit (IC) to obtain one or more individual bandpass filters.[0076] FIG. 
19 illustrates an example of a second integrated circuit (IC) process flow 1900 for manufacturing a bandpass filter with a combination of resonators on a chip and inductors within an integrated circuit (IC). In one example, the second IC process flow may include a through glass via (TGV). In one example, a Through-Silicon-Via (TSV) is a vertical interconnection where patterns on both sides of a silicon wafer can be connected. A TSV may be made either through an etching process or a laser-drilling process. In the etching process, some silicon material is etched away and then filled with plated Cu, Al, or other metals. In the laser-drilling process, holes may be created by laser drilling and then filled with plated Cu, Al, or other metals.[0077] In block 1910, form a through glass via (TGV) within a wafer layer on an integrated circuit (IC). In one example, the wafer layer may be made of a glass layer or other materials, such as but not limited to high-resistivity silicon (HRS) or gallium arsenide (GaAs), etc. In one example, the through glass via (TGV) may be filled through metallic plating (e.g., copper plating) to form vertical conductive pillars. In one example, vertical conductive pillars may be formed through either a laser drilling process or an etching process. In addition, the vertical conductive pillars may be further formed through either a copper plating process or a conductive paste filling process. In one example, the vertical conductive pillars are vertical copper pillars.[0078] In block 1920, coat a first passivation layer on top of the wafer layer and place a first redistribution layer (RDL) above the first passivation layer, wherein the first RDL is placed over one or more vertical conductive pillars. In one example, the first passivation layer is a first dielectric layer. In one example, the first passivation layer is made of polyimide. 
In one example, the first RDL may be placed using lithographic and plating processes.[0079] In block 1930, coat a second passivation layer above the first RDL and expose a portion of the second passivation layer for assembling one or more resonator chips. In one example, the second passivation layer is a second dielectric layer. In one example, the second passivation layer is made of polyimide.[0080] In block 1940, use a plating process to place one or more interconnection pads above the second passivation layer. In one example, the one or more resonator chips are assembled on top of the one or more interconnection pads. In one example, the one or more resonator chips may be a plurality of bulk acoustic wave (BAW) resonators.[0081] In block 1950, cover the second passivation layer and the one or more resonator chips with a molding material. In one example, the molding material is an epoxy. In one example, a molding process is used to cover the second passivation layer and the one or more resonator chips with the molding material. In one example, the molding process includes transfer molding or compression molding.[0082] In block 1960, flip the integrated circuit (IC), coat the wafer layer with a third passivation layer and place a second RDL above the third passivation layer to form a plurality of inductors. In one example, a lithographic process and/or a plating process are used to place the second RDL above the third passivation layer. In one example, the plurality of inductors is formed from a combination of the first RDL, the vertical conductive pillars and the second RDL as part of a wafer packaging process.[0083] In block 1970, coat a fourth passivation layer above the second RDL and create an interconnection layer above the fourth passivation layer. In one example, the interconnection layer includes package pads or drop balls formed using lithographic and plating processes. 
In one example, the interconnection layer is created by adding one or more conductive pads and/or solder balls.[0084] In block 1980, dice the integrated circuit (IC) to obtain one or more individual bandpass filters.[0085] In one aspect, the present disclosure relates to a combination of bulk acoustic wave (BAW) resonators and 3-dimensional (3-D) inductors to provide a bandpass filter with both a wideband passband and a sharp rolloff. The 3-D inductors are implemented using a low-loss substrate, for example a glass wafer, to make high-Q inductors through a metal plating process. The 3-D inductors are integrated with a plurality of BAW resonators to form the bandpass filter. In one example, the 3-D inductors may be made on a glass wafer either through a through-mold-via (TMV) process or a through-glass-via (TGV) process.[0086] In one aspect, the present disclosure provides a high-integration and high-performance filter module with improved tolerance relative to low temperature co-fired ceramic (LTCC) technology and laminate solutions for the inductors. And, in one aspect, the present disclosure discloses methods for providing a small form factor and/or a low cost.[0087] In one aspect, one or more of the steps for providing a bandpass filter within an integrated circuit in FIGs. 18 and 19 may be executed by one or more processors, which may include hardware, software, firmware, etc. The one or more processors, for example, may be used to execute software or firmware needed to perform the steps in the flow diagrams of FIGs. 18 and 19. 
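The two process flows of FIGs. 18 and 19 are straightforward ordered sequences of blocks. The following sketch is purely an editorial aid: the block numbers mirror the figures, but the one-line step summaries are paraphrases, and the data structures and helper function are not part of the disclosure.

```python
# Editorial summary of the two disclosed process flows. Block numbers
# mirror FIGs. 18 and 19; each step's wording is a paraphrase.
TMV_FLOW_1800 = [
    (1810, "Plate first RDL on the wafer layer (glass, HRS, or GaAs)"),
    (1820, "Place vertical conductive pillars (e.g., Cu, ~150-200 um)"),
    (1830, "Assemble resonator chips (e.g., BAW) onto the wafer layer"),
    (1840, "Mold (e.g., epoxy) and back-grind to expose the pillars"),
    (1850, "Passivation 1 / second RDL / passivation 2 (forms inductors)"),
    (1860, "Form interconnection layer (solder balls and/or pads)"),
    (1870, "Dice the IC into individual bandpass filters"),
]

TGV_FLOW_1900 = [
    (1910, "Form and metal-fill through glass vias (vertical pillars)"),
    (1920, "Passivation 1; place first RDL over the pillars"),
    (1930, "Passivation 2; expose area for resonator-chip assembly"),
    (1940, "Plate interconnection pads; assemble BAW resonator chips"),
    (1950, "Mold over passivation 2 and the resonator chips"),
    (1960, "Flip IC; passivation 3; second RDL completes the inductors"),
    (1970, "Passivation 4; create interconnection layer (pads/balls)"),
    (1980, "Dice the IC into individual bandpass filters"),
]

def block_order(flow):
    """Return the block numbers of a flow in process order."""
    return [block for block, _ in flow]

# Both flows proceed in strictly increasing block order and end with dicing.
assert block_order(TMV_FLOW_1800) == sorted(block_order(TMV_FLOW_1800))
assert TMV_FLOW_1800[-1][0] == 1870 and TGV_FLOW_1900[-1][0] == 1980
```

As the summary makes visible, the main structural difference is where the vertical conductors come from: plated pillars exposed through the mold (TMV) versus metal-filled vias through the glass wafer itself (TGV), with the second RDL completing the 3-D inductor loop in both cases.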
Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.[0088] The software may reside on a computer-readable medium. The computer-readable medium may be a non-transitory computer-readable medium. A non-transitory computer-readable medium includes, by way of example, a magnetic storage device (e.g., hard disk, floppy disk, magnetic strip), an optical disk (e.g., a compact disc (CD) or a digital versatile disc (DVD)), a smart card, a flash memory device (e.g., a card, a stick, or a key drive), a random access memory (RAM), a read only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a register, a removable disk, and any other suitable medium for storing software and/or instructions that may be accessed and read by a computer. The computer-readable medium may also include, by way of example, a carrier wave, a transmission line, and any other suitable medium for transmitting software and/or instructions that may be accessed and read by a computer. The computer-readable medium may reside in the processing system, external to the processing system, or distributed across multiple entities including the processing system. The computer-readable medium may be embodied in a computer program product. By way of example, a computer program product may include a computer-readable medium in packaging materials. The computer-readable medium may include software or firmware for providing a bandpass filter within an integrated circuit. 
Those skilled in the art will recognize how best to implement the described functionality presented throughout this disclosure depending on the particular application and the overall design constraints imposed on the overall system.[0089] Any circuitry included in the processor(s) is merely provided as an example, and other means for carrying out the described functions may be included within various aspects of the present disclosure, including but not limited to the instructions stored in the computer-readable medium, or any other suitable apparatus or means described herein, and utilizing, for example, the processes and/or algorithms described herein in relation to the example flow diagram.[0090] Within the present disclosure, the word “exemplary” is used to mean “serving as an example, instance, or illustration.” Any implementation or aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term “aspects” does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term “coupled” is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other. For instance, a first die may be coupled to a second die in a package even though the first die is never directly physically in contact with the second die. 
The terms “circuit” and “circuitry” are used broadly, and are intended to include both hardware implementations of electrical devices and conductors that, when connected and configured, enable the performance of the functions described in the present disclosure, without limitation as to the type of electronic circuits, as well as software implementations of information and instructions that, when executed by a processor, enable the performance of the functions described in the present disclosure.[0091] One or more of the components, steps, features and/or functions illustrated in the figures may be rearranged and/or combined into a single component, step, feature or function, or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the novel features disclosed herein. The apparatus, devices, and/or components illustrated in the figures may be configured to perform one or more of the methods, features, or steps described herein. The novel algorithms described herein may also be efficiently implemented in software and/or embedded in hardware.[0092] It is to be understood that the specific order or hierarchy of steps in the methods disclosed is an illustration of exemplary processes. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the methods may be rearranged. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented unless specifically recited therein.[0093] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. 
Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a; b; c; a and b; a and c; b and c; and a, b and c. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. §112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”.
An accelerometer or other motion sensor is used to control power to an entire processor system, such that the processor does not need to be powered to process motion signals for initial power-on control. After the processor system is powered on, the processor system may receive and process motion signals as normal, including, for example, performing various power-control functions such as power-down of certain components or of the entire system upon detection of a lack of motion for a predetermined amount of time.
What is claimed is: 1. An electronic device comprising: a power source; an inertial sensor coupled to the power source via a non-switched connection; and a master power switch coupled to the power source and to the inertial sensor, wherein the master power switch is actuated by a predetermined signal from the inertial sensor to turn on the device. 2. An electronic device according to claim 1, wherein the inertial sensor comprises a MEMS accelerometer. 3. An electronic device according to claim 1, wherein the predetermined signal indicates a predetermined motion of the device. 4. An electronic device according to claim 3, wherein the predetermined motion includes at least one of movement in a specific direction, movement in a specific pattern, movement with a specific intensity, or movement of a specific duration. 5. An electronic device according to claim 1, further comprising a processor system coupled to the master power switch and to the inertial sensor, wherein the processor system is configured to process signals from the inertial sensor after the device is turned on. 6. An electronic device according to claim 5, wherein the processor system is configured to control power to at least one component of the device based on the signals from the inertial sensor. 7. An electronic device according to claim 1, wherein the inertial sensor includes a circuit to cycle the inertial sensor on and off to provide additional power savings. 8. An electronic device according to claim 1, wherein the master power switch is actuated by a second predetermined signal from the inertial sensor to turn off the device. 9. An electronic device according to claim 8, wherein the second predetermined signal indicates a predetermined motion of the device. 10. 
An electronic device according to claim 9, wherein the predetermined motion includes at least one of movement in a specific direction, movement in a specific pattern, movement with a specific intensity, or movement of a specific duration. 11. A master power switch system for an electronic device, the master power switch system comprising: an inertial sensor couplable to a power source via a non-switched connection; and a master power switch couplable to the power source and coupled to the inertial sensor, wherein the master power switch is actuated by a predetermined signal from the inertial sensor to turn on the device. 12. A system according to claim 11, wherein the inertial sensor comprises a MEMS accelerometer. 13. A system according to claim 11, wherein the predetermined signal indicates a predetermined motion of the device. 14. A system according to claim 13, wherein the predetermined motion includes at least one of movement in a specific direction, movement in a specific pattern, movement with a specific intensity, or movement of a specific duration. 15. A system according to claim 11, wherein the inertial sensor includes a circuit to cycle the inertial sensor on and off to provide additional power savings. 16. A system according to claim 11, wherein the master power switch is actuated by a second predetermined signal from the inertial sensor to turn off the device. 17. A system according to claim 16, wherein the second predetermined signal indicates a predetermined motion of the device. 18. A system according to claim 17, wherein the predetermined motion includes at least one of movement in a specific direction, movement in a specific pattern, movement with a specific intensity, or movement of a specific duration.
ACCELEROMETER-CONTROLLED MASTER POWER SWITCH FOR ELECTRONIC DEVICES FIELD OF THE INVENTION The present invention relates generally to power control in electronic devices, and, more particularly, to accelerometer-controlled power switching for electronic devices. BACKGROUND OF THE INVENTION These days, many types of devices (typically but not necessarily battery-powered) include an accelerometer or other inertial sensor (e.g., gyroscope) to perform various motion-based control functions in external devices (e.g., as an input to a video game system) and/or in the handheld device itself (e.g., motion-based orientation of displays, navigation of menus, data entry, etc.). Certain devices include motion-based power control functionality, such as transitioning into a "sleep" mode when the handheld device is stationary for some period of time in order to reduce power consumption or "waking up" upon detection of certain motion. In the "sleep" mode, certain circuitry is disabled or otherwise configured to reduce power consumption, although certain core functionality (such as the processor or the portion of the processor needed for processing the accelerometer signals) generally remains powered and running in order to perform the accelerometer-based power control functions. Generally speaking, the inertial sensor is coupled to a hardware-based processor that is configured to perform power control functions based at least in part on the signals generated by the inertial sensor. The hardware-based processor typically includes and/or controls various types of peripherals, such as a microprocessor core, a wireless transceiver (e.g., cellular, WiFi, etc.), a display (e.g., an LCD screen), various input-output devices, and other types of circuitry, and the processor can selectively manage these peripherals (and sometimes its own circuitry) to manage power consumption. 
For example, the processor may selectively turn off a display or the backlighting of the display, may turn off the wireless transceiver, may turn off some processor circuitry, etc. In some devices, the inertial sensor and a related detection module may be powered on while the processor and its peripherals remain powered off, allowing for a limited amount of sensor-based functionality with substantial power savings. For example, United States Published Patent Application Nos. US2009/0240463, US2009/0293615, and US2009/0240462 (each of which is hereby incorporated herein by reference in its entirety) describe devices in which event capturing is triggered by a signal from a MEMS inertial sensor, such as for saving stored data or storing new data upon detection of a predetermined event. Here, the detection module may start to look for activity automatically at designated times, such as when the device is turned on or at periodic intervals thereafter. SUMMARY OF EXEMPLARY EMBODIMENTS In one embodiment, an electronic device comprises a power source, an inertial sensor coupled to the power source via a non-switched connection, and a master power switch coupled to the power source and to the inertial sensor, wherein the master power switch is actuated by a predetermined signal from the inertial sensor to turn on the device. In another embodiment, a master power switch system for an electronic device includes an inertial sensor couplable to a power source via a non-switched connection and a master power switch couplable to the power source and coupled to the inertial sensor, wherein the master power switch is actuated by a predetermined signal from the inertial sensor to turn on the device. In various alternative embodiments, the inertial sensor may include a MEMS accelerometer, a MEMS gyroscope, or other inertial sensor. 
The predetermined signal indicates a predetermined motion of the device, such as, for example, movement in a specific direction, movement in a specific pattern, movement with a specific intensity, and/or movement of a specific duration. The inertial sensor may include a circuit to cycle the inertial sensor on and off to provide additional power savings. In further embodiments, the master power switch may be actuated by a second predetermined signal from the inertial sensor to turn off the device. This second predetermined signal indicates a predetermined motion of the device, such as, for example, movement in a specific direction, movement in a specific pattern, movement with a specific intensity, or movement of a specific duration. The motion used to turn off the device may be the same motion used to turn on the device or may be a different motion. The device may include a processor system coupled to the master power switch and to the inertial sensor, in which case the processor system may be configured to process signals from the inertial sensor after the device is turned on, for example, to control power to at least one component of the device based on the signals from the inertial sensor. Additional embodiments may be disclosed and claimed. BRIEF DESCRIPTION OF THE DRAWINGS The foregoing features and advantages of the invention will be appreciated more fully from the following further description thereof with reference to the accompanying drawings wherein: FIG. 1 is a simplified schematic block diagram of power control circuitry as may be found in the prior art; FIG. 2 is a schematic block diagram showing one exemplary embodiment for motion-based device power-on based on the circuitry shown in FIG. 1; FIG. 3 is a schematic block diagram showing another exemplary embodiment for motion-based device power-on control based on the circuitry shown in FIG. 1; and FIG. 
4 is a schematic block diagram of relevant components of an inertial sensor in accordance with the embodiments shown in FIG. 2 or FIG. 3. It should be noted that the foregoing figures and the elements depicted therein are not necessarily drawn to consistent scale or to any scale. Unless the context otherwise suggests, like elements are indicated by like numerals. DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS In embodiments of the present invention, an accelerometer, gyroscope, or other motion sensor (referred to generally as an inertial sensor) is used in conjunction with a power switch as a master power switch to turn on the device and, in some embodiments, also to turn off the device. After the device is powered on, thereby providing power to the processor system, the processor system may receive and process signals from the inertial sensor as normal, including, for example, performing various power-control functions such as power-down of certain components or the entire device upon detection of a lack of motion for a predetermined amount of time. Various exemplary embodiments are described herein with reference to the power control circuitry shown in FIG. 1, although it should be noted that the power control circuitry shown in FIG. 1 is exemplary and is not meant to represent all power control circuitry to which embodiments of the present invention can be applied. Thus, the present invention is not limited to the power control circuitry shown in FIG. 1 or to any specific power control circuitry. FIG. 1 is a simplified schematic block diagram of power control circuitry as may be found in the prior art, allowing system power to be turned on using a power switch but thereafter allowing for processor-based management of system power. Among other things, this power control circuitry includes a power supply 102 (e.g., a battery), a power switch 104, an OR gate 106, a processor system 108, and a power regulator 110. 
Typically, the processor system 108 includes a processor (e.g., a microprocessor) and related components (e.g., memory) and peripherals (e.g., a wireless transceiver, a display, a keyboard, etc.). The power supply 102 is connected to the power switch 104 via connection 117 and to the power regulator 110 via connection 118. When the power switch 104 is operated so as to close the circuit to connection 114, a power-on signal is supplied via the OR gate 106 and connection 115 to the power regulator 110, causing the power regulator 110 to provide power from connection 118 to the processor system 108 via the connection 116. The processor system 108 outputs a signal on connection 112 in order to keep the power regulator 110 on even when the power switch 104 is open. Additional circuitry (not shown) is typically included, e.g., as part of connection 115, to smooth signal-bounce effects caused by operation of the power switch 104 or to effectuate power-on only if the power switch is depressed for a specific amount of time. The power switch 104 may be any of a variety of switches, such as a mechanical switch, an electrical switch, etc. After the processor system 108 is powered on, the processor system 108 then can control power for the system independently of the power switch 104. For example, the processor system 108 may selectively ignore subsequent transitions of the power switch 104 or may detect when the power switch 104 has been pressed for a predetermined length of time (via connection 113) and cause the system to power off by removing the signal from connection 112 (additional circuitry, not shown, may be included to permit power-off when the power switch is still depressed, as is done in various computer and handheld devices). 
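The power-on path just described reduces to a simple OR of two signals: the switch 104 closing the circuit, and the processor's hold output on connection 112. A minimal sketch of that combinational behavior (the function and argument names are ours, not from the disclosure):

```python
def regulator_enabled(switch_closed: bool, processor_hold: bool) -> bool:
    """OR gate 106: the regulator 110 powers the processor system 108
    while either the switch 104 is closed or the processor system holds
    its signal on connection 112 asserted."""
    return switch_closed or processor_hold

# Power-on sequence: the switch closes, the regulator turns on, and the
# processor then asserts its hold signal so the switch may be released.
assert not regulator_enabled(False, False)  # device off
assert regulator_enabled(True, False)       # switch pressed: regulator on
assert regulator_enabled(False, True)       # processor keeps itself powered
```

The last assertion captures why the processor can later power the system off on its own: dropping the hold signal while the switch is open removes the only remaining enable input to the regulator.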
Additionally or alternatively, the processor system 108 may turn off power to the entire system or to various components in the processor system 108 upon detecting that the device has not been operated for a predetermined period of time (e.g., no operation of a keyboard, touchscreen, or other controls). Additionally or alternatively, the processor system 108 may include an inertial sensor and may turn off power to the entire system or turn off or reduce power to various components in the processor system 108 upon detecting a predetermined condition, such as detecting a lack of motion for a predetermined amount of time or detecting a specific "power off" motion. For example, the processor system 108 may turn off or reduce power for peripherals such as a wireless transceiver, a display, or portions of a microprocessor core. Partial shutdowns may be reversed upon detection of predetermined signals from the inertial sensor, which may indicate either general movement of the device (e.g., virtually any motion) or specific movement of the device (e.g., movement in a specific direction, pattern, intensity, or duration). For such selective power-down and power-up functionality, the processor system 108 typically must be powered on at least sufficiently to process the sensor signals, make the power up/down decision, and perform the appropriate control functionality to effectuate the power control function. Therefore, the inertial sensor generally cannot be used to control power-up of the overall processor system 108, since at least a portion of the processor system 108 must be powered and running. In exemplary embodiments of the present invention, the inertial sensor is used in conjunction with the power switch as a master power switch to turn on the device. In this way, when the device is powered off, essentially all circuitry other than the inertial sensor may be powered off while still allowing for motion-based power-on of the device. 
FIG. 2 is a schematic block diagram showing one exemplary embodiment for motion-based device power-on based on the circuitry shown in FIG. 1. Here, the inertial sensor 121 (which may be the inertial sensor from the processor system 108 or may be a separate inertial sensor) is coupled to the power supply 102 via a non-switched connection 119 and is essentially always "on" (although in certain embodiments, the inertial sensor may be cycled on/off in order to provide additional power savings, e.g., using circuitry internal or external to the inertial sensor). The inertial sensor 121 is coupled to the electronic switch 104 via connection 122 and is coupled to the processor system 108 via connection 120 over which it provides sensor signals for traditional processing. The inertial sensor 121 is configured to provide an appropriate output signal on connection 122 for operating the switch 104 when the device is moved in a predetermined fashion. For example, the inertial sensor 121 may be configured to provide the appropriate output signal upon detection of predetermined sensor signals, which may indicate either general movement of the device (e.g., virtually any motion) or specific movement of the device (e.g., movement in a specific direction, pattern, intensity, or duration). Once the switch 104 is operated, the device is powered on and operates as discussed above with reference to FIG. 1, i.e., the processor system 108 is powered on and then may control power for the device. Embodiments optionally may allow for motion-controlled device power-off via the master power switch, for example, by producing a signal on connection 122 to maintain the power switch 104 in an off position for a sufficient period of time for the processor system 108 to shut off the system, or by connecting the power switch 104 such that the device is shut off upon receipt by the power switch 104 of a predetermined signal from the inertial sensor. 
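One way the inertial sensor could qualify a "predetermined motion" of specific intensity and duration is a consecutive-sample threshold test on the acceleration magnitude. The sketch below is a hypothetical illustration of such a detector; the threshold and duration values are ours and are not specified in the disclosure:

```python
def detect_predetermined_motion(samples, threshold=2.0, min_samples=3):
    """Hypothetical detector: report a predetermined motion when the
    acceleration magnitude stays at or above `threshold` for at least
    `min_samples` consecutive samples, i.e., one way to encode both a
    specific intensity and a specific duration."""
    run = 0
    for a in samples:
        run = run + 1 if abs(a) >= threshold else 0
        if run >= min_samples:
            return True  # would assert the wake signal on connection 122
    return False

# A sustained shake qualifies; a brief spike does not.
assert detect_predetermined_motion([0.1, 2.5, 2.6, 2.4, 0.0]) is True
assert detect_predetermined_motion([0.1, 2.5, 0.2, 2.6, 0.1]) is False
```

Direction- or pattern-specific criteria would replace the magnitude test with per-axis or sequence checks, but the sensor-side structure (always-powered detector driving the master power switch) is the same.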
Unlike power-off control in which the device is powered off by the processor upon detecting a lack of motion/use for a predetermined time, here, the device is powered off via the power switch upon detecting a specific movement of the device (e.g., movement in a specific direction, pattern, intensity, or duration). The motion used to turn off the device may be the same motion used to turn on the device or may be a different motion. Thus, in some embodiments, the power switch 104 may control power to the device, e.g., closing the switch 104 turns on the device and opening the switch 104 shuts off the device such that the switch 104 needs to remain closed in order to maintain the device in the powered-on state. Circuitry may be included in the inertial sensor to perform motion-based device power switching such as discussed herein. For example, the inertial sensor may include a circuit that produces a first predetermined signal to power on the device upon detection of a first motion (e.g., a signal that toggles the switch closed and then opened for an embodiment such as FIG. 2, or in some other embodiments a signal that closes the switch and holds it closed) and that produces a second predetermined signal to power off the device upon detection of a second motion (e.g., a signal that holds the switch closed for an embodiment such as FIG. 2, or in some other embodiments a signal that opens the switch). In some embodiments, the master power switch 104 is integrated into the inertial sensor, such that the entire unit can be used in the device as a motion-controlled master power switch. Of course, additional functionality may be integrated into such a device, e.g., a power regulator, a microprocessor or microcontroller that is powered on by the built-in acceleration-controlled power switch, etc. FIG. 3 is a schematic block diagram showing another exemplary embodiment for motion-based device power-on control based on the circuitry shown in FIG. 1. 
Here, the switch 104 is integrated into the inertial sensor 123. As with the circuitry in FIG. 2, the inertial sensor 123 is always provided with power via connection 117/119. The inertial sensor 123 is configured to provide an appropriate output signal on connection 122 for operating the switch 104 when the device is moved in a predetermined fashion, for example, as discussed above for reversing a partial shut-down. Once the switch is operated, the device operates as discussed above with reference to FIG. 1, i.e., the processor system 108 can then process sensor signals from connection 120 including motion-based power control. FIG. 4 is a schematic block diagram of relevant components of inertial sensor 121 for producing the output signal on connection 122, in accordance with the embodiments shown in FIG. 2 or FIG. 3. The inertial sensor 121 includes a sensor 402, such as, for example, sensing fingers that are electrostatically coupled with a movable proof mass. The output of the sensor 402 is amplified by amplifier 404 and then passed through a high-pass filter 406 and NOT gate 408, such that the output signal on connection 122 is triggered (in this case, a transition from high to low) if and when the magnitude of the motion sensor output meets or exceeds a predetermined threshold. Embodiments of the present invention can include motion-based device power-on functionality in virtually any type of device and for a wide variety of reasons. For example, motion-based device power-on can be used simply to turn on a device or to turn on a device for a specific purpose, e.g., to activate an alarm, to turn on a security camera, etc. 
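The FIG. 4 signal chain (sensor 402, amplifier 404, high-pass filter 406, NOT gate 408) can be modeled per sample as amplification, a one-pole high-pass filter, and an active-low threshold comparison. The gain, filter coefficient, and threshold below are illustrative values chosen for the sketch, not values from the disclosure:

```python
def sensor_output(raw, gain=10.0, alpha=0.9, threshold=1.0):
    """Sketch of the FIG. 4 chain: amplify each raw sample, apply a
    simple one-pole (RC-style) high-pass filter, then compare against a
    threshold and invert (NOT gate 408). Returns the logic level on
    connection 122 per sample: 1 (idle high) or 0 (triggered low)."""
    out, prev_in, prev_hp = [], 0.0, 0.0
    for x in raw:
        amplified = gain * x
        hp = alpha * (prev_hp + amplified - prev_in)  # high-pass filter 406
        prev_in, prev_hp = amplified, hp
        out.append(0 if abs(hp) >= threshold else 1)  # NOT gate: high-to-low
    return out

levels = sensor_output([0.0, 0.0, 0.5, 0.0])
assert levels[0] == 1 and 0 in levels  # idles high, transitions low on motion
```

With these values the output idles high and transitions low for the sample in which the filtered magnitude meets the threshold, matching the high-to-low trigger described for connection 122; the high-pass stage keeps a static offset (e.g., gravity on one axis) from holding the trigger asserted.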
Motion-based device power-on can be used to detect virtually any type of movement such as movement caused by a user (e.g., picking up or moving the device), movement caused by natural phenomena (e.g., earthquake or tsunami), movement of a vehicle (e.g., detecting theft/operation of a vehicle), movement caused by a security breach (e.g., detecting the opening/closing of a door or locking/unlocking of a lock), etc. Thus, certain embodiments may include low-G accelerometers (e.g., accelerometers configured to detect acceleration from around 1.2g to 15g), while other embodiments may include high-G accelerometers (e.g., accelerometers configured to detect acceleration from around 30g to 500g). Embodiments are particularly useful for devices that are powered by a battery or other limited energy source, where the potential power savings provided by embodiments of the present invention may allow for increased battery life.

It should be noted that arrows may be used in drawings to represent communication, transfer, or other activity involving two or more entities. Double-ended arrows generally indicate that activity may occur in both directions (e.g., a command/request in one direction with a corresponding reply back in the other direction, or peer-to-peer communications initiated by either entity), although in some situations, activity may not necessarily occur in both directions. Single-ended arrows generally indicate activity exclusively or predominantly in one direction, although it should be noted that, in certain situations, such directional activity actually may involve activities in both directions (e.g., a message from a sender to a receiver and an acknowledgement back from the receiver to the sender, or establishment of a connection prior to a transfer and termination of the connection following the transfer). Thus, the type of arrow used in a particular drawing to represent a particular activity is exemplary and should not be seen as limiting.
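The motion-based power switching described above (a first recognized motion powers the device on, a second recognized motion powers it off) can be sketched as a small state machine. This is a behavioral model only; the motion names are hypothetical placeholders, since the patent leaves the specific motions open.

```python
class MotionPowerSwitch:
    """Behavioral sketch of a motion-controlled master power switch.

    A first predetermined motion closes the switch (power on) and a
    second predetermined motion opens it (power off). The on/off motion
    names here are illustrative, not from the patent.
    """

    def __init__(self, on_motion="shake", off_motion="double_tap"):
        self.on_motion = on_motion
        self.off_motion = off_motion
        self.powered = False  # switch open: device unpowered

    def sense(self, motion):
        # First predetermined motion closes the switch (device powers on).
        if not self.powered and motion == self.on_motion:
            self.powered = True
        # Second predetermined motion opens the switch (device powers off).
        elif self.powered and motion == self.off_motion:
            self.powered = False
        return self.powered


sw = MotionPowerSwitch()
sw.sense("shake")       # on-motion: device powers on
sw.sense("tilt")        # unrecognized motion: state unchanged
print(sw.powered)       # → True
sw.sense("double_tap")  # off-motion: device powers off
print(sw.powered)       # → False
```

As the text notes, the on-motion and off-motion may be the same or different; the model above simply parameterizes both.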
It should be noted that headings are used above for convenience and are not to be construed as limiting the present invention in any way. The present invention may be embodied in other specific forms without departing from the true scope of the invention, and numerous variations and modifications will be apparent to those skilled in the art based on the teachings herein. Any references to the "invention" are intended to refer to exemplary embodiments of the invention and should not be construed to refer to all embodiments of the invention unless the context otherwise requires. The described embodiments are to be considered in all respects only as illustrative and not restrictive.
A computer system is disclosed with a host bridge that arbitrates access to a system resource from a CPU via a host bus and from a set of bus agents via a peripheral bus. Separate sets of priority classes are provided for the CPU and for the bus agents, and programmable timers are included to tune system resource allocation between the host and peripheral busses.
What is claimed is: 1. A computer readable medium that provides instructions, which when executed on a processor, cause the processor to perform operations comprising:defining a first priority scheme for requests to access a system resource and a peripheral bus via a host bus and a second priority scheme for requests to access the system resource via the peripheral bus; arbitrating between a host bus request and a peripheral bus request to designate a first priority request and a second priority request based upon the first priority scheme and the second priority scheme; and providing access to the system resource for the first priority request while processing any one of the second priority request, a host bus request to access the peripheral bus, and a transaction between at least two of the bus agents. 2. The computer readable medium of claim 1, wherein the instructions further comprise preempting the CPU by asserting an AHOLD signal on the host bus while asserting a grant signal to a requesting bus agent on the peripheral bus and by asserting a BOFF signal on the host bus in response to a transaction on the host bus that conflicts with a transaction from the requesting bus agent.3. The computer readable medium of claim 1, wherein the first priority scheme for host bus requests comprises a CPU high state and a CPU low state and wherein providing a separate set of priority classes includes granting priority to a host bus request while in the CPU high state and granting priority to a peripheral bus request while in the CPU low state.4. The computer readable medium of claim 3, wherein the instructions further comprise granting priority to the host bus request if no requests from the peripheral bus are active while in the CPU low state.5. The computer readable medium of claim 3, wherein a programmable latency timer determines an amount of time that the CPU stays in the CPU high state.6. 
The computer readable medium of claim 3, wherein a programmable watchdog timer indicates an inactivity time for the CPU such that the arbiter moves the CPU to the CPU low state if the watchdog timer expires.7. The computer readable medium of claim 1, wherein the instructions further comprise draining a buffer for posting data for transfer from the CPU to one of the bus agents before granting access to the system resource via the peripheral bus.
RELATED APPLICATIONS
This Application is a continuation of U.S. patent application Ser. No. 08/924,209, filed on Sep. 5, 1997, now U.S. Pat. No. 6,212,589, which is a continuation of U.S. patent application Ser. No. 08/379,157, filed Jan. 27, 1995, now abandoned.

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention pertains to the field of computer systems. More particularly, this invention relates to a system resource arbitration mechanism in a host bridge.

2. Background
Prior computer systems commonly include a central processing unit (CPU) that communicates with various computer system elements via a host bus. Prior computer systems may also include a peripheral bus that enables communication among a variety of peripheral components. Such a computer system typically includes a host bridge that enables communication between the host bus and the peripheral bus. Such a host bridge typically enables the CPU to access bus agents coupled to the peripheral bus and may enable the bus agents coupled to the peripheral bus to access system resources such as a main memory for the computer system.

Such a computer system typically implements an arbitration mechanism that coordinates accesses to system resources from the host bus and the peripheral bus. For example, such an arbitration mechanism is required to coordinate between main memory accesses by the CPU and main memory accesses by the various bus agents coupled to the peripheral bus. In addition, such an arbitration mechanism typically coordinates between accesses that originate with the CPU and that are targeted for a bus agent on the peripheral bus and accesses that originate on the peripheral bus that are targeted either for a system resource or another bus agent coupled to the peripheral bus.

One type of prior computer system implements a relatively simple arbitration mechanism that employs a set of hold/hold acknowledge bus control signals coupled to the CPU.
Such a simple arbitration mechanism asserts the hold signal to the CPU whenever access to system resources is required by one of the bus agents coupled to the peripheral bus. The CPU usually responds to the hold signal from the arbitration mechanism by returning the hold acknowledge signal after completing activity underway on the host bus and any required data coherency transactions.

Such a hold/hold acknowledge implementation provides a relatively low cost arbitration mechanism for a computer system. Unfortunately, such simple hold/hold acknowledge arbitration mechanisms severely limit the performance of the computer system. For example, such arbitration mechanisms usually do not allow concurrent bus transactions over the host bus and the peripheral bus. In addition, such arbitration mechanisms usually do not allow communication between bus agents coupled to the peripheral bus while the CPU is accessing a system resource such as the main memory. Moreover, such a hold/hold acknowledge arbitration mechanism typically requires a long latency between the assertion of the hold signal by the arbitration mechanism and the hold acknowledge response by the CPU. Such long latencies decrease the overall bandwidth available for data transfer in such a system.

Other prior computer systems may implement relatively complex arbitration mechanisms. For example, one such computer system employs an arbitration hold/back-off signaling protocol to the CPU on the host bus that allows full concurrent operation between the host bus and the peripheral bus. Such an arbitration hold/back-off signaling protocol typically decreases the latency required for the arbitration mechanism to gain control over the host bus. Unfortunately, such an arbitration mechanism usually requires a relatively complex set of arbiter logic in order to ensure proper data flow and data coherency in the system.
Such complex arbiter logic typically increases the overall cost of such a computer system.

SUMMARY OF THE INVENTION
One object of the present invention is to provide a host bridge with an arbiter that enables a CPU to access main memory while the host bridge completes data transfer posted by the CPU for transfer over the peripheral bus.

Another object of the present invention is to enable a CPU-to-main-memory access to complete in parallel with the start of a main memory access that originates on the peripheral bus.

Another object of the present invention is to enable concurrency between CPU-to-main-memory accesses and communication transactions on the peripheral bus between peripheral bus agent peers.

These and other objects are provided by a computer system that includes a system resource and a host bridge that enables access to the system resource from a CPU via a host bus and from a set of bus agents via a peripheral bus. The host bridge provides an arbiter that implements a separate set of priority classes for the CPU and for the bus agents on the peripheral bus to coordinate access to the system resource. For one embodiment, the priority classes for the CPU include a CPU high state and a CPU low state. The arbiter grants priority to the CPU while in the CPU high state and grants access to the separately prioritized bus agents on the peripheral bus while in the CPU low state.
The host bridge includes a programmable latency timer that determines an amount of time that the CPU stays in the CPU high state and a programmable watchdog timer that indicates an inactivity time for the CPU for moving the CPU to the CPU low state.

Other features and advantages of the present invention will be apparent from the accompanying drawings and from the detailed description that follows below.

BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:

FIG. 1 illustrates a computer system for one embodiment which comprises a central processing unit (CPU), a host bridge circuit, a main memory, and a set of peripheral bus agents coupled to a peripheral bus;
FIG. 2 illustrates the host bridge circuit for one embodiment which includes an arbiter that coordinates system resource access requests that originate on the host and peripheral busses;
FIG. 3 illustrates the priority class implemented by the arbiter for access transactions to the main memory that originate from the CPU;
FIG. 4 illustrates the separate priority class for the peripheral bus agents coupled to the peripheral bus;
FIG. 5 illustrates arbitration by the arbiter in response to a request for the main memory or the peripheral bus while the CPU 12 is in the CPU high priority state;
FIG. 6 illustrates a bus preemption mechanism for the host bus that is employed by the host bridge circuit;
FIG. 7 illustrates the management of the host to peripheral buffer in the host bridge circuit during accesses to the main memory that originate via the peripheral bus.

DETAILED DESCRIPTION
FIG. 1 illustrates a computer system 10 for one embodiment. The computer system 10 comprises a central processing unit (CPU) 12, a host bridge circuit 14, a main memory 16, and a set of peripheral bus agents 20-26.
The host bridge circuit 14 enables communication between the CPU 12 coupled to a host bus 30 and the peripheral bus agents 20-26 each coupled to a peripheral bus 32. The peripheral bus agents 20-26 may be referred to as peripheral bus peers.

The host bridge circuit 14 functions as a memory controller for the main memory 16. The host bridge circuit 14 enables read and write access to the main memory 16 from the host bus 30 and the peripheral bus 32. The host bridge circuit 14 coordinates accesses to the main memory 16 that originate on the peripheral bus 32 with accesses to the main memory 16 that originate on the host bus 30.

In addition, the host bridge circuit 14 functions as an arbiter for resources of the computer system 10 including the main memory 16. For example, the host bridge circuit 14 arbitrates between requests from the CPU 12 and the peripheral bus agents 20-26 for access to the main memory 16 via a memory path 34.

The host bridge circuit 14 also functions as a bus bridge between the host bus 30 and the peripheral bus 32. The host bridge circuit 14 enables transactions originating on the host bus 30 to propagate to the peripheral bus 32. The host bridge circuit 14 also enables transactions originating on the peripheral bus 32 to propagate to the host bus 30.

FIG. 2 illustrates the host bridge circuit 14 for one embodiment. The host bridge circuit 14 includes a host bus interface 44 that enables communication over the host bus 30 and a peripheral bus interface 46 that enables communication over the peripheral bus 32. The host bridge circuit 14 further comprises an arbiter 42 that arbitrates between requests for access to system resources such as the main memory 16. These requests may originate from agents coupled to the host bus 30, such as the CPU 12, or agents coupled to the peripheral bus 32, such as the peripheral bus agents 20-26.

The host bus interface 44 senses data transfer sequences such as read and write transactions that initiate on the host bus 30.
The host bus interface 44 notifies the arbiter 42 of data transfer sequences that originate on the host bus 30 and that are targeted for the main memory 16. The arbiter 42 then arbitrates such requests according to a priority of the CPU 12 as indicated by previous transactions to the main memory 16 from the peripheral bus 32, as well as timers maintained in a set of resource allocation timers 40. The resource allocation timers 40 are programmable by the CPU 12 via the host bus 30 and allow the CPU 12 to tune the relative priorities for system resource allocation between the CPU 12 and the peripheral bus agents 20-26.

The host bus interface 44 transfers write data received over the host bus and targeted for the main memory 16 into a DRAM write buffer 48 through a multiplexer 54. In addition, the host bus interface 44 buffers or "posts" write data targeted for a bus agent coupled to the peripheral bus 32 in a host to peripheral buffer 52.

The peripheral bus interface 46 senses data transfer sequences such as read and write transactions that occur on the peripheral bus 32 and that originate from one of the peripheral bus agents 20-26. The peripheral bus interface 46 notifies the arbiter 42 of any data transfer sequences targeted for the main memory 16. The arbiter 42 arbitrates such requests based upon an independent rotating priority scheme for the peripheral bus agents 20-26 and the relative priority of the CPU 12. If a write transaction is granted by the arbiter 42, the peripheral bus interface 46 posts the write data received over the peripheral bus 32 into a peripheral write buffer 50. The data from the peripheral write buffer 50 is transferred into the DRAM write buffer 48 through the multiplexer 54 for transfer to the main memory 16 over the memory path 34.

FIG. 3 illustrates the priority mechanism employed by the arbiter 42 for access transactions to the main memory 16 and the peripheral bus 32 that originate from the CPU 12.
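Before turning to the priority mechanism, the posting-buffer data path just described can be sketched behaviorally. Buffer names follow the description (DRAM write buffer 48, peripheral write buffer 50, host to peripheral buffer 52); the FIFO behavior is a simplification of the actual hardware.

```python
from collections import deque

class HostBridgeBuffers:
    """Behavioral sketch of the FIG. 2 posting-buffer data path."""

    def __init__(self):
        self.dram_write = deque()      # DRAM write buffer 48
        self.host_to_periph = deque()  # host to peripheral buffer 52
        self.periph_write = deque()    # peripheral write buffer 50

    def host_write(self, data, target):
        # Host writes to memory feed the DRAM write buffer through the mux;
        # writes targeting a peripheral agent are posted separately.
        if target == "memory":
            self.dram_write.append(data)
        else:
            self.host_to_periph.append(data)

    def peripheral_write(self, data):
        # Granted peripheral writes are posted, then moved through the
        # multiplexer into the DRAM write buffer for transfer to memory.
        self.periph_write.append(data)
        self.dram_write.append(self.periph_write.popleft())


b = HostBridgeBuffers()
b.host_write("A", "memory")
b.host_write("B", "peripheral")
b.peripheral_write("C")
print(list(b.dram_write), list(b.host_to_periph))  # → ['A', 'C'] ['B']
```

The separate posting buffers are what allow the bridge to accept a write on one bus while the other bus is still busy, which is the concurrency the summary emphasizes.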
The arbiter 42 provides a separate priority scheme for the CPU 12. For one embodiment, the CPU 12 resides in either a CPU high priority state or a CPU low priority state. The CPU 12 wins arbitration over the peripheral bus agents 20-26 while in the CPU high priority state.

Upon a reset of the computer system 10, the CPU 12 assumes the CPU high priority state. In the CPU high priority state, the arbiter 42 grants priority access to the main memory 16 and the peripheral bus 32 for any accesses that originate from the CPU 12 via the host bus 30. The CPU 12 stays in the CPU high priority state for a time interval determined by a latency timer and a CPU watchdog timer contained in the resource allocation timers 40.

After the CPU 12 transitions to the CPU low priority state, the arbiter 42 grants priority access to the main memory 16 and the peripheral bus 32 to accesses that originate from one of the peripheral bus agents 20-26 over the peripheral bus 32. If no peripheral requests are present while the CPU 12 is in the CPU low priority state, the arbiter 42 grants priority access to the system resources to the CPU 12. The CPU 12 remains in the CPU low priority state until the arbiter 42 grants three accesses to the main memory 16 and the peripheral bus 32 from the peripheral bus 32. Three such grants to bus agents coupled to the peripheral bus 32 cause the CPU 12 to enter the CPU high priority state for the interval determined by the resource allocation timers 40.

FIG. 4 illustrates the priority scheme for the peripheral bus agents 20-26. The arbiter 42 provides a separate priority scheme for the bus agents coupled to the peripheral bus 32. The peripheral bus agents 20-26 correspond to bus requests REQ0-REQ3. The arbiter 42 maintains a rotating priority scheme for the peripheral bus agents 20-26. Each request from the peripheral bus agents 20-26 is arbitrated according to the CPU high or CPU low priority state of the CPU 12 at the time of the request.

FIG. 5 illustrates arbitration by the arbiter 42 in response to a request for the main memory 16 or the peripheral bus 32 via the peripheral bus 32 while the CPU 12 is in the CPU high priority state. At block 100, the CPU 12 assumes the high priority state due to either a system reset or three consecutive grants by the arbiter 42 to bus agents coupled to the peripheral bus 32.

At block 102, the arbiter 42 is notified of a request from a bus agent coupled to the peripheral bus 32. Thereafter, at decision block 104 the arbiter 42 determines whether the latency timer contained in the resource allocation timers 40 has expired. If the latency timer has expired at decision block 104, then control proceeds to block 108. At block 108, the arbiter 42 causes the peripheral bus interface 46 to assert a grant to the requesting peripheral bus agent coupled to the peripheral bus 32. Thereafter, at block 110 the arbiter 42 sets the CPU 12 to the CPU low priority state.

If the latency timer has not expired at decision block 104, then control proceeds to block 106. At block 106, the arbiter 42 determines whether the CPU watchdog timer of the resource allocation timers 40 has expired. The CPU watchdog timer is reset with a predetermined watchdog timer value whenever a request for a system resource is received over the host bus 30. An expired CPU watchdog timer at decision block 106 indicates an idle period for requests from the CPU 12. If the CPU watchdog timer has expired at decision block 106, then control proceeds to block 108 to grant the peripheral bus 32 to the requesting peripheral bus agent and to set the CPU 12 to the CPU low priority state at block 110.

FIG. 6 illustrates a bus preemption mechanism for the host bus 30 that is employed by the host bridge circuit 14.
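The CPU priority mechanism of FIGS. 3 through 5 can be sketched as a small state machine. This is a behavioral simplification: the timers are modeled as externally supplied booleans, and the rotating peripheral priority is modeled as round-robin, since the description does not fix the exact rotation order.

```python
class CpuPriorityArbiter:
    """Behavioral sketch of the arbiter priority mechanism of FIGS. 3-5.

    Simplified model: latency/watchdog timer expiry are passed in as
    booleans, and the rotating priority among REQ0-REQ3 is assumed to
    be round-robin.
    """

    def __init__(self):
        self.cpu_high = True       # CPU assumes the high priority state on reset
        self.peripheral_grants = 0
        self.last_granted = -1     # index of the last granted REQ0-REQ3 agent

    def grant_peripheral(self, requests, latency_expired, watchdog_expired):
        """Return the granted agent index, or None if the CPU retains priority."""
        # In the CPU high state, a peripheral grant requires the latency
        # timer to expire or the watchdog timer to indicate an idle CPU.
        if self.cpu_high and not (latency_expired or watchdog_expired):
            return None
        # Rotating (round-robin) priority among the peripheral agents.
        for offset in range(1, len(requests) + 1):
            agent = (self.last_granted + offset) % len(requests)
            if requests[agent]:
                self.last_granted = agent
                self.cpu_high = False          # grant moves the CPU to low priority
                self.peripheral_grants += 1
                if self.peripheral_grants == 3:
                    # Three grants return the CPU to the high priority state.
                    self.cpu_high = True
                    self.peripheral_grants = 0
                return agent
        return None  # no peripheral request pending; the CPU keeps priority


arb = CpuPriorityArbiter()
# Latency timer still running: the CPU retains priority.
print(arb.grant_peripheral([True, False, False, False], False, False))  # → None
# Latency timer expired: REQ0 is granted and the CPU drops to low priority.
print(arb.grant_peripheral([True, False, False, False], True, False))   # → 0
```

The three-grant counter reproduces the rule that three consecutive peripheral grants push the CPU back into the high priority state.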
The arbiter 42 employs the bus preemption mechanism shown to prevent conflicts between concurrent accesses for system resources, such as the main memory 16 or the peripheral bus 32, that originate via the host bus 30 and the peripheral bus 32.

At block 120, the arbiter 42 senses a request from a peripheral bus agent coupled to the peripheral bus 32. The arbiter 42 then waits for the CPU 12 to exit the CPU high priority state and for any pending writes posted in the buffer 52 to drain. Thereafter, at block 122 the arbiter 42 causes the host bus interface 44 to assert the AHOLD signal on the host bus 30 while causing the peripheral bus interface 46 to issue a grant over the peripheral bus 32 to the requesting peripheral bus agent. The AHOLD signal on the host bus 30 causes the CPU 12 to finish the current transaction on the host bus 30 and to relinquish control of the next address bus cycle over the host bus 30.

Thereafter, at decision block 124 the arbiter 42 determines whether a conflicting access to the request granted on the peripheral bus 32 is received via the host bus 30. If a conflicting access via the host bus 30 is received at decision block 124, then control proceeds to block 126. At block 126, the arbiter 42 causes the host bus interface 44 to assert a back-off (BOFF) signal over the host bus 30. The BOFF signal causes the CPU 12 to immediately relinquish control over the host bus 30 and terminate the conflicting access. On the other hand, if a conflicting access via the host bus 30 is not detected, then control proceeds to block 128 to continue the normal processing of the peripheral bus request granted during block 122.

For one embodiment, the peripheral bus 32 conforms to a published peripheral component interface (PCI) standard bus specification. The PCI bus standard provides that each of the peripheral bus agents 20-26 implement a master latency timer initiated by a FRAME control signal on the peripheral bus 32.
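The FIG. 6 preemption sequence can be sketched as follows. Signal names (AHOLD, BOFF) follow the description; the grant signal name and the sequencing as a simple list are simplifications of the actual bus protocol.

```python
def preempt_host_bus(peripheral_request, conflicting_host_access):
    """Sketch of the FIG. 6 preemption sequence: AHOLD is asserted
    alongside the peripheral grant, and BOFF is asserted only if a
    conflicting host-bus access then appears.
    """
    signals = []
    if peripheral_request:
        signals.append("AHOLD")  # CPU finishes its cycle, yields the address bus
        signals.append("GNT")    # grant issued to the requesting peripheral agent
        if conflicting_host_access:
            signals.append("BOFF")  # CPU immediately backs off the conflict
    return signals

print(preempt_host_bus(True, True))   # → ['AHOLD', 'GNT', 'BOFF']
print(preempt_host_bus(True, False))  # → ['AHOLD', 'GNT']
```

The two-level scheme matches the description: AHOLD is the cheap, always-taken step, while the more disruptive BOFF is reserved for actual conflicts.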
The peripheral bus interface 46 deasserts the grant signal on the peripheral bus 32 upon detection of the FRAME signal on the peripheral bus 32 from the requesting peripheral bus agent. Thereafter, the master latency timer in the requesting peripheral bus agent expires and causes the requesting peripheral bus agent to release control of the peripheral bus 32. Thereafter, the arbiter 42 rearbitrates accesses to system resources, including the main memory 16 and the peripheral bus 32, that originate from both the host bus 30 and the peripheral bus 32. Such an early deassertion of the peripheral bus 32 grant by the peripheral bus interface 46 ensures regularly occurring rearbitration cycles for system resources without the need for a specific processor request indication from the CPU 12 to the host bridge circuit 14.

FIG. 7 illustrates the management of the host to peripheral buffer 52 in the host bridge circuit 14 during accesses to the main memory 16 that originate via the peripheral bus 32. At block 130, the arbiter 42 receives a request from a peripheral bus agent coupled to the peripheral bus 32 that targets the main memory 16.

Thereafter, at block 132, the arbiter 42 causes the host bus interface 44 to disable write accesses received over the host bus 30 that are targeted for an agent coupled to the peripheral bus 32. In such a manner, the CPU 12 is prevented from posting more data into the host to peripheral buffer 52 during a buffer drain operation.

At block 134, the arbiter 42 begins draining the host to peripheral buffer 52 to the appropriate target bus agents coupled to the peripheral bus 32 through the peripheral bus interface 46. While the host to peripheral buffer 52 is being drained to the peripheral bus 32, the arbiter 42 causes the host bus interface 44 to allow accesses to the main memory 16 that originate on the host bus 30.

At block 138, the drain of the host to peripheral buffer 52 completes.
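The FIG. 7 drain sequence (disable posting, drain the buffer, re-enable posting) can be sketched behaviorally. This is a simplification: the real hardware overlaps the drain with host-to-memory traffic, which a sequential function cannot show.

```python
from collections import deque

def drain_host_to_peripheral(buffer):
    """Sketch of the FIG. 7 sequence: host-to-peripheral posting is
    disabled, the posted writes are drained in order to the peripheral
    bus, and posting is then re-enabled. Returns the drained entries
    and the final posting-enable state.
    """
    posting_enabled = False      # block new posts during the drain (block 132)
    drained = []
    while buffer:                # drain to the target peripheral agents (block 134)
        drained.append(buffer.popleft())
    posting_enabled = True       # re-enable posting from the host bus (block 140)
    return drained, posting_enabled

buf = deque(["w0", "w1"])
print(drain_host_to_peripheral(buf))  # → (['w0', 'w1'], True)
```

Draining in posted order preserves the write ordering the CPU expects toward the peripheral agents.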
Thereafter, at block 140, the arbiter 42 reenables peripheral bus accesses from the host bus 30 by allowing new data to be posted to the host to peripheral buffer 52 from the host bus 30.

In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
A circuit (200) including an output node (OUT) and a cross-coupled pair of semiconductor devices (204, 214) configured to provide, at the output node, an output signal in a second voltage domain (VDDH) based on an input signal in a first voltage domain (VDDL) is described herein. The circuit further includes a pull-up assist circuit (230) coupled to the output node; and a look-ahead circuit (220) coupled to the pull-up assist circuit, wherein the look-ahead circuit is configured to cause the pull-up assist circuit to assist in increasing a voltage level at the output node when there is a decrease in a voltage level of an inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain.
CLAIMS
1. A circuit, comprising:an output node;a cross-coupled pair of semiconductor devices configured to provide, at the output node, an output signal in a second voltage domain based on an input signal in a first voltage domain;a pull-up assist circuit coupled to the output node; anda look-ahead circuit coupled to the pull-up assist circuit,wherein the look-ahead circuit is configured to cause the pull-up assist circuit to assist in increasing a voltage level at the output node when there is a decrease in a voltage level of an inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain.2. The circuit of claim 1, wherein the look-ahead circuit is configured to cause the pull-up assist circuit to store a pre-charge when the inverted output signal is transitioning to the high voltage level of the second voltage domain and enable the pull-up assist circuit to increase a charge at the output node using the pre-charge when the inverted output signal is transitioning to the low voltage level of the second voltage domain.3. The circuit of claim 1, wherein the look-ahead circuit is configured to disable the pull-up assist circuit in a voltage scaling mode, wherein during the voltage scaling mode an operating voltage level of the first voltage domain is lowered.4. The circuit of claim 1, wherein the pull-up assist circuit comprises a first semiconductor device coupled to, and configured to be switched by, the look-ahead circuit to allow accumulation of charge at a pull-up node; and a second semiconductor device coupled to the first semiconductor device to allow the accumulated charge at the pull-up node to be provided to the output node based on the inverted output signal.5. The circuit of claim 4, wherein the first semiconductor device comprises a transistor having a gate, and the look-ahead circuit comprises an output coupled to the gate of the transistor.6. 
The circuit of claim 4, wherein the second semiconductor device comprises a transistor having a drain coupled to the output node.7. The circuit of claim 4, further comprising a pull-down semiconductor device configured to reduce the voltage level of the output signal at the output node, wherein the second semiconductor device of the pull-up assist circuit is sized based on the pull-down semiconductor device.8. The circuit of claim 1, wherein the look-ahead circuit comprises an inverter having an input configured to receive a signal based on the output signal.9. A circuit, comprising:an output node;a cross-coupled pair of semiconductor devices configured to provide, at the output node, an output signal in a second voltage domain based on an input signal in a first voltage domain;pull-up assist means for increasing a voltage level at the output node; and look-ahead means for causing the pull-up assist means to increase the voltage level at the output node when there is a decrease in a voltage level of an inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain.10. The circuit of claim 9, wherein the look-ahead means is configured to cause the pull-up assist means to store a pre-charge when the inverted output signal is transitioning to the high voltage level of the second voltage domain and enable the pull-up assist means to increase a charge at the output node using the pre-charge when the inverted output signal is transitioning to the low voltage level of the second voltage domain.11. The circuit of claim 9, wherein the look-ahead means is configured to disable the pull-up assist means in a voltage scaling mode, wherein during the voltage scaling mode an operating voltage level of the first voltage domain is lowered.12. 
The circuit of claim 9, wherein the pull-up assist means comprises a first semiconductor means coupled to, and configured to be switched by, the look-ahead means to allow accumulation of charge at a pull-up node; and a second semiconductor means coupled to the first semiconductor means to allow the accumulated charge at the pull-up node to be provided to the output node based on the inverted output signal.

13. The circuit of claim 12, wherein the first semiconductor means comprises a transistor having a gate, and the look-ahead means comprises an output coupled to the gate of the transistor.

14. The circuit of claim 12, wherein the second semiconductor means comprises a transistor having a drain coupled to the output node.

15. The circuit of claim 12, further comprising a pull-down semiconductor means configured to reduce the voltage level of the output signal at the output node, wherein the second semiconductor means of the pull-up assist means is sized based on the pull-down semiconductor means.

16. The circuit of claim 9, wherein the look-ahead means comprises an inverter having an input configured to receive a signal based on the output signal.

17. A method, comprising:
in a cross-coupled pair of semiconductor devices configured to provide an output signal in a second voltage domain based on an input signal in a first voltage domain, generating an inverted output signal based on the output signal;
detecting a decrease in a voltage level of the inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain; and
increasing a voltage level at the output node when the decrease of the voltage level of the inverted output signal has been detected.

18. The method of claim 17, further comprising:
storing a pre-charge when a transition of the inverted output signal towards the high voltage level of the second voltage domain is detected.

19. The method of claim 18, further comprising:
increasing a charge at the output node using the pre-charge when the inverted output signal is transitioning to the low voltage level of the second voltage domain.

20. The method of claim 19, wherein increasing the charge at the output node comprises:
switching a first semiconductor device to allow accumulation of the pre-charge at a pull-up node; and
switching a second semiconductor device coupled to the first semiconductor device to allow the pre-charge at the pull-up node to be provided to the output node based on the inverted output signal.

21. A processing system comprising:
a memory circuit configured to operate in a first voltage domain;
a processing circuit configured to operate in a second voltage domain and further configured to access the memory circuit using an address signal; and
a level shifter coupled to the processing circuit and the memory circuit and configured to translate the address signal, the level shifter comprising:
an output node;
a cross-coupled pair of semiconductor devices configured to provide, at the output node, an output signal in the second voltage domain based on the address signal in the first voltage domain;
a pull-up assist circuit coupled to the output node; and
a look-ahead circuit coupled to the pull-up assist circuit, wherein the look-ahead circuit is configured to cause the pull-up assist circuit to assist in increasing a voltage level at the output node when there is a decrease in a voltage level of an inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain.

22. The processing system of claim 21, wherein the look-ahead circuit is configured to cause the pull-up assist circuit to store a pre-charge when the inverted output signal is transitioning to the high voltage level of the second voltage domain and enable the pull-up assist circuit to increase a charge at the output node using the pre-charge when the inverted output signal is transitioning to the low voltage level of the second voltage domain.

23. The processing system of claim 21, wherein the look-ahead circuit is configured to disable the pull-up assist circuit in a voltage scaling mode, wherein during the voltage scaling mode an operating voltage level of the first voltage domain is lowered.
HIGH-SPEED LEVEL SHIFTER

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims priority to and the benefit of Non-Provisional Application No. 15/473,124, filed in the U.S. Patent and Trademark Office on March 29, 2017, the entire content of which is incorporated herein by reference.

BACKGROUND

Field

[0002] Aspects of the present disclosure relate generally to memories, and more particularly, to a high-speed level shifter.

Background

[0003] As semiconductor technology has advanced into the submicron region, the power supply voltage is scaled down in concert with the scaling down of transistor dimensions. For example, microprocessors are now manufactured with transistors that operate in a low-voltage domain with supply voltage levels as low as sub-one volt. These microprocessors typically include designs that use dual power rails, each having a different voltage domain. In these implementations, circuitry in a low-voltage domain may still need to interface with circuitry that operates in a higher-voltage domain. To save power, the circuitry for memory address decoding, which generates addressing signals for memory circuits such as word line select signals, operates in the low-voltage domain. However, the resulting decoded word line select signal must then be level-shifted up into the higher-voltage domain of the memory to drive the selected word line. In these dual-voltage-rail approaches, a level shifter circuit is used to shift voltage levels for memory addressing signals (such as the word line select signal) from an input signal in a first voltage domain with a first voltage level to an output signal in a second, higher-voltage domain with a second voltage level.

[0004] Traditional level shifters, which have an output that falls fast and rises slowly, cause a large timing window for these logic transitions. Large timing windows translate into large setup/hold windows that negatively affect current designs.
For example, uneven output transition times will result in a timing hit wherever level-shifter logic is used. At higher operating speeds, conventional level-shifting approaches introduce too much delay.

[0005] Accordingly, there is a need in memory designs for improved level-shifting speeds, such as for transitions from a low-voltage domain to a high-voltage domain, while still supporting wider voltage level ranges between these domains.

SUMMARY

[0006] The following presents a simplified summary of one or more aspects of the disclosed approach, in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated features of the disclosure, and is intended neither to identify key or critical elements of all aspects of the disclosure nor to delineate the scope of any or all aspects of the disclosure. Its sole purpose is to present some concepts of one or more aspects of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.

[0007] In one aspect, the disclosure provides a circuit including an output node and a cross-coupled pair of semiconductor devices configured to provide, at the output node, an output signal in a second voltage domain based on an input signal in a first voltage domain.
The circuit further includes a pull-up assist circuit coupled to the output node; and a look-ahead circuit coupled to the pull-up assist circuit, wherein the look-ahead circuit is configured to cause the pull-up assist circuit to assist in increasing a voltage level at the output node when there is a decrease in a voltage level of an inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain.

[0008] Another aspect of the disclosure provides a circuit having an output node and a cross-coupled pair of semiconductor devices configured to provide, at the output node, an output signal in a second voltage domain based on an input signal in a first voltage domain. The circuit further includes pull-up assist means for increasing a voltage level at the output node; and look-ahead means for causing the pull-up assist means to increase the voltage level at the output node when there is a decrease in a voltage level of an inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain.

[0009] Yet another aspect of the disclosure provides a method that, in a cross-coupled pair of semiconductor devices configured to provide an output signal in a second voltage domain based on an input signal in a first voltage domain, generates an inverted output signal based on the output signal.
The method also includes detecting a decrease in a voltage level of the inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain; and increasing a voltage level at the output node when the decrease of the voltage level of the inverted output signal has been detected.

[0010] Still yet another aspect of the disclosure provides a processing system including a memory circuit configured to operate in a first voltage domain; a processing circuit configured to operate in a second voltage domain and further configured to access the memory circuit using an address signal; and a level shifter coupled to the processing circuit and the memory circuit and configured to translate the address signal. The level shifter includes an output node and a cross-coupled pair of semiconductor devices configured to provide, at the output node, an output signal in the second voltage domain based on the address signal in the first voltage domain. The level shifter further provides a pull-up assist circuit coupled to the output node; and a look-ahead circuit coupled to the pull-up assist circuit, wherein the look-ahead circuit is configured to cause the pull-up assist circuit to assist in increasing a voltage level at the output node when there is a decrease in a voltage level of an inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain.

[0011] These and other aspects of the disclosure will become more fully understood upon a review of the detailed description, which follows.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] These and other sample aspects of the disclosure will be described in the detailed description that follows, and in the accompanying drawings.

[0013] FIG. 1 is a circuit diagram of a prior art level shifting circuit.

[0014] FIG.
2 is a conceptual diagram of a level shifter configured in accordance with one aspect of a high-speed level shifter described herein.

[0015] FIG. 3 is a block diagram of a level shifter configured in accordance with the level shifter of FIG. 2.

[0016] FIG. 4 is a conceptual diagram of another level shifter configured in accordance with another aspect of the high-speed level shifter described herein.

[0017] FIG. 5 is a block diagram of a level shifter configured in accordance with the level shifter of FIG. 4.

[0018] FIG. 6 is a block diagram illustrating a look-ahead module that may be implemented in the level shifter of FIG. 2.

[0019] FIG. 7 is a block diagram illustrating another look-ahead module that may be implemented in the level shifter of FIG. 2.

[0020] FIG. 8 is a block diagram illustrating yet another look-ahead module that may be implemented in the level shifter of FIG. 2.

[0021] FIG. 9 is a block diagram illustrating still yet another look-ahead module that may be implemented in the level shifter of FIG. 2.

[0022] FIG. 10 is a flow diagram illustrating a level-shifting operation.

[0023] FIG. 11 is a block diagram illustrating an example of a memory in which level shifters configured in accordance with various aspects of the high-speed level shifter disclosed herein may be used.

[0024] In accordance with common practice, some of the drawings may be simplified for clarity. Thus, the drawings may not depict all of the components of a given apparatus (e.g., device) or method. Finally, like reference numerals may be used to denote like features throughout the specification and figures.

DETAILED DESCRIPTION

[0025] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced.
The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

[0026] A conventional level shifter 100 as shown in FIG. 1 may perform voltage level shifting, in a dual voltage rail system, for a word line select signal from an input signal (INPUT) in a first voltage domain with a first voltage level (VDDL) to an output signal (OUTPUT) in a second, higher voltage domain with a second voltage level (VDDH). The input signal drives a gate of an NMOS transistor MN1 102. If the input signal is low (ground or VSS), the NMOS transistor MN1 102 switches off, allowing a node N1, effectively where an inverted output signal (OUTPUT_b) may be found, to float with respect to ground. An inverted input signal (INPUT_b) drives a gate of an NMOS transistor MN2 104. The value of the inverted input signal should be VDDL when the input signal is low, which switches on the NMOS transistor MN2 104 to pull a node N2 to ground. The output signal is taken from the node N2.

[0027] Continuing to refer to FIG. 1, the node N2 is coupled to a gate of a PMOS transistor MP1 106 that has its drain coupled to the node N1. The PMOS transistor MP1 106 is cross-coupled with respect to a PMOS transistor MP2 108. The input signal also drives a gate of a PMOS transistor MP3 110 in series with the transistor MP1 106. When the input signal is low, both the PMOS transistor MP3 110 and the PMOS transistor MP1 106 will be on, which charges the node N1 to the second higher voltage domain power supply level, the second voltage level VDDH. The node N1 drives the gate of the PMOS transistor MP2 108 coupled to the node N2. The PMOS transistor MP2 108 will thus be off when the input signal is low.
Another PMOS transistor MP4 112 that has its gate driven by the inverted input signal is in series with the PMOS transistor MP3 110.

[0028] In response to the input signal switching high to VDDL, the NMOS transistor MN1 102 will switch on and the NMOS transistor MN2 104 will switch off. The output node N2, which had been discharged while the input signal was low, must then float until the PMOS transistor MP2 108 can be switched on. In turn, the PMOS transistor MP2 108 cannot switch on until the NMOS transistor MN1 102 can discharge the node N1. However, the PMOS transistor MP1 106 is still momentarily on and attempting to keep the node N1 charged, and thus fights with the NMOS transistor MN1 102 discharging the node N1. The PMOS transistor MP3 110 is only weakly on because VDDL is effectively a weak zero with regard to VDDH. The PMOS transistor MP3 110 thus assists the NMOS transistor MN1 102 in discharging the node N1 by restricting the flow of charge to the PMOS transistor MP1 106. Once the node N1 is discharged, the PMOS transistor MP2 108 will switch on. Since the PMOS transistor MP4 112 will already be on due to the inverted input signal being driven low, the switching on of the PMOS transistor MP2 108 will charge the output signal to VDDH. An analogous struggle occurs between the NMOS transistor MN2 104 and the PMOS transistor MP2 108 when the inverted input signal is driven to VDDL in response to the input signal transitioning low.

[0029] The fight between the NMOS and PMOS transistors in the level shifter 100, which adversely affects memory timing due to the delay incurred during the NMOS/PMOS struggle, is exacerbated where the sizes of pull-down transistors, such as the NMOS transistors MN1 and MN2, are skewed to allow the level shifter to perform output transitions at more extreme voltage level differentials (e.g., low input, high output).
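For reference, the steady-state behavior of the conventional level shifter 100 described above can be summarized in a short behavioral sketch. This is an illustrative simplification only, not part of the disclosure: it models the transistors as ideal switches and ignores the transient NMOS/PMOS contention just described; the function name is invented for this sketch.

```python
def level_shifter_100(inp: bool) -> dict:
    """Ideal steady-state node values for the FIG. 1 level shifter.

    inp=True models INPUT at VDDL; returned booleans model VDDH vs. ground.
    Transient contention between MN1/MP1 (and MN2/MP2) is not modeled.
    """
    inp_b = not inp  # INPUT_b is the complement of INPUT
    # MN1 discharges N1 when INPUT is high; MP3/MP1 charge N1 to VDDH when low.
    n1 = not inp
    # MN2 discharges N2 when INPUT_b is high; MP4/MP2 charge N2 when INPUT is high.
    n2 = not inp_b
    return {"OUTPUT": n2, "OUTPUT_b": n1}

# INPUT low -> OUTPUT low; INPUT high -> OUTPUT at VDDH.
assert level_shifter_100(False) == {"OUTPUT": False, "OUTPUT_b": True}
assert level_shifter_100(True) == {"OUTPUT": True, "OUTPUT_b": False}
```

The sketch makes explicit that the output node N2 simply follows the input in steady state; the problem the disclosure addresses is purely the transient delay, which this idealized model cannot capture.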
In other words, this differential is desirable so that the level shifter 100 may operate for larger dual-rail voltage ranges, but the skewing affects the timing negatively. In the level shifter 100, the P/N ratio is 1:1:6, as indicated in the brackets next to each transistor. Specifically, with respect to the timing, the level shifter 100 will have a slower rising speed but a higher falling speed at the output.

[0030] To eliminate the delay in conventional level-shifting, a level shifter is provided that includes look-ahead and pull-up assist features that increase the speed of the level shifting for rising output signals (i.e., where the output rises from ground to the voltage level of the higher voltage domain). The look-ahead feature operates as a predictive function to ensure the pull-up assist feature is ready to help in increasing the rate at which an output signal transitions from low (e.g., 0) to high (e.g., VDDH) when an input signal is switched from low (e.g., 0) to high (e.g., VDDL).

[0031] A level shifter 200 is provided in FIG. 2 that includes a look-ahead module 220 and a pull-up assist module 230 that provide the look-ahead feature and the pull-up assist feature, respectively. The pull-up assist module 230 provides assistance in increasing the charge at an output node (OUT) when there is a rising input signal transition at an input node (IN) so that the increased charge allows an output signal to rise faster (i.e., the output signal rises from ground to the voltage level of the higher voltage domain). The look-ahead module 220 provides presetting of the pull-up assist module 230 to assist the level shifter 200 during the rising transition. The level shifter 200 also includes an inverted input node (IN_b) and an inverted output node (OUT_b) where signals configured to be the complements of the signals at the input node and the output node, respectively, are found.
The level shifter 200 includes a cross-coupled pair of transistor chains including a first transistor chain with an NMOS transistor 202, a PMOS transistor 204, and a PMOS transistor 206; and a second transistor chain with an NMOS transistor 212, a PMOS transistor 214, and a PMOS transistor 216. With the exception of the changes described herein due to the inclusion of the look-ahead module 220 and the pull-up assist module 230 in the level shifter 200, it may be assumed that the NMOS transistor 202, the PMOS transistor 204, and the PMOS transistor 206 of FIG. 2 are configured and operate similarly to the NMOS transistor MN1 102, the PMOS transistor MP1 106, and the PMOS transistor MP3 110, respectively, of FIG. 1. It may similarly be assumed that the NMOS transistor 212, the PMOS transistor 214, and the PMOS transistor 216 of FIG. 2 are configured and operate similarly to the NMOS transistor MN2 104, the PMOS transistor MP2 108, and the PMOS transistor MP4 112, respectively, of FIG. 1. Through these assumptions, duplicative description may be avoided.

[0032] FIG. 3 illustrates an example implementation of the look-ahead module 220 and the pull-up assist module 230 in a level shifter 300 that similarly includes an input node (IN) and an output node (OUT) as well as an inverted input node (IN_b) and an inverted output node (OUT_b) that are configured to be the complements of the signals at the input node and the output node, respectively. The level shifter 300 includes a cross-coupled pair of transistor chains including a first transistor chain with an NMOS transistor 302, a PMOS transistor 304, and a PMOS transistor 306; and a second transistor chain with an NMOS transistor 312, a PMOS transistor 314, and a PMOS transistor 316. It should be noted that the comments regarding avoidance of duplicative description of the configuration and operation for the cross-coupled pair of transistor chains in the level shifter 200 of FIG.
2 apply equally to the cross-coupled pair of transistor chains in the level shifter 300.

[0033] In one aspect of the disclosed high-speed level shifter, the pull-up assist module 230 may be implemented as an extra pull-up chain with a pair of PMOS transistors to enhance the rising speed for one side of the level shifter 300. The pair of PMOS transistors is shown as a PMOS transistor MP1 332 and a PMOS transistor MP2 334. The look-ahead module 220 may be implemented by two inverters that provide a pull-up switch signal (PU_SWITCH) to operate the pull-up assist module 230. The two inverters are shown as a first inverter 322 and a second inverter 324, and operate as a detecting scheme that provides a slower transition for the value of the signal PU_SWITCH as it follows each change in value of the output signal. Thus, when the output signal transitions from high to low, preferably the signal PU_SWITCH should transition from high to low as well, but more slowly than the output signal. The same slower transition for the signal PU_SWITCH as compared to the transition of the output signal is also desired when the output signal transitions from low to high. In one aspect of the disclosed high-speed level shifter, the second inverter 324 may be implemented using a tri-state device so that the signal PU_SWITCH transitions only when both the output signal and the inverted output signal have completely transitioned.

[0034] Continuing to refer to FIG. 3, when the output signal at the output node is low, the signal PU_SWITCH will also be low. This will turn on the PMOS transistor MP1 332 and pre-charge the signal at a pull-up assist node (PU_VDD) between the PMOS transistor MP1 332 and the PMOS transistor MP2 334 to wait for a coming rising signal.
In other words, the signal PU_SWITCH will turn on the PMOS transistor MP1 332 and pre-charge the PU_VDD node after the output signal at the output node starts falling to 0, and thereby await the coming rising transition for the output signal. Further, when the output signal is low, an inverted output signal at the node OUT_b will be high, which turns off the PMOS transistor MP2 334. Thus, the output of the PMOS transistor MP2 334 does not affect the output signal at the output node.

[0035] When the input signal transitions from low to high (e.g., from ground to VDDL), the inverted output signal will be pulled down first, which will turn on the PMOS transistor MP2 334. Effectively, the NMOS transistor 302 coupled to the node OUT_b will bring the inverted output signal down very quickly because of the fast falling transition provided by the strong NMOS transistor receiving the input signal at the node IN. Once the PMOS transistor MP2 334 is turned on, because the PU_VDD node is already pre-charged, the output signal at the node OUT may be pulled up quickly.

[0036] After the output signal rises to high (i.e., VDDH), the signal PU_SWITCH will turn off the PMOS transistor MP1 332 so that the speed of the coming falling transition will not be affected by the PMOS transistor MP1 332 and the PMOS transistor MP2 334. In effect, whether the signal PU_SWITCH is switching the PMOS transistor MP1 332 off or on, it is desirable for the signal PU_SWITCH to switch the PMOS transistor MP1 332 in a manner that follows the timing of the transition of the output signal.

[0037] In one aspect of the disclosed high-speed level shifter, the PMOS transistor MP1 332 and the PMOS transistor MP2 334 in the level shifter 300 may be much stronger than the other PMOS transistors used in the level shifter.
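The pre-charge and assisted-rise sequence described above can be sketched as a small discrete-event model. This is an illustrative simplification only: the function name and event labels are invented for the sketch, devices are modeled as ideal switches, and the inverter-pair delay of the look-ahead module is collapsed into explicitly ordered steps.

```python
def pull_up_assist_sequence():
    """Ordered sketch of the FIG. 3 pull-up assist behavior (idealized)."""
    events = []
    out, out_b, pu_switch, pu_vdd = False, True, False, False  # initial state

    # 1. OUT is low, so PU_SWITCH is low: MP1 332 turns on and pre-charges PU_VDD.
    pu_vdd = not pu_switch
    events.append(("pre-charge", pu_vdd))

    # 2. Rising input: the strong NMOS pulls OUT_b down quickly,
    #    which turns on MP2 334.
    out_b = False
    mp2_on = not out_b

    # 3. MP2 334 dumps the pre-charge at PU_VDD onto OUT, assisting the rise.
    out = out or (mp2_on and pu_vdd)
    events.append(("assisted-rise", out))

    # 4. PU_SWITCH follows OUT (after the inverter-pair delay) and goes high,
    #    switching MP1 332 off so the next falling edge is unimpeded.
    pu_switch = out
    events.append(("assist-disabled", pu_switch))
    return events

print(pull_up_assist_sequence())
# -> [('pre-charge', True), ('assisted-rise', True), ('assist-disabled', True)]
```

The ordering is the essential point: the pre-charge exists before the rising edge arrives, so the assist path only has to connect an already-charged node to OUT rather than charge it from scratch.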
For example, the PMOS transistor MP1 332 and the PMOS transistor MP2 334 may be similarly sized to the NMOS transistor 312.

[0038] In another aspect of the disclosed high-speed level shifter, the second inverter 324 is controlled by the inverted output signal at the node OUT_b, which transitions slower than the output signal at the node OUT, because it may be desirable to wait for the transition of the output signal to be complete before the signal PU_SWITCH is changed. Effectively, the second inverter 324 may be used to implement a detection scheme that will transition the signal PU_SWITCH only when the output signal and the inverted output signal at the node OUT and the node OUT_b, respectively, have completely transitioned (i.e., settled to a steady state).

[0039] In certain memory designs, a static voltage scaling (SVS) signal may be used to slow down memory operations when the external, system-level circuits interfacing with the memory are operating in low-voltage (e.g., low-power) modes. Specifically, the SVS signal will be high to enable a slower mode of operation for the memory when VDDL is lower, because that lower VDDL will slow down the other circuits (e.g., external logic) to which the memory is interfacing, and the memory needs to operate at a commensurate speed.

[0040] Certain issues may arise when static voltage scaling is enabled. Continuing to take the level shifter 300 of FIG. 3 as an example, during low-voltage conditions, if the output signal and the inverted output signal are not able to settle quickly, then the signal PU_SWITCH will be in an unknown state. The signal PU_SWITCH may turn on the PMOS transistor MP1 332, which will cause lag when the level shifter 300 is attempting to pull down the output signal at the node OUT because the PMOS transistor MP2 334 will attempt to maintain the output signal at a high level, which is undesirable.

[0041] FIG.
4 illustrates another example implementation of a level shifter 400 configured in accordance with various aspects of the high-speed level shifter described herein that includes a look-ahead module 420 and a pull-up assist module 430. Unless otherwise described herein, the configuration and the operation of the level shifter 400 are similar to the configuration and operation of the level shifter 200 as shown in FIG. 2. Thus, for example, the configuration and operation of a cross-coupled pair of transistor chains in the level shifter 400, which includes a first transistor chain with an NMOS transistor 402, a PMOS transistor 404, and a PMOS transistor 406; and a second transistor chain with an NMOS transistor 412, a PMOS transistor 414, and a PMOS transistor 416, are similar to those noted for the cross-coupled pair of transistor chains in the level shifter 200 as described above.

[0042] In accordance with various aspects of the high-speed level shifter disclosed herein, the look-ahead module 420 provides a mode selection switch function for protection against unwanted noise/margin issues. For example, to address any potential issues when the static voltage scaling mode is enabled, the look-ahead module 420 may be configured with the ability to disable operation of the pull-up assist module 430. In one aspect of the disclosed high-speed level shifter, the look-ahead module 420 may receive an inverted static voltage scaling (SVS_b) signal and, based on the value of that SVS_b signal, may prevent operation of the pull-up assist module 430. The speed and performance of the level shifter 400 with the pull-up assist function disabled (i.e., when the pull-up assist module 430 is disabled) will be similar to those of a conventional level shifter.

[0043] FIG. 5 illustrates an example implementation of the look-ahead module 420 and the pull-up assist module 430 in a level shifter 500. Similar to the configuration of the pull-up assist module 230 as implemented in the level shifter 300 of FIG.
3, the pull-up assist module 430 may be implemented as an extra pull-up chain with a pair of PMOS transistors to enhance the rising speed for one side of the level shifter 500. The pair of PMOS transistors is shown as a PMOS transistor MP1 532 and a PMOS transistor MP2 534.

[0044] To address any potential issues when static voltage scaling is enabled, the look-ahead module as implemented in the level shifter 500 of FIG. 5 includes the ability to disable operation of the extra pull-up chain that includes the PMOS transistor MP1 532 and the PMOS transistor MP2 534. In the level shifter 500, the signal SVS_b will be low to keep the signal PU_SWITCH high so as to disable operation of the PMOS transistor MP1 532, and thereby disable the operation of the pull-up assist function. Specifically, when the PMOS transistor 562 receives the low signal SVS_b at its gate, the PMOS transistor 562 will be enabled and the PU_SWITCH signal will be high. Similarly, an NMOS transistor 542 also receives the signal SVS_b, which cuts off the pull-down at the NMOS transistor 542 to avoid a potential short circuit when the PMOS transistor 562 is enabled. In other words, the signal SVS_b, being provided to both the NMOS transistor 542 and the PMOS transistor 562, will enable only one of these transistors.

[0045] The level shifter 500 includes a complementary pair of transistors, including an NMOS transistor 546 and a PMOS transistor 548, that implements an inverter function receiving output from an inverter 522 that is similar to the first inverter 322 in the level shifter 300 in FIG. 3. The level shifter 500 also includes an NMOS transistor 544 and a PMOS transistor 552 that receive the inverted output signal to implement the tri-state functionality as discussed above.

[0046] FIG. 6 illustrates a look-ahead module 620 that may be used to implement the look-ahead module 220 of FIG.
2 in accordance with various aspects of the disclosed high-speed level shifter, where a complementary pair of transistors, including an NMOS transistor 646 and a PMOS transistor 648, implements an inverter function that receives output from an inverter 622 that is similar to the first inverter 322 in the level shifter 300 in FIG. 3. A PMOS transistor 644 may be used to receive the inverted output signal and control the operation of the look-ahead module 620 in generating the signal PU_SWITCH based on the value of the inverted output signal.

[0047] FIG. 7 illustrates a look-ahead module 720 that may be used to implement the look-ahead module 220 of FIG. 2 in accordance with various aspects of the disclosed high-speed level shifter, where a NAND gate 724 implements a NAND function using, as inputs, the inverted output signal as well as an output from an inverter 722 that is similar to the first inverter 322 in the level shifter 300 in FIG. 3. Thus, the value of the signal PU_SWITCH is the output of the NAND gate 724, based on the values of the inverted output signal and the output of the inverter 722, where the signal PU_SWITCH is low only when the output signal has settled into a steady state of low and the inverted output signal has settled into a steady state of high. With both of these inputs high, the NAND gate 724 will output a low value as the signal PU_SWITCH.

[0048] FIG. 8 illustrates a look-ahead module 820 that may be used to implement the look-ahead module 220 of FIG. 2 in accordance with various aspects of the disclosed high-speed level shifter, where the look-ahead module 820 includes a first inverter 822 and a second inverter 824 coupled in series. The second inverter 824 receives an output from the first inverter 822 that is similar to the first inverter 322 in the level shifter 300 in FIG. 3.
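The NAND-based look-ahead of FIG. 7 reduces to a one-line Boolean function. The sketch below (function name invented for illustration) checks that PU_SWITCH is asserted low, arming the pull-up assist, only once the output has settled low and its complement has settled high:

```python
def pu_switch_fig7(out: bool, out_b: bool) -> bool:
    """PU_SWITCH per the FIG. 7 look-ahead: NAND of OUT_b and NOT(OUT).

    Low (assist armed) only when OUT has settled low and OUT_b is high.
    """
    return not (out_b and (not out))

assert pu_switch_fig7(out=False, out_b=True) is False  # settled low: arm assist
assert pu_switch_fig7(out=True, out_b=False) is True   # settled high: assist off
assert pu_switch_fig7(out=False, out_b=False) is True  # mid-transition: hold off
```

The mid-transition case is what distinguishes this variant from a plain inverter on OUT: because both the settled output and its complement must agree, the NAND gate does not arm the assist while the nodes are still moving.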
Thus, the value of the signal PU_SWITCH is the output of the second inverter 824, which is driven by the output of the first inverter 822, which in turn takes the output signal as its input. The signal PU_SWITCH is generated in a manner similar to that of the look-ahead module implemented by the first inverter 322 and the second inverter 324 in the level shifter 300 of FIG. 3, except that the second inverter 824 does not operate as a tri-state device.

[0049] FIG. 9 illustrates a look-ahead module 920 that may be used to implement the look-ahead module 220 of FIG. 2 in accordance with various aspects of the disclosed high-speed level shifter, where the look-ahead module 920 includes an inverter 922 that is similar to the first inverter 322 in the level shifter 300 in FIG. 3. The value of the signal PU_SWITCH is based on an output of the inverter 922, which takes as its input the inverted output signal. Thus, when the inverted output signal is high, the signal PU_SWITCH is low.

[0050] It should be clear from the disclosure contained herein that any of the look-ahead modules shown and described herein may include a function to disable operation of a pull-up assist module to which the look-ahead module is coupled, similar to the operation of the look-ahead module as implemented in the level shifter 500 of FIG. 5. Further, the disabling of the operation of the pull-up assist module may be based on the SVS signal described above or on another signal.

[0051] FIG. 10 illustrates a process 1000 for operation of a level shifter in accordance with various aspects of the disclosed high-speed level shifter, where at 1002, in a cross-coupled pair of semiconductor devices configured to provide an output signal in a second voltage domain based on an input signal in a first voltage domain, an inverted output signal is generated based on the output signal. An example of the cross-coupled pair of semiconductor devices has been described herein with reference to FIG.
2 as part of the cross-coupled pair of transistor chains including the first transistor chain with the NMOS transistor 202, the PMOS transistor 204, and the PMOS transistor 206; and the second transistor chain with the NMOS transistor 212, the PMOS transistor 214, and the PMOS transistor 216.

[0052] At 1004, a decrease in a voltage level of the inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain is detected.

[0053] At 1006, a voltage level at the output node is increased when the decrease of the voltage level of the inverted output signal has been detected.

[0054] FIG. 11 illustrates relevant portions of an example memory access scheme 1100 in which level shifters configured in accordance with various aspects of the disclosed high-speed level shifter may be implemented. The memory access scheme 1100 may include a plurality of word lines ranging from 0 to n provided to select various cells in a plurality of memory cells 1132. For example, if there are two hundred and fifty-six (256) word lines, then n equals 255 and the plurality of word lines ranges from a first word line (WL-0) to a final word line (WL-255). It will be appreciated, however, that the number of word lines in the plurality of word lines may be greater than or less than 256 in alternative implementations. An 8-bit address 1102 is thus sufficient to select any one of the 256 word lines, where the address 1102 includes a plurality of bits ranging from a first address bit A0 through a last address bit A7. A logic-power-domain row decoder 1112 may be used to decode the address 1102. The row decoder 1112 may be a part of a processor or processing system (not shown). Further, the various modules and components shown in FIG. 11 may be incorporated as a part of a processor or processing system.
For example, a single integrated processing system may include one or more processors that interface with one or more memory subsystems in a single semiconductor device.[0055] The logic power domain is powered by a logic power supply voltage VDDL. The row decoder 1112 is thus coupled to a logic domain power supply node supplying the logic power supply voltage VDDL. In contrast to the row decoder 1112, a plurality of level shifters 1122 operate within a memory power domain powered by a memory power supply voltage VDDH that is distinct from the logic power supply voltage VDDL. The plurality of memory cells 1132 also operates in the memory power domain. In general, the relative levels of the logic power supply voltage VDDL and the memory power supply voltage VDDH will depend upon the mode of operation of the integrated circuit including the memory access scheme 1100.[0056] Should the logic power domain be in a standby or low-power mode of operation, the memory power supply voltage VDDH may be higher than the logic power supply voltage VDDL. Conversely, should the logic power domain be in a high-power mode while the memory power domain is in a low-power mode of operation, the logic power supply voltage VDDL may be higher than the memory power supply voltage VDDH. Typically, the logic power supply voltage VDDL is lower than the memory power supply voltage VDDH, so the following discussion will assume that the memory power supply voltage VDDH is indeed greater than the logic power supply voltage VDDL. However, it will be appreciated that the level shifting disclosed herein may also be applicable to shifting down in amplitude with regard to driving the word lines.[0057] In FIG. 11, where there are 256 word lines, the row decoder 1112 may decode the address 1102 into 256 different decoded signals such that the decoded signals correspond on a one-to-one basis with the word lines.
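The one-to-one decode just described can be sketched in a few lines. This is an illustrative behavioral model only, not circuitry from the disclosure; the function and variable names (decode_row, NUM_WL) are assumptions made for the example:

```python
# Behavioral sketch of a one-hot row decode such as row decoder 1112 may
# perform: an 8-bit address selects exactly one of 256 word line signals.

NUM_WL = 256  # 2**8 word lines addressable by an 8-bit address

def decode_row(address):
    """Return a list of 256 decoded word line signals, exactly one asserted."""
    assert 0 <= address < NUM_WL
    return [1 if wl == address else 0 for wl in range(NUM_WL)]

decoded = decode_row(5)   # assert the decoded signal for word line WL-5
print(decoded.index(1))   # -> 5; this signal would then be level-shifted
print(sum(decoded))       # -> 1; one-to-one: a single word line is asserted
```

The asserted logic-domain signal would then pass through the corresponding level shifter 1122 before driving the word line in the memory power domain.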
It will be appreciated that in some implementations the row decoder 1112 may be configured to decode subsets of address bits of the address 1102 into a plurality of corresponding decoded signals. In other words, instead of decoding the entire address into a single decoded signal that individually corresponds to a particular word line, the row decoder 1112 may be configured to decode three address bits (for example, address bits A0, A1, and A2) into a first set of decoded signals (RA) that ranges from RA0 to RA7 (not shown); another three address bits (for example, address bits A3, A4, and A5) into a second set of decoded signals that ranges from RB0 through RB7 (not illustrated); and the remaining two address bits, such as address bits A6 and A7, into a third set of decoded signals that ranges from RC0 through RC3 (not shown). [0058] In order to avoid unnecessary complication of the description of FIG. 11, it will be assumed that the row decoder 1112 is configured to produce a separate decoded signal for each word line, and that each word line (WL-0 to WL-n) has its own corresponding level shifter 1122. Again, where there are 256 word lines, there are thus 256 level shifters 1122 corresponding to the 256 word lines (WL-0 to WL-255), each level shifter operating to shift a received word line signal from the logic power supply voltage VDDL of the logic power domain to the memory power supply voltage VDDH of the memory power domain in accordance with the various aspects of the high-speed level shifter described herein. Once shifted, the word line signals may be used to access the plurality of memory cells 1132 as understood by those of ordinary skill in the art.[0059] For example, if the row decoder 1112 asserts a word line signal WL-5 (VDDL), then the level shifter coupled to that word line would shift the word line signal to generate a word line signal WL-5 (VDDH) to access the plurality of memory cells 1132. Referring back to the level shifter 200 described and illustrated with regard to FIG.
2, the word line signal WL-5 (VDDL) would be received at the input node IN, and the word line signal WL-5 (VDDH) would be provided at the output node OUT.[0060] In accordance with various aspects of the disclosed high-speed level shifter, the various described level shifters include an output node and a cross-coupled pair of semiconductor devices configured to provide, at the output node, an output signal in a second voltage domain based on an input signal in a first voltage domain. The level shifters include pull-up assist means for increasing a voltage level at the output node of the level shifter. The pull-up assist means may be implemented as described using any of the pull-up assist modules described herein, such as the pull-up assist module 230 described in FIG. 2 or the pull-up assist module 430 described in FIG. 4.[0061] The level shifters described herein may also include look-ahead means for causing the pull-up assist means to increase the voltage level at the output node when there is a decrease in a voltage level of an inverted output signal in the second voltage domain from a high voltage level of the second voltage domain to a low voltage level of the second voltage domain. The look-ahead means may be implemented as described using any of the look-ahead modules described herein, such as the look-ahead module 220 described in FIG. 2 or the look-ahead module 420 described in FIG. 4. The look-ahead means may further be implemented as described using look-ahead modules such as the look-ahead module 620 described in FIG. 6, the look-ahead module 720 described in FIG. 7, the look-ahead module 820 described in FIG. 8, and the look-ahead module 920 described in FIG.
9.[0062] In general, the aforementioned means may be any module, or one or more modules, described herein that is, or are, configured to perform the functions recited by the aforementioned means.[0063] Several aspects of a high-speed level shifter have been presented with reference to a memory system. As those skilled in the art will readily appreciate, various aspects described throughout this disclosure may be extended to other devices that may utilize level shifting functionality.[0064] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented within an integrated circuit ("IC"), an access terminal, or an access point. The IC may comprise a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, electrical components, optical components, mechanical components, or any combination thereof designed to perform the functions described herein, and may execute codes or instructions that reside within the IC, outside of the IC, or both. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.[0065] It is understood that any specific order or hierarchy of steps in any disclosed process is an example of a sample approach. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged while remaining within the scope of the present disclosure. 
The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
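The detect-and-assist sequence of process 1000 (steps 1002-1006) can be summarized behaviorally. The following is a signal-level sketch only, not the transistor circuit; the supply value and all names (VDDH, level_shift_step, invb_trace) are illustrative assumptions:

```python
# Behavioral sketch of the look-ahead pull-up assist: when the inverted
# output falls from the high level of the second voltage domain toward its
# low level (step 1004), the assist drives the output node high (step 1006).

VDDH = 1.1  # assumed second-domain (memory) supply level, volts
VSS = 0.0

def level_shift_step(invb_prev, invb_cur, out_node):
    """One evaluation step: detect a decrease in the inverted output's
    voltage level and, if detected, increase the output node voltage."""
    if invb_prev > invb_cur:   # inverted output is falling
        out_node = VDDH        # pull-up assist raises the output node
    return out_node

# Sampled inverted-output levels as the input toggles in the first domain:
invb_trace = [VDDH, VDDH, 0.4, VSS]
out_node = VSS
for prev, cur in zip(invb_trace, invb_trace[1:]):
    out_node = level_shift_step(prev, cur, out_node)
print(out_node)  # reaches VDDH once the falling edge has been detected
```

The point of the look-ahead scheme is that the assist acts on the falling inverted output rather than waiting for the output node itself, which is what makes the transition faster.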
Apparatuses and methods for transposing select gates, such as in a computing system and/or memory device, are provided. One example apparatus can include a group of memory cells and select gates electrically coupled to the group of memory cells. The select gates are arranged such that a pair of select gates are adjacent to each other along a first portion of each of the pair of select gates and are non-adjacent along a second portion of each of the pair of select gates.
What is claimed is: 1. An apparatus, comprising: a group of memory cells; and select gates electrically coupled to the group of memory cells, wherein the select gates are arranged such that a pair of select gates is adjacent to each other along a first portion of each of the pair of select gates and are non-adjacent along a second portion of each of the pair of select gates. 2. The apparatus of claim 1, wherein the respective select gates of each pair of the select gates are transposed with one another at least once. 3. The apparatus of claim 1, wherein the select gates are further arranged such that a first portion of each other of the select gates is adjacent to a first respective one of a number of portions of a particular select gate. 4. The apparatus of claim 3, wherein the select gates are further arranged such that a second portion of each other of the select gates is non-adjacent to a second respective one of a number of portions of a particular select gate. 5. The apparatus of claim 1, wherein the select gates are further arranged such that one portion of every select gate is adjacent to a first respective portion of each other of the select gates and is non-adjacent to a second respective portion of each other of the select gates. 6. The apparatus as in any one of claims 1-5, wherein each of the select gates is transposed at least once with each other of the select gates. 7. The apparatus as in any one of claims 1-5, wherein at least two, but less than all, of the select gates are transposed with one another. 8. An apparatus, comprising: a number of memory cells; and N select gates electrically coupled to the number of memory cells, N being greater than 1, each select gate being separated into M segments, M being greater than 1, wherein the M segments of the N select gates are arranged such that less than M segments of a first select gate are adjacent to segments of a second select gate. 9.
The apparatus of claim 8, wherein at least 2 segments of the first select gate are adjacent to at least 2 segments of the second select gate. 10. The apparatus of claim 8, wherein M equals N. 11. The apparatus of claim 8, wherein M is less than N. 12. The apparatus as in any one of claims 8-11, wherein the M segments of the N select gates are arranged such that at most 2 segments of each select gate are adjacent to a number of segments of any other particular select gate. 13. The apparatus as in any one of claims 8-11, wherein the M segments of the N select gates are arranged such that segments of each select gate are adjacent to segments of all of the other select gates of a block. 14. The apparatus as in any one of claims 8-11, wherein the number of memory cells are arranged in a 3-dimensional NAND memory device architecture. 15. An apparatus, comprising: memory cells arranged in 3-dimensional memory cell strings; and select gates electrically coupled to the memory cells, wherein the select gates are separated into segments, the segments being arranged in a two-dimensional matrix, positions on the matrix being referenced by coordinates (i, j), and wherein a select gate segment at (i, j) is electrically coupled to a select gate segment at (i+1, j+1) and a select gate segment at (i, j+1) is electrically coupled to a select gate segment at (i+1, j). 16. The apparatus of claim 15, wherein the select gate segment at (i+1, j+1) is electrically coupled to a select gate segment at (i+2, j+2) and the select gate segment at (i+1, j) is electrically coupled to a select gate segment at (i+2, j-1). 17. The apparatus as in any one of claims 15-16, wherein the select gate segment at (i+1, j+1) is electrically coupled to a select gate segment at (i+2, j+1) and the select gate segment at (i+1, j) is electrically coupled to a select gate segment at (i+2, j). 18.
An apparatus, comprising: a group of memory cells; and select gates electrically coupled to the group of memory cells, wherein at least one pair of the select gates has a transposition thereof such that a first one of the select gates of the pair of select gates is adjacent a third one of the select gates on a first side of the transposition and a second one of the select gates of the pair of select gates is adjacent the third one of the select gates on a second side of the transposition. 19. The apparatus of claim 18, wherein the transposition includes first conductive material of a first select gate being routed between segments of first conductive material of a second select gate, the segments of first conductive material of the second select gate being electrically coupled by an interconnect formed of a second conductive material routed over the first conductive material of the first select gate. 20. The apparatus of claim 19, wherein the first conductive material of the first select gate and the first conductive material of the second select gate are conductively-doped polysilicon. 21. The apparatus of claim 19, wherein the second conductive material comprises a metal material. 22. The apparatus of claim 19, wherein the interconnect is at a different elevation than the first conductive material of the first and second select gates. 23. The apparatus as in any one of claims 19-22, further including data lines electrically coupled to the select gates, the data lines comprising the second conductive material. 24. The apparatus of claim 23, wherein the interconnect is formed by a same process and of a same material as the data lines. 25. The apparatus of claim 23, wherein the interconnect is at a same elevation as the data lines. 26.
An apparatus, comprising: a group of memory cells; select gates electrically coupled to the group of memory cells, a pair of the select gates including a transposition thereof; and data lines electrically coupled to the select gates, wherein the transposition includes first conductive material of a first select gate being routed between segments of first conductive material of a second select gate, the segments of first conductive material of the second select gate being electrically coupled by an interconnect formed of a second conductive material routed over the first conductive material of the first select gate, and wherein the data lines are formed of a third conductive material. 27. The apparatus of claim 26, wherein the second conductive material is different than the first conductive material, and the second conductive material and the third conductive material comprise a same material composition but are formed at different elevations in the apparatus. 28. The apparatus of claim 26, wherein the interconnect is electrically coupled to the segments of the first conductive material of the second select gate through portions of the third conductive material. 29. The apparatus as in any one of claims 26-28, wherein the interconnect is at a same elevation as the data lines. 30. The apparatus as in any one of claims 26-28, wherein the interconnect is at a different elevation than the data lines. 31. The apparatus as in any one of claims 26-28, wherein the interconnect is at a different elevation than the semiconductor material of the first and second select gates. 32.
The apparatus as in any one of claims 26-28, wherein: the first conductive material of the first select gate is electrically coupled to corresponding shunts of the second conductive material, the segments of first conductive material of the second select gate are electrically coupled to corresponding shunts of the second conductive material, and the interconnect is continuous with the shunts corresponding to the segments of the first conductive material of the second select gate. 33. An apparatus, comprising: a group of memory cells; and select gates electrically coupled to the group of memory cells, each of the select gates being separated into segments, a pair of the select gates including a transposition thereof, wherein the transposition includes a first interconnect between respective segments of a first select gate and a second interconnect between respective segments of a second select gate. 34. The apparatus of claim 33, further comprising data lines electrically coupled to the select gates, wherein the data lines are at a different elevation of the apparatus than the interconnects. 35. The apparatus as in any one of claims 33-34, wherein each of the segments comprises a conductively-doped polysilicon portion and a metal portion. 36. The apparatus as in any one of claims 33-34, wherein the first interconnect is at a different elevation than the second interconnect. 37. A method of forming transposed select gates, comprising: forming select gate first conductive material over a plurality of access lines in a stacked arrangement; separating the select gate first conductive material into a matrix of segments in two dimensions; and forming a corresponding second conductive material over each segment, the corresponding second conductive material being electrically coupled near a respective end of each segment, wherein segments are coupled together into select gates traversing one of the two dimensions. 38.
The method of claim 37, wherein forming a corresponding second conductive material comprises implementing a transposition of particular select gates in a second one of the two dimensions. 39. The method as in any one of claims 37-38, further including removing portions of the select gate and the plurality of access lines so as to expose each of the plurality of access lines in a staircase configuration. 40. The method of claim 39, further including separating the select gate first conductive material, the plurality of access lines, and the source select gate in the stacked arrangement into a plurality of stacked structures.
APPARATUSES AND METHODS OF TRANSPOSING SELECT GATES Technical Field [0001] The present disclosure relates generally to semiconductor memory apparatuses and methods, and more particularly, to apparatuses and methods for transposing select gates. Background [0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory, including random-access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), resistive memory (e.g., RRAM), and Flash memory, among others. [0003] Memory devices are utilized as volatile and non-volatile data storage for a wide range of electronic applications. Flash memory, which is just one type of memory, typically uses a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption. Nonvolatile memory may be used in, for example, personal computers, portable memory sticks, solid state drives (SSDs), digital cameras, cellular telephones, portable music players such as MP3 players, movie players, and other electronic devices. Memory cells can be arranged into arrays, with the arrays being used in memory devices. [0004] Memory devices comprise a plurality of memory cells. The memory cells can be arranged into various groups such as a block, sub-block, etc. The memory device can include select gates that enable selection of an individual memory cell and/or a particular group of memory cells to be operated upon. For example, select gates may be used to select a group of memory cells by connecting the group of memory cells to other parts of the memory device. Adjacent select gates can be capacitively coupled, thereby permitting noise signals to leak from one select gate to another.
Apparatuses and methods for transposing select gates can reduce capacitive coupling, and thereby reduce noise and improve memory device operation. Brief Description of the Drawings [0005] Figure 1A is a functional block diagram of a cross-sectional view of a prior art three dimensional (3D) Not AND (NAND) memory device. [0006] Figure 1B is a functional block diagram of a top view of the prior art 3D NAND shown in Figure 1A. [0007] Figure 2A is a functional block diagram of prior art non-transposed select gates. [0008] Figure 2B is a functional block diagram of transposed select gates in accordance with one or more embodiments of the present disclosure. [0009] Figures 3A-3I illustrate a process flow for reducing coupling capacitance in accordance with one or more embodiments of the present disclosure. [0010] Figure 4A is a functional block diagram of a top view illustrating an implementation of a transposed select gate using a first conductive material and a second conductive material without a metal shunt in accordance with one or more embodiments of the present disclosure. [0011] Figures 4B and 4C are functional block diagrams of cross-sectional views of the implementation of the transposed select gates shown in Figure 4A in accordance with one or more embodiments of the present disclosure. [0012] Figure 5A is a functional block diagram of a top view illustrating an implementation of transposed select gates using segments of a first conductive material (without a shunt) electrically coupled by an interconnection of a third conductive material in accordance with one or more embodiments of the present disclosure. [0013] Figures 5B and 5C are functional block diagrams of cross-sectional views of the implementation of transposed select gates shown in Figure 5A in accordance with one or more embodiments of the present disclosure.
[0014] Figure 6A is a functional block diagram of a top view illustrating an implementation of transposed select gates using segments of a first conductive material (with a shunt) electrically coupled by an interconnection of a third conductive material in accordance with one or more embodiments of the present disclosure. [0015] Figures 6B and 6C are functional block diagrams of cross-sectional views of the implementation of transposed select gates shown in Figure 6A in accordance with one or more embodiments of the present disclosure. [0016] Figure 7A is a functional block diagram of a top view illustrating an implementation of transposed select gates using segments of a first conductive material (and a shunt) electrically coupled by interconnections of second and third conductive materials in accordance with one or more embodiments of the present disclosure. [0017] Figures 7B and 7C are functional block diagrams of cross-sectional views of the implementation of a transposed select gate shown in Figure 7A in accordance with one or more embodiments of the present disclosure. Detailed Description [0018] Apparatuses and methods for transposing select gates, such as in a memory device, are provided. One example apparatus can include a group of memory cells and select gates electrically coupled to the group of memory cells. The select gates are arranged such that a pair of select gates are adjacent to each other along a first portion of each of the pair of select gates and are non-adjacent along a second portion of each of the pair of select gates. [0019] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in
the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure. As used herein, the designator "N" indicates that one or more of the particular feature so designated can be included with one or more embodiments of the present disclosure. [0020] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, the proportion and the relative scale of the elements provided in the figures are intended to illustrate various embodiments of the present disclosure and are not to be used in a limiting sense. [0021] The terms "first," "second," "third," and "fourth" are used herein, and in the claims, merely for convenience in differentiating the nomenclature of various features from one another. The use of such terms does not necessarily imply that the materials are of different composition, but sometimes such terms are used to distinguish between materials formed at different elevations, at different times, or in different manners, even if of the same composition. The use of such terms is not intended to convey a particular ordering of the features, including, but not limited to, an order of forming. Furthermore, strict correspondence between the terms "first," "second," "third," and "fourth" used herein and the terms "first," "second," "third," and "fourth" used in the claims is not intended.
That is, "second conductive material" as used in the claims may or may not correspond to the "second conductive material" described herein. For example, "second conductive material" as used in the claims may correspond to the "third conductive material" described herein. [0022] A memory device architecture according to the present disclosure can provide reduced capacitive coupling between adjacent select gates of a same block of memory cells compared to pairs of select gates that are everywhere adjacent. According to various embodiments, a memory device architecture, such as a three dimensional (3D) Not AND (NAND) memory device architecture, can be configured to reduce capacitive coupling between adjacent select gates of a block of memory cells. As used herein, a block of memory cells can be a group of memory cells that are erased together. A block of memory cells can have a plurality of sub-blocks. That is, a sub-block of memory cells can be a portion of a block of memory cells, and each sub-block can have its own select gate, for example. [0023] A large portion of the power consumed by a memory device can be attributed to charging and discharging signal lines such as select gates, data lines (e.g., bit lines), access lines (e.g., word lines), etc. As the density of memory cells in a memory device increases, such as by reducing distances between memory cells and signal lines, capacitance (e.g., parasitic capacitance) between memory cells and the signal lines can increase. Therefore, increasing parasitic capacitance can increase power consumption and heating. [0024] Figure 1A is a functional block diagram of a cross-sectional view of a prior art three dimensional (3D) NAND memory device taken along cut line 1A-1A, as shown in Figure 1B. Two dimensional (or planar) NAND strings can be arranged such that select gate transistors are connected at each side (e.g., source, drain) of a number of memory devices connected in series drain-to-source in a plane.
Multi-dimension NAND memory devices (e.g., 3D NAND) can be formed by arranging the NAND strings in non-linear configurations, such as in a "U" shape, for example. The 3D NAND can be configured such that data lines and source lines can be shared between various groups of memory cells (e.g., sub-blocks, etc.). Multi-dimension NAND memory devices can be arranged vertically (e.g., vertical NAND), such as the 3D NAND memory device shown in Figure 1A. [0025] Figure 1A shows a memory device 100 having a number of strings (e.g., string 0, string 1). For each string shown in Figure 1A, a communication path 108, 110 extends from a bit line (BL) 112 to a source line (SRC) 114. The communication path 108, 110 can comprise a pillar of semiconductor material for a string of memory cells, for example. The bit line 112 and source line 114 can be shared by various strings. While Figure 1A shows a communication path 108, 110 that can be linear and in a vertical orientation, multi-dimension NAND can have other communication path configurations and orientations between the bit line 112 and source line 114, such as a U-shaped communication path between a bit line 112 and source line 114 that can be in close proximity to one another. As such, memory cells can be arranged in a 3D memory cell string (e.g., non-linear arrangement). [0026] The communication path 108, 110 between the bit line 112 and source line 114 associated with a particular string can have two or more select gates per string, including a drain select gate 102, 116 and a source select gate 106, 118. The two or more select gates per string operate to select the communication path 108, 110 between the bit line 112 and source line 114 of a particular string. In this manner, a particular selected memory cell can be electrically coupled to a bit line via the drain 102, 116 and source 106, 118 select gates.
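The string-selection behavior just described can be modeled very simply: a string's communication path conducts only when both of its select gates are enabled. The following sketch is an illustrative behavioral model, not the device itself; the names (string_selected, gates) are assumptions:

```python
# Behavioral model of string selection: a memory cell string is coupled
# between the bit line and source line only when both its drain select gate
# (SGD) and source select gate (SGS) are enabled.

def string_selected(sgd_enabled, sgs_enabled):
    """The communication path between bit line and source line conducts
    only when both select gates of the string are turned on."""
    return sgd_enabled and sgs_enabled

# Select string 0 by enabling both of its select gates; string 1 keeps its
# SGD off and therefore stays deselected even though its SGS is on.
gates = {"string0": (True, True), "string1": (False, True)}
selected = [name for name, (sgd, sgs) in gates.items()
            if string_selected(sgd, sgs)]
print(selected)  # ['string0']
```

This is why noise that raises the potential of a deselected string's select gate, as discussed below for capacitive coupling, can cause unwanted leakage through a path that should be off.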
[0027] As shown in Figure 1A, the two or more select gates per string can be located at different ends of the communication path 108, 110 between the bit line 112 and source line 114. For example, one select gate can be located adjacent the bit line 112 and the other select gate can be located adjacent the source line 114. A select gate located adjacent the bit line 112 can be a drain select gate (SGD) 102, 116, which is shown as the upper select gate in Figure 1A. The select gate located adjacent the source line 114 can be a source select gate (SGS) 106, 118, shown as the lower select gate in Figure 1A. [0028] A number of control gates can be arranged between the two select gates along the communication path 108, 110 between the bit line 112 and source line 114. The control gates can be arranged to select a particular memory cell on the corresponding string. The control gates can be implemented as word lines 104, as shown in Figure 1A (e.g., WL0, WL1, WL2, and WL3). [0029] Figure 1B is a functional block diagram of a top view of the prior art 3D NAND memory device shown in Figure 1A. Figure 1B shows that communication paths 108, 109, 110, 111 can be located at the intersections between bit lines 112, 113 and select gates 102, 116. [0030] Figure 2A is a functional block diagram of prior art non-transposed select gates. In an effort to increase bit density, adjacent select gates can be arranged close to each other. For example, select gates of a 3D NAND can be arranged closer to one another than select gates of some two dimensional (2D) NAND configurations. The prior art select gates 219 are routed in straight runs, as shown in Figure 2A. The prior art select gates 219 can include a conductively-doped polysilicon portion 220, and a metal portion 222, which serves to shunt the conductively-doped polysilicon portion 220 in order to decrease resistance.
A contact 224 can electrically couple the conductively-doped polysilicon portion 220 with the metal portion 222 (e.g., at each end of a select gate). [0031] Figure 2A shows a number (e.g., eight) of select gates 219. Each select gate 219 can extend over a relatively long distance, and can be separated from other select gates by a space (e.g., according to a minimum feature size criterion). This configuration, with long runs and small spacing between select gates that are adjacent to each other everywhere, can result in capacitive coupling between select gates, as indicated in Figure 2A at 221. [0032] Capacitive coupling can provide a pathway for noise between select gates. When one of the select gates goes high, such as for a read operation for example, an adjacent select gate(s) can receive noise via the capacitive coupling. The noise received via the capacitive coupling to the adjacent select gate(s) can increase the potential on the adjacent select gate(s), which can result in leakage current from a selected bit line to unselected sub-blocks. These transient leakage currents can increase noise and impact the accuracy of reading operations. [0033] The capacitive coupling between adjacent select gates can be relatively large depending on the dimensions by which the select gates are implemented. While capacitive coupling 221 is shown only between two of the eight select gates provided in Figure 2A, capacitive coupling can exist between each respective pair of adjacent select gates; such capacitive coupling is not illustrated in Figure 2A for clarity of other aspects. [0034] Figure 2B is a functional block diagram of transposed select gates in accordance with one or more embodiments of the present disclosure. In some embodiments, there can be 8 drain select gates in a block of memory cells. Select gates can be used to select a sub-set of the block of memory cells (e.g., a sub-block).
For example, selected memory cells can be electrically coupled to a data line via a select gate for a particular sub-block. Select gates for different sub-blocks can be routed adjacent one another, such as is shown in Figure 2B. A particular select gate 225, shown in Figure 2B, can be separated into a number of segments, such as segment 226. That is, rather than one continuous run of a first conductive material (e.g., conductively-doped polysilicon) and a second conductive material (e.g., metal, alloy), as is shown in Figure 2A, the particular select gate can be separated into a number of segments, such as segment 226. Each segment (e.g., segment 226) of a select gate 225 can include a first conductive material portion (sometimes referred to hereinafter as a first conductive material segment) and a second conductive material portion (sometimes referred to hereinafter as a "shunt"), with the second conductive material serving to shunt the first conductive material in order to decrease resistance along the select gate. A contact can electrically couple the first conductive material portion with the second conductive material portion (e.g., at each end of a segment). [0035] Figure 2B shows each select gate 225 of the number of select gates illustrated in Figure 2B being comprised of 8 segments 226. The 8 segments (e.g., segment 226) per select gate 225, for each of 8 select gates, results in 64 total segments arranged in 8 rows and 8 columns, as shown in Figure 2B. However, embodiments of the present disclosure are not so limited, and select gates can be separated into more or fewer segments. For example, select gates could be separated into 4 segments per select gate. According to some embodiments, N select gates can be separated into M segments each. M can be equal to N in various embodiments, and M can be a number different than N in other various embodiments (e.g., M equal to N/2, N/4, etc.).
The greatest capacitive de-coupling can be achieved where M is equal to N. However, the cost for such maximum benefit is a greater number of transpositions. Some capacitive de-coupling can be achieved where M is less than N, which uses fewer transpositions. According to various embodiments, the M segments of the N select gates can be arranged such that at most 2 segments of each select gate are adjacent to segments of any other select gate. According to some embodiments, the M segments of the N select gates can be arranged such that segments of each select gate can be adjacent to segments of all of the other select gates in the block. For example, a first segment of a first select gate can be adjacent to a first segment of a second select gate, and a second segment of the first select gate can be adjacent to a second segment of a third select gate, and so on, as is illustrated in Figure 2B. [0036] According to various embodiments of the present disclosure, all select gates need not be separated into a same quantity of segments. That is, respective select gates may be separated into different quantities of segments. For example, some select gates may be separated into 4 segments, and other select gates can be separated into 8 segments. Some select gates may not be separated into multiple segments, while others may be separated into multiple segments. The quantity and arrangement of segments for individual select gates can be different than shown to achieve desired performance and/or other criteria. [0037] Adjacent segments of a particular select gate can be electrically coupled via an interconnection 227 extending from one segment (e.g., segment 226) to another. The interconnection 227 between segments (e.g., segment 226) can be comprised of metal, conductively-doped polysilicon, combinations thereof, and/or other compositions.
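The adjacency constraint just described, in which each pair of select gates shares at most 2 of the M segment rows as neighbors, can be checked programmatically. The sketch below is illustrative only: the lane-assignment table `ARRANGEMENT` is a hypothetical example for N = M = 4 constructed for this illustration, not the specific layout of Figure 2B.

```python
from collections import Counter
from itertools import combinations

# Hypothetical lane assignment for N = 4 select gates split into M = 4
# segments each: ARRANGEMENT[s] lists, left to right, which select gate
# occupies each lane during segment row s.
ARRANGEMENT = [
    (0, 1, 2, 3),
    (2, 0, 3, 1),
    (1, 3, 0, 2),
    (0, 1, 2, 3),
]

def adjacency_counts(arrangement):
    """Count, for each pair of select gates, in how many segment rows
    they occupy neighboring lanes (and are thus capacitively coupled)."""
    counts = Counter()
    for row in arrangement:
        for left, right in zip(row, row[1:]):
            counts[frozenset((left, right))] += 1
    return counts

counts = adjacency_counts(ARRANGEMENT)
gates = range(4)
# Every pair of gates is adjacent somewhere, but never for more than 2
# of the M = 4 segment rows, i.e. at most 2/M = 1/2 of the gate run.
assert all(counts[frozenset(p)] <= 2 for p in combinations(gates, 2))
assert all(counts[frozenset(p)] >= 1 for p in combinations(gates, 2))
```

A check like this makes the trade-off in the text concrete: with this sample arrangement each of the 6 gate pairs is adjacent in exactly 2 rows, so no single pair dominates the coupling.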
The interconnection 227 between segments (e.g., segment 226) can be a non-transposed interconnection 228 or a transposition 229 of interconnections. For example, each of the interconnections included in the transposition 229 can comprise a respective extension of the metal portion of a segment that can be positionally interchanged between a boundary of the conductively-doped polysilicon portion of a segment (e.g., segment 226) and a boundary of the conductively-doped polysilicon portion of a next segment having a different horizontal coordinate, as shown in Figure 2B. [0038] In this manner, segments of different select gates can be brought into proximity of one another so that one particular select gate is not always adjacent another particular select gate, and instead can have portions that are adjacent each of the other select gates. According to various embodiments, each select gate can have a segment adjacent a respective segment of each other select gate of a block, as shown in Figure 2B. Where N select gates of a block are each separated into N segments, the select gates can be configured using transposed interconnections to be adjacent any other particular select gate for 2/N of the select gate run. For example, 8 select gates can each be separated into 8 segments. The select gates can be configured using transposed interconnections to be adjacent any other particular select gate for 2/8 (i.e., ¼) of the select gate run. [0039] The segments (e.g., segment 226) of the select gates are still capacitively coupled, as indicated in Figure 2B at 231.
The capacitor depicted at 231 in Figure 2B, representative of the capacitive coupling between transposed select gates, is drawn relatively smaller than the capacitor shown at 221 in Figure 2A, representative of the capacitive coupling between non-transposed select gates, to convey that the capacitive coupling between transposed select gates can be relatively smaller than the capacitive coupling between non-transposed select gates. That is, the capacitive coupling of a particular select gate is divided up among the other select gates, since only a portion of a particular select gate is adjacent to another select gate, as compared to the capacitive coupling of two select gates that are always adjacent to one another (e.g., along an entire run of the select gates). [0040] However, the distance for which pairs of select gates are adjacent can be configured using transposed interconnections to be, for example, ¼ the entire run of the individual select gates. Therefore, the capacitive coupling between any 2 select gates can be reduced by a factor of ¾. That is, instead of leakage current generated by a particular select gate flowing to one other adjacent select gate, the leakage current can flow to several other select gates through adjacent segments. In this way, the leakage current flowing to any one other particular select gate can be reduced, thereby reducing the noise induced. [0041] According to some embodiments, metal portions of every select gate can be routed so that the conductively-doped polysilicon portions of two segments of the select gates are routed at the minimum pitch. However, embodiments of the present disclosure are not so limited. That is, capacitive coupling can be reduced by transpositions that bring at least one segment of a select gate adjacent to a segment of a different select gate than would be adjacent if the select gates were straight runs.
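The coupling reduction quoted above follows from a single fraction: if any pair of transposed select gates is adjacent for only 2 of M segments, the pairwise coupling falls to 2/M of its straight-run value. The sketch below simply restates that arithmetic; the function name is ours, not from the disclosure.

```python
# Straight-run gates (Figure 2A): a pair of adjacent select gates is
# coupled along its entire run, giving some pairwise coupling C.
# Transposed gates: with M segments, any pair is adjacent for at most
# 2 of the M segments, so the pairwise coupling drops to (2/M) * C.
def pairwise_coupling_fraction(m_segments):
    """Fraction of the straight-run coupling remaining between any
    two transposed select gates, each split into m_segments."""
    return 2 / m_segments

# For M = 8 segments (as in Figure 2B), a pair is adjacent for 1/4 of
# the run, so the coupling between any 2 gates is reduced by 3/4.
remaining = pairwise_coupling_fraction(8)
assert remaining == 0.25
assert 1 - remaining == 0.75
```

The same formula recovers the ¼-of-run example in the text: M = 8 gives 2/8 = ¼ remaining, i.e., a ¾ reduction between any two particular gates.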
[0042] According to one or more embodiments of the present disclosure, the respective select gates of each pair of select gates of a block can be transposed with one another at least once. According to various embodiments, some, but less than all, select gates of a block can be transposed with one another. According to some embodiments, each select gate of a block can be transposed at least once with each of the other select gates of the block. Select gates can be configured such that particular select gates are adjacent one another along some respective portion(s), and non-adjacent along other portion(s). Select gates can further be configured such that portions of a particular select gate can be adjacent portions of all of the other select gates of a block. For example, a select gate can be transposed at least once with an adjacent select gate. According to another example, a select gate can be transposed more than once with one, or several different, other select gate(s). Select gates can be configured such that some portion of every select gate of a block is adjacent some portion of each of the other select gates of the block and is non-adjacent some portion of each of the other select gates of the block. [0043] According to certain embodiments, memory cells can be arranged in 3-dimensional cell strings, and select gates can be electrically coupled to the memory cells. The select gates can be separated into segments that are arranged in a two-dimensional matrix. Positions on the matrix can be referenced by a coordinate (i, j). For example, i can refer to a position in one (e.g., horizontal) dimension and j can refer to a position in another (e.g., vertical) dimension. Although horizontal and vertical dimensions are used in this example, embodiments of the present disclosure are not limited to particular orientations. A select gate segment at (i, j) can be electrically coupled to a select gate segment at (i+1, j+1).
A select gate segment at (i, j+1) can be electrically coupled to a select gate segment at (i+1, j). Furthermore, the select gate segment at (i+1, j+1) can be electrically coupled to a select gate segment at (i+2, j+2), and the select gate segment at (i+1, j) can be electrically coupled to a select gate segment at (i+2, j-1). In this manner, adjacent select gates can be transposed. Additional transpositions can result in the select gates having adjacent portions and non-adjacent portions with one another. [0044] Figures 3A-3I illustrate a process flow for reducing coupling capacitance in accordance with one or more embodiments of the present disclosure. Figure 3A shows a functional block diagram of a cross-sectional view of a memory cell array (e.g., 3D NAND) in accordance with one or more embodiments of the present disclosure. Material for the drain select gate 332, source select gate 334, and word lines 333, located between the select gates, is stacked. Insulation portions of the memory cell array between the select gates 332, 334 and word lines 333 are omitted for clarity. The present disclosure is not limited to the quantity and/or arrangement of the various select gates and/or word lines illustrated as one example in the figures. While only 4 word lines 333 are shown in Figures 3A-3I, embodiments of the present disclosure are not so limited, and can include more (e.g., 8) or fewer word lines 333 than are shown in the figures. Figure 3B shows a functional block diagram of a top view of the memory cell array shown in Figure 3A in accordance with one or more embodiments of the present disclosure. From the top view, only the drain select gate 332 is visible in Figure 3B. [0045] Figure 3C shows a functional block diagram of a cross-sectional view of the memory cell array shown in Figure 3A after further processing in accordance with one or more embodiments of the present disclosure.
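The diagonal coupling rule of paragraph [0043] amounts to a lane swap between two adjacent select gates at a given segment boundary. The short sketch below traces that swap; the gate labels 'A' and 'B' and the function name are hypothetical, introduced only for illustration.

```python
# Sketch of the diagonal interconnection rule: the segment at matrix
# position (i, j) couples to (i+1, j+1) while the segment at (i, j+1)
# couples to (i+1, j), transposing the two gates between segment rows.
def transpose_step(lanes, j):
    """Return the lane occupancy of segment row i+1 after a
    transposition between lanes j and j+1 in segment row i."""
    nxt = list(lanes)
    # (i, j) -> (i+1, j+1) and (i, j+1) -> (i+1, j)
    nxt[j], nxt[j + 1] = lanes[j + 1], lanes[j]
    return nxt

row_i = ['A', 'B']                   # gates in lanes j, j+1 at row i
row_i1 = transpose_step(row_i, 0)    # occupancy at row i+1
assert row_i1 == ['B', 'A']          # the two gates exchanged lanes
```

Chaining such steps across successive segment rows yields the adjacent/non-adjacent portions described in the text: after the swap, each gate now neighbors a different gate than before.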
Via holes 342 are formed from one edge of the memory cell array to another edge of the memory cell array (e.g., from the top to the bottom). The via holes 342 are formed through the select gates 332, 334 and word lines 333, as shown. Figure 3D shows a functional block diagram of a top view of the memory cell array shown in Figure 3C in accordance with one or more embodiments of the present disclosure. That is, Figure 3D indicates the changes from Figure 3B after further processing to form the via holes 342 through the drain select gate 332. While only four via holes 342 are shown in Figure 3D, embodiments of the present disclosure are not so limited, and can include more (e.g., 20) or fewer via holes 342 than are shown in Figure 3D. [0046] Figure 3E shows a functional block diagram of a cross-sectional view of the memory cell array shown in Figure 3C after further processing in accordance with one or more embodiments of the present disclosure. Figure 3E is shown as an expanded view with respect to Figure 3C, and also is shown including additional via holes 342 beyond those shown in Figures 3C and 3D. Figure 3E shows the edges of the select gates 332, 334 and word lines 333 formed (e.g., etched) into a staircase configuration. The term "staircase" is intended to mean a configuration where a first material, which can be located over a portion of a second material, is recessed in one direction such that the second material below can be accessed. For example, the first material may be formed over the second material but may not extend as far in a horizontal direction as the second material, such that the second material can be accessed near its edges, for example, in a vertical direction. [0047] The edges of the select gates 332, 334 and word lines 333 can be formed into a staircase configuration in at least one dimension (e.g., left-right, front-back), such as in two dimensions as further illustrated in Figure 3F.
Figure 3F shows a functional block diagram of a top view of the memory cell array shown in Figure 3E in accordance with one or more embodiments of the present disclosure. That is, Figure 3F also indicates the changes from Figure 3D after further processing to add more via holes 342 and form the select gates 332, 334 and word lines 333 into a staircase shape in two dimensions. The dimensions of the "stair steps" can be such that individual select gates 332, 334 and word lines 333 can be accessed from above, such as by forming additional connecting structures. Other configurations to provide access from above to individual select gates 332, 334 and word lines 333 can be formed in accordance with the present disclosure. [0048] Figure 3G shows a functional block diagram of a top view of the memory cell array shown in Figure 3F after further processing in accordance with one or more embodiments of the present disclosure. Figure 3G shows the select gates 332, 334 and word lines 333 being further formed to separate those structures associated with a group of memory cells that are operated together (e.g., read, programmed, erased). That is, further forming can separate a first structure 350 (e.g., a first block BLK0) and a second structure 352 (e.g., a second block BLK1). While Figure 3G shows separation of only two structures for simplicity, these methods can be applied to a greater quantity of structures being formed simultaneously. [0049] Some row(s) of via holes 342 may be removed during the process of separating structures 350 and 352. However, removing via holes 342 may not be required. For example, the center row of via holes may not have been formed and therefore need not be removed, or structure separation may be accomplished between rows of formed via holes 342, etc. [0050] Figure 3H shows a functional block diagram of the memory cell array shown in Figure 3G in accordance with one or more embodiments of the present disclosure. That is,
Figure 3H reflects the changes from Figure 3G after further processing to separate the drain select gate (e.g., 332 shown in Figure 3G) into multiple segments (e.g., portions). The drain select gate 332 can be formed so as to be separated into segments that can be electrically coupled to constitute particular select gates (e.g., SGD0 and SGD1). For example, drain select gate segments 354 can be segments associated with SGD0, and drain select gate segments 356 can be segments associated with SGD1. The processing to further separate the drain select gate 332 into multiple segments can be accomplished together or individually. The number of segments into which the drain select gate 332 can be separated can depend on the number of select gates per block and the number of segments per select gate. For example, the quantity by which the drain select gate 332 can be separated in one direction (e.g., the horizontal direction as shown in Figure 3H) can correspond to the number of select gates per block, and the quantity by which the drain select gate 332 can be separated in another direction (e.g., the vertical direction as shown in Figure 3H) can correspond to the number of segments desired per select gate. According to one or more embodiments, select gate segments can be formed by at least one etch in directions that are substantially perpendicular to one another, for example. [0051] Figure 3I shows a functional block diagram of the memory cell array shown in Figure 3H in accordance with one or more embodiments of the present disclosure. That is, Figure 3I reflects the changes from Figure 3H after further processing to form interconnections between the various segments into which the drain select gate 332 was separated. For example, the left-top segment 354 of BLK0 (e.g., structure 350) can be electrically coupled with the right-bottom segment 354 of BLK0, with these two electrically coupled segments 354 being associated with select gate SGD0 of BLK0, as shown in Figure 3I.
Also, the left-bottom segment 356 of BLK0 can be electrically coupled with the right-top segment 356 of BLK0, with these two electrically coupled segments 356 being associated with select gate SGD1 of BLK0. Similar (or different) interconnections can be formed with respect to other separated structures (e.g., BLK1 352). That is, interconnections between select gate segments of BLK1 can be formed to achieve a select gate configuration for BLK1 that is similar to the select gate configuration of BLK0. Alternatively, interconnections between select gate segments can be formed to achieve a select gate configuration for BLK1 that can be different than the select gate configuration of BLK0. The forming of various transpositions is discussed in more detail below with respect to Figures 4-7. [0052] In the manner illustrated as an example in Figures 3A-3I, transpositions can be formed between select gate segments towards forming a transposed select gate configuration to reduce the coupling capacitance between any two particular select gates. For example, transpositions can be formed between select gate segments towards forming the select gate configuration shown in Figure 2B. For simplicity, Figure 3I illustrates the forming of two select gates and two segments per select gate. However, embodiments of the present invention are not limited to these quantities used as examples, and can be extended to form many more select gates, segments per select gate, and/or separated structures (e.g., blocks, sub-blocks, etc.).
Figure 3I additionally shows connections made to the various word lines 333 and the source select gate 334 at their respective edges, which are accessible at the respective stair steps previously formed. [0053] More generally, select gates of a multi-dimensional (e.g., 3D) memory cell string can be separated into multiple segments in the X and/or Y directions, with a select gate segment SG(i, j) being electrically coupled with SG(i+1, j+1) or SG(i+1, j), where i and j are integers. According to certain embodiments, a first material of the select gates (e.g., conductively-doped polysilicon) can be separated into multiple (e.g., M) segments along the select gates. The multiple segments of two select gates can be arranged such that some segments are adjacent and some segments are non-adjacent (i.e., at least one other select gate segment separates the segments of the two select gates). [0054] A second material (e.g., metal) of the select gates can be routed across the cell array. That is, the second material can be routed across and between segments associated with a particular select gate. The second material can be electrically coupled to the first material near each end of a select gate segment. In this manner, the capacitive coupling between adjacent select gates for the second material can be reduced to 2/M of its straight-run value, since the distance that any pair of select gates is adjacent to one another is reduced to 2 of M segments. Reduced capacitive coupling between pairs of select gates can be associated with faster read and program operations in a cost-effective way. [0055] Figure 4A is a functional block diagram of a top view illustrating an implementation of a transposed select gate using a first conductive material and a second conductive material, without a metal shunt, in accordance with one or more embodiments of the present disclosure.
According to various embodiments, the second conductive material can be less resistive (i.e., more conductive) than the first conductive material. The first conductive material can comprise conductively-doped polysilicon. The second conductive material can include a metal material or an alloy containing a metal material. A transition in the positioning of select gate segments in one dimension (e.g., the up/down direction in Figure 4A) can be accomplished with a first select gate having a continuous run of a first conductive material (e.g., conductively-doped polysilicon) to a new location in the dimension, and a second select gate having separated segments of the first conductive material interconnected by a second conductive material (e.g., metal, alloy). However, embodiments of the present disclosure are not so limited, and another material can be utilized for the first conductive material, and another conductive material can be used for the second conductive material. [0056] For example, as shown in Figure 4A, select gates SGD0 and SGD1 can be transposed with one another, and select gates SGD2 and SGD3 can be transposed with one another, at a transition area 471. Turning first to the transposition of select gates SGD0 and SGD1, a conductively-doped polysilicon portion 472 of drain select gate SGD1 can extend substantially perpendicular to a first number of bit lines 458 at one positioning along a dimension. The conductively-doped polysilicon portion 472 of drain select gate SGD1 can jog within the transition area 471 to a different positioning in the dimension, and can extend substantially perpendicular to a second number of bit lines 459 at the different positioning in the dimension. As can be seen in Figure 4A, the run of the conductively-doped polysilicon portion 472 of drain select gate SGD1, including the jog in the transition area 471, can be continuous.
The drain select gate SGD1 does not include a metal portion such as is shown in Figure 2A at 222. [0057] The conductively-doped polysilicon portion 472 of drain select gate SGD1 can jog within the transition area 471 so as to have a portion that can be substantially perpendicular to another portion. That is, the conductively-doped polysilicon portion 472 of drain select gate SGD1 can jog within the transition area 471 using two discrete right-angle turns, as shown in Figure 4A. However, embodiments of the present disclosure are not so limited, and some embodiments can have a jog that can be formed having smooth curves of no more than 90 degrees of arc, or utilizing discrete turns of less than 90 degrees. [0058] One transposition illustrated in Figure 4A can involve select gates SGD0 and SGD1. Therefore, the jog of drain select gate SGD1 in the transition area 471 can be towards the location of the adjacent select gate (e.g., SGD0) with which the transposition is occurring. While Figure 4A shows the conductively-doped polysilicon portion 472 of drain select gate SGD1 jogging through the transition area 471 and drain select gate SGD0 being discontinuous across the transition area 471, this arrangement can be "reversed" such that a conductively-doped polysilicon portion of drain select gate SGD0 can be made continuous (e.g., jogging through the transition area 471) and the conductively-doped polysilicon portion of select gate SGD1 can be made discontinuous across the transition area 471. [0059] As shown in Figure 4A, drain select gate SGD0 segments 474-1, 474-2 can be electrically coupled across the transition area 471 by an interconnection 464. That is, drain select gate SGD0 segment 474-1 can be electrically coupled to drain select gate SGD0 segment 474-2. Interconnection 464 can be, for example, formed of the second conductive material (e.g., metal, alloy).
According to various embodiments, the bit lines (e.g., the first number of bit lines 458, the second number of bit lines 459, bit line 476) can also be formed of the second conductive material (e.g., metal, alloy). Pillars 478 can be formed to electrically couple one or more bit lines (e.g., the first number of bit lines 458, the second number of bit lines 459, bit line 476) with one or more select gates (e.g., SGD0, SGD1, SGD2, SGD3). [0060] The transposition of drain select gates SGD2 and SGD3 can be accomplished in a similar fashion. Figure 4A shows a continuous portion of drain select gate SGD2 jogging in the transition area 471, and segments of drain select gate SGD3 formed from the first conductive material being discontinuous across the transition area 471 and electrically coupled by an interconnect 479 routed over the continuous jogging portion of drain select gate SGD2. The transposition of drain select gates SGD2 and SGD3 is not limited to the configuration illustrated in Figure 4A, and can alternatively be implemented with drain select gate SGD3 being continuous through the transition area 471 and drain select gate SGD2 being separated into segments that are interconnected by a metal interconnection routed over the jogging portion of drain select gate SGD3. [0061] For simplicity, Figure 4A shows transpositions between adjacent drain select gates. However, other transposition configurations may be formed. For example, transpositions can be formed between non-adjacent drain select gates using a jog that traverses at least one intervening drain select gate. A larger quantity of drain select gates can involve a greater quantity of transpositions occurring within a transition area 471. [0062] Figures 4B and 4C are functional block diagrams of cross-sectional views of the implementation of the transposed select gates shown in Figure 4A in accordance with one or more embodiments of the present disclosure.
Figure 4B is a view taken along cut line 4B-4B shown in Figure 4A and shows one possible stack structure of word lines 473 located between a source select gate 475 and drain select gates (e.g., SGD0 474-1 and 474-2, SGD1 472), which can all be located between perpendicularly-oriented bit lines (e.g., the first number of bit lines 458, the second number of bit lines 459). A communication path 478, which can be formed in the via holes shown in Figures 3D and 3F-3I, connects the source line 477 and a bit line 476 through the drain select gates 472, 474-1, and 474-2, the source select gate 475, and the word lines 473. Although Figure 4B shows four word lines 473 for simplicity of illustration, embodiments of the present disclosure are not so limited, and more (e.g., eight) or fewer word lines 473 may be included in the stack structure. [0063] Figure 4C is a view taken along cut line 4C-4C shown in Figure 4A and shows that interconnection 464, which connects drain select gate SGD0 segments 474-1, 474-2, can be formed so as to elevate the interconnection 464 above the elevation of the select gate SGD1 472, over which interconnection 464 is routed to accomplish the transposition. According to some embodiments, the interconnection 464 can be formed at the same elevation as the bit lines (e.g., BL0, BL1), and can be formed from the same conductive material as the bit lines (e.g., metal, alloy). The similar hatching pattern of BL0, BL1, and interconnection 464 indicates forming of similar materials. As such, the interconnection 464 can be formed by a same process and at a same time that the bit lines are formed. [0064] Interconnection 464 can be electrically coupled to the drain select gate SGD0 segment 474-2 by a contact 462 between the first conductive material (e.g., conductively-doped polysilicon) of the drain select gate SGD0 segment 474-2 and the second conductive material (e.g., metal, alloy) of the interconnection 464, as shown in Figures 4A and 4C.
Interconnection 464 can similarly be electrically coupled to the drain select gate SGD0 segment 474-1 by a contact 462 between the first conductive material (e.g., conductively-doped polysilicon) of the drain select gate SGD0 segment 474-1 and the second conductive material of the interconnection 464, as shown in Figure 4A. According to various embodiments, the interconnection 464 can be electrically coupled near the ends of the drain select gate SGD0 segments 474-1, 474-2 within (or nearest) the transition area 471. The interconnection 464 can be routed over the jog in the transition area 471 of the select gate with which the transposition is occurring. The transition area 471 can be located between the first number of bit lines 458 and the second number of bit lines 459 such that the interconnection 464 does not interfere with bit line routing at the same elevation, and vice versa. [0065] Figure 5A is a functional block diagram of a top view illustrating an implementation of transposed select gates using segments of a first conductive material (without a shunt) electrically coupled by an interconnection of a third conductive material in accordance with one or more embodiments of the present disclosure. The arrangement and attributes of the features shown in Figure 5A are similar to those shown and described with respect to Figure 4A, with the exceptions described below. That is, transition area 571 is similar to transition area 471, first conductive material 572 is similar to material 472, segments 574-1 and 574-2 are similar to drain select gate SGD0 segments 474-1 and 474-2, bit lines 576, 558, and 559 are similar to bit lines 476, 458, and 459, respectively, and pillars 578 are similar to pillars 478. [0066] Drain select gate SGD0 segments 574-1, 574-2 can be electrically coupled across the transition area 571 by an interconnection 567. That is, drain select gate SGD0 segment 574-1 can be electrically coupled to drain select gate SGD0 segment 574-2.
Interconnection 567 can be, for example, formed of a third conductive material (e.g., metal, alloy). The third conductive material can be of the same or a different composition than that of the second conductive material (e.g., metal, alloy), which can be used to form the bit lines 558 and 559 as discussed above with respect to Figures 4A-4C. According to various embodiments, the third conductive material can be less resistive (i.e., more conductive) than the second conductive material. However, embodiments of the present disclosure are not so limited, and in some embodiments interconnection 567 can be formed of the second conductive material. In some embodiments, particular features can be formed at a same or different elevation as other features, and/or formed at a same or different step in the forming process as other features. Interconnection 567 can be electrically coupled at each end by a contact 562, as discussed further below. Additional interconnection(s) 589 can electrically couple other drain select gate segments (e.g., SGD3) across the transition area 571. The additional interconnection(s) 589 can be electrically coupled to the other drain select gate segments (e.g., SGD3) by contacts 562 in a similar manner as discussed above. [0067] Figures 5B and 5C are functional block diagrams of cross-sectional views of the implementation of the transposed select gates shown in Figure 5A in accordance with one or more embodiments of the present disclosure. Figure 5B is a view taken along cut line 5B-5B shown in Figure 5A and shows one possible stack structure of word lines 573 located between a source select gate 575 and drain select gates SGD0 574-1 and 574-2, SGD1 572, which can all be located between a perpendicularly-oriented source line 577 and a bit line 576. The arrangement and attributes of the features shown in Figure 5B are similar to those shown and described with respect to Figure 4B.
[0068] Figure 5C is a view taken along cut line 5C-5C shown in Figure 5A and shows that an interconnection 567, which connects drain select gate SGD0 segments 574-1 and 574-2, can be formed so as to elevate the interconnection 567 not only above the elevation of the select gate SGD1 572, over which interconnection 567 is routed to accomplish the transposition, but also interconnection 567 can be formed above the elevation at which the bit lines (e.g., B0, B1, 576, 558, 559) are formed. According to various embodiments, the interconnection 567 can be electrically coupled near ends of the drain select gate SGD0 segments 574-1 and 574-2 within the transition area 571. The interconnection 567 can pass over the jog in the transition area 571 of the select gate with which the transposition is occurring. The transition area 571 can be located between the first number of bit lines 558 and the second number of bit lines 559. Interconnection 567 can be formed so as not to interfere with bit line routing, and vice versa. [0069] Interconnection 567 can be electrically coupled to each of the drain select gate SGD0 segments 574-1 and 574-2 by a respective first contact 569, a respective portion of the second conductive material 564, and a respective second contact 562. In this manner, an electrical path can be established between interconnection 567 and the drain select gate SGD0 segments, which are formed of the first conductive material (e.g., conductively-doped polysilicon). Communication path 578 can be formed to electrically couple one or more bit lines 576 with one or more select gates (e.g., SGD0, SGD1, SGD2, SGD3). [0070] Figure 6A is a functional block diagram of a top view illustrating an implementation of transposed select gates using segments of a first conductive material (with a shunt) electrically coupled by an interconnection of a third conductive material in accordance with one or more embodiments of the present disclosure.
The arrangement and attributes of the features shown in Figure 6A are similar to those shown and described with respect to Figure 4A with the exceptions described below. That is, transition area 671 is similar to transition area 471, first conductive material 672 is similar to first conductive material 472, segments 674-1 and 674-2 are similar to drain select gate SGD0 segments 474-1 and 474-2, bit lines 676, 658, and 659 are similar to bit lines 476, 458, and 459 respectively, and pillars 678 are similar to pillars 478. [0071] Conductive shunts 695-1 and 695-2, 697-1 and 697-2 can be formed of a third conductive material (e.g., metal alloy), which can be of a different composition than the first and/or second conductive materials. However, embodiments of the present disclosure are not so limited, and the composition of the first, second, and/or third conductive material can be the same. For example, the first, second, and/or third conductive materials can be of a same composition in some embodiments, but formed at different elevations in an apparatus. Also, other (e.g., non-metallic) conductive materials can be utilized for the bit lines, second conductive and/or third conductive materials, such as conductively-doped polysilicon. [0072] The conductive shunts 697-1 and 697-2 corresponding to the first conductive material 672 of select gate SGD1 can be discontinuous, as shown in Figure 6A. The other select gate involved in the transposition (e.g., SGD0) can have separated segments 674-1 and 674-2 comprised of the first conductive material (e.g., conductively-doped polysilicon) and corresponding conductive shunts 695-1 and 695-2 that are electrically coupled via an interconnection 667, which is continuous with the shunts 695-1 and 695-2 and which thereby electrically couples the separated segments of first conductive material 674-1 and 674-2.
[0073] The interconnection 667 can be routed over the jog in the transition area 671 of the first conductive material 672 for the select gate (e.g., SGD1) with which the transposition is occurring. Interconnection 667 can be formed so as not to interfere with routing of the bit lines 658, 659, the intervening discontinuous portions of second conductive material 693 (e.g., shunts), and the number of conductive source lines 691 by being formed at a different elevation. The interconnection 667 can be formed of the third conductive material (e.g., metal alloy). [0074] As shown in Figure 6A, select gates SGD0 and SGD1 can be transposed with one another, and select gates SGD2 and SGD3 can be transposed with one another at the transition area 671. Within the transition area 671, shunts 693 and a number of conductive source lines 691 can be formed between the first number of bit lines 658 and a second number of bit lines 659. The shunts 693 and conductive source lines 691 can be formed to be substantially parallel to the first number of bit lines 658 and the second number of bit lines 659. Other shunts 693 can be formed outside the transition area 671 substantially parallel to the first number of bit lines 658 and the second number of bit lines 659. The various shunts 693 and conductive source lines 691 can be formed of the same material (e.g., third conductive material) as the first number of bit lines 658 and the second number of bit lines 659, and therefore can be formed by a same process as the first number of bit lines 658 and the second number of bit lines 659. [0075] Segments of the first conductive material 674-1 and 674-2 and conductive shunts 695-1 and 695-2 of drain select gate SGD0 segments can be electrically coupled across the transition area 671 by an interconnection 667. Interconnection 667 can be electrically coupled to an end of one of the drain select gate SGD0 segments, for example, via a contact 669, shunt 693, and
contact 662, as discussed further below. Other interconnection(s) 689 can electrically couple other drain select gate segments (e.g., of SGD3) across the transition area 671. These other interconnection(s) 689 can be electrically coupled to the other drain select gate segments (e.g., of SGD3) by contacts 669, shunts 693, and contacts 662 in a similar manner as discussed above. [0076] Figures 6B and 6C are functional block diagrams of cross-sectional views of the implementation of transposed select gates shown in Figure 6A in accordance with one or more embodiments of the present disclosure. Figure 6B is a view taken along cut line 6B-6B shown in Figure 6A and shows one possible stack structure of word lines 673 located between a source select gate 675 and drain select gates (e.g., SGD0 674-1 and 674-2, SGD1 672), which can all be located between perpendicularly-oriented source line 677 and a bit line 676. The arrangement and attributes of the features shown in Figure 6B are similar to those shown and described with respect to Figure 4B, with the addition of the conductive shunt 695-1 of SGD0 and the conductive shunt 697-1 of SGD1 located above the bit line 676. [0077] Figure 6C is a view taken along cut line 6C-6C shown in Figure 6A and shows a number of shunts 693 and a number of conductive source lines 691 located between bit lines BL0 and BL1 at a same elevation. The number of shunts 693, the number of conductive source lines 691, and bit lines BL0 and BL1 can be formed of a second conductive material (e.g., metal alloy), as indicated by a same cross-hatching in Figure 6C. [0078] Figure 6C also shows that an
interconnection 667, which connects drain select gate SGD0 segment 674-1 and conductive shunt 695-1 to drain select gate SGD0 segment 674-2 and conductive shunt 695-2, can be formed so as to elevate the interconnection 667 not only above the elevation of the first conductive material 672 for select gate SGD1 but also above the elevation of the bit lines (e.g., BL0, BL1), the number of shunts 693, and the number of conductive source lines 691. According to various embodiments, the interconnection 667 can be electrically coupled near ends of the first conductive material segments 674-1 and 674-2 and the conductive shunt segments 695-1 and 695-2 of the drain select gate SGD0 within or adjacent to the transition area 671, as shown in Figure 6A. [0079] Interconnection 667 and conductive shunt 695-2 can be electrically coupled to a conductive material portion 664 by a first contact 669, and conductive material portion 664 can be electrically coupled to the segment of first conductive material 674-2 for drain select gate SGD0 by a second contact 662. In this manner, an electrical path can be established between interconnection 667 and conductive shunt 695-2 and the segment of first conductive material 674-2 for drain select gate SGD0. Communication path 678 can be formed to electrically couple one or more bit lines (e.g., BL0, BL1) through the drain select gates (e.g., SGD0, SGD1), word lines 673, and source select gate 675 to the source line 677. [0080] Figure 7A is a functional block diagram of a top view illustrating an implementation of transposed select gates using segments of a first conductive material (and a shunt) electrically coupled by interconnections of second and third conductive materials in accordance with one or more embodiments of the present disclosure. The arrangement and attributes of the features shown in Figure 7A are similar to those shown and described with respect to Figure 6A with the exceptions described below.
That is, transition area 771 is similar to transition area 671, segments 774-1 and 774-2 are similar to drain select gate SGD0 segments 674-1 and 674-2, conductive shunts 795-1, 795-2, 797-1, and 797-2 are similar to conductive shunts 695-1, 695-2, 697-1, and 697-2 respectively, bit lines 776, 758, and 759 are similar to bit lines 676, 658, and 659 respectively, intervening discontinuous portions of second conductive material 793 (e.g., shunts) are similar to shunts 693, and pillars 778 are similar to pillars 678. [0081] The arrangement shown in Figure 7A is different from the arrangement shown in Figure 6A in some respects. For example, Figure 6A shows first conductive material 672 of drain select gate SGD1 being continuous across, and jogging within, the transition area 671. In contrast, Figure 7A shows the first conductive material of drain select gate SGD1 being discontinuous across the transition area 771. Instead, two segments of first conductive material 792-1 and 792-2, and conductive shunts 797-1 and 797-2 of drain select gate SGD1, can be interconnected across the transition area 771 by an interconnect 757. [0082] Interconnect 757 can be electrically coupled to the conductive shunt 797-2 of drain select gate SGD1 on the right side of the transition area 771 by a contact 753, and interconnect 757 can be electrically coupled to the segment of the first conductive material 792-1 and conductive shunt 797-1 of drain select gate SGD1 on the left side of the transition area 771 by a contact 769. According to some embodiments, the interconnection 757 can be formed at a same elevation as the conductive shunts 797-1 and 797-2. However, embodiments of the present disclosure are not so limited, and interconnection 757 can be formed at a different elevation than one or both of the conductive shunts 797-1 and 797-2, and coupled thereto by a contact as shown in Figures 7A-7C.
According to various embodiments, interconnections 757 and 767 can be formed at different elevations to enable one crossing over the other. Furthermore, interconnection 757 can be formed to pass over another interconnection (e.g., 767) or under another interconnection (e.g., 789) in the transition area 771, as shown in Figure 7A. [0083] Segments of the first conductive material 774-1 and 774-2 and conductive shunts 795-1 and 795-2 of drain select gate SGD0 segments can be electrically coupled across the transition area 771 by an interconnection 767. According to some embodiments, the interconnection 767 can be formed at a same elevation as the conductive shunts 795-1 and 795-2. However, embodiments of the present disclosure are not so limited, and interconnection 767 can be formed at a different elevation than one or both of the conductive shunts 795-1 and 795-2, and coupled thereto by a contact as shown in Figures 7A-7C. Interconnection 767 can electrically couple ends of drain select gate SGD0 segments, for example, via a contact 769 at each end of interconnection 767. In a similar manner, other interconnections 757 and 789 can electrically couple respective other drain select gate segments (e.g., SGD2, SGD3), including the first conductive material and corresponding conductive shunts, across the transition area 771, as shown in Figure 7A. [0084] Figures 7B and 7C are functional block diagrams of cross-sectional views of the implementation of transposed select gates shown in Figure 7A in accordance with one or more embodiments of the present disclosure. Figure 7B is a view taken along cut line 7B-7B shown in Figure 7A and shows a stack structure of word lines 773 located between a source select gate 775 and drain select gates (e.g., SGD0 774-1, SGD1 792-1), which can all be located between perpendicularly-oriented source line 777 and bit line 776.
The arrangement and attributes of the features shown in Figure 7B are similar to those shown and described with respect to Figure 6B with the exception that in Figure 7B the first conductive material for SGD1 792-1 is not continuous across the transition area 771, as opposed to the continuous first conductive material 672 shown in Figure 6B. [0085] Figure 7C is a view taken along cut line 7C-7C shown in Figure 7A. The arrangement and attributes of the features shown in Figure 7C are similar to those shown and described with respect to Figure 6C with the exception of the additional interconnection 757 and associated structures described further below. [0086] Figure 7C also shows that one of the shunts 793 can be electrically coupled to a segment of first conductive material 792-1 and interconnect 757 and conductive shunt 797-1 by contact 769. Another one of the shunts 793 can be electrically coupled to a segment of first conductive material 774-2 and interconnect 767 and conductive shunt 795-2 by contact 769. [0087] According to one or more embodiments, interconnection 757 can be formed from the third conductive material (e.g., metal alloy), which can be used to form the bit lines 758, 759, 776 and the shunts 793. According to one or more embodiments, interconnection 767 can be formed from the third conductive material (e.g., metal alloy), which can be used to form the conductive shunts 795-1 and 795-2 of SGD0 and the conductive shunts 797-1 and 797-2 of SGD1. However, embodiments of the present disclosure are not so limited, and interconnection 757 and interconnection 767 can be formed of a same conductive material (e.g., second conductive material, third conductive material) or another (e.g., non-metallic) conductive material. For example, according to various embodiments of the present disclosure, interconnection 767 can be formed of a fourth conductive material, different from the third conductive material used to form interconnection 757.
[0088] Interconnection 757 can be formed at a different elevation than interconnection 767 such that the interconnections do not interfere with one another as they cross one another. For example, interconnection 757 can be formed at a lower elevation than interconnection 767 such that interconnection 767 can be routed further away from the word lines 773 than interconnection 757. Alternatively, interconnection 767 can be formed at a lower elevation than interconnection 757 such that interconnection 757 can be routed further away from the word lines 773 than interconnection 767. [0089] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of various embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the various embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. [0090] In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim.
Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
A computing device (200, 400) has a plurality of subsystems (413, 424) located in subsections that are moveable with respect to each other. Communication between the subsections is accomplished with wireless transceivers (104, 106) transmitting over the air gap (126, 127) interface separating the subsections. Data from multiple communicating subsystems (413, 424) in the subsections is multiplexed (102) into a single data stream and encoded (103) into the communication protocol of the wireless transceivers. The encoded data stream is transmitted to a compatible transceiver (106) where it is decoded (107). The decoded data stream is demultiplexed (108) into individual data streams for each of the communicating subsystems. The wireless transceivers (104, 106) include multiple communication protocols and transmission frequencies from radio frequencies to optical frequencies. Optical fibers, transmission lines or waveguides may be used to transmit signals within each subsection depending on the wireless technology and protocol.
CLAIMS: 1. A method of communicating comprising the steps of: hingedly coupling first and second sections of a computing device; positioning a first wireless transceiver having a communication protocol in said first section (301); positioning a second transceiver having said communication protocol in said second section (302); multiplexing data from one or more first subsystems in said first section forming a first data stream (303); encoding said first data stream into said communication protocol forming a first encoded data stream (304); and transmitting said first encoded data stream from said first transceiver to said second transceiver across an air gap separating a first surface of said first section and a second surface of said second section (305). 2. The method of claim 1 further comprising the steps of: decoding said first encoded data stream received by said second transceiver forming a third data stream (307); demultiplexing said third data stream forming system data for said one or more second subsystems (308); and coupling said system data to a corresponding one of said second subsystems (309). 3. The method of claim 1, further comprising the steps of: multiplexing data from one or more second subsystems in said second section forming a second data stream; encoding said second data stream into said communication protocol forming a second encoded data stream; and transmitting said second encoded data stream from said second transceiver to said first transceiver across said air gap separating said first surface of said first section and said second surface of said second section. 4. The method of claim 3 further comprising the steps of: decoding said second encoded data stream received by said first transceiver forming a fourth data stream; demultiplexing said fourth data stream forming system data for said one or more first subsystems; and coupling said system data to a corresponding one of said first subsystems. 5. 
The method of claim 1, wherein said first and second surfaces are substantially orthogonal to an axis of rotation corresponding to a hinging means coupling said first and second section. <Desc/Clms Page number 8> 6. The method of claim 1, wherein said first and second surfaces are substantially parallel to an axis of rotation corresponding to a hinging means coupling said first and second section. 7. The method of claim 5, wherein said first surface comprises a first optical fiber coupled to said first transceiver and said second surface comprises a second optical fiber coupled to said second transceiver. 8. The method of claim 6, wherein said first surface comprises a first optical fiber coupled to said first transceiver and said second surface comprises a second optical fiber coupled to said second transceiver. 9. The method of claim 1, wherein said first and second surfaces have an angular relative position and are orthogonal to an axis of rotation corresponding to a hinging means coupling said first and second sections. 10. The method of claim 9, wherein said first surface comprises a first electromagnetic transducer coupled to said first transceiver and said second surface comprises a second electromagnetic transducer coupled to said second transceiver. 11. The method of claim 1, wherein said one or more first subsystems comprise a wireless local area network device, a universal serial port device and a liquid crystal display device. 12. The method of claim 1, wherein said one or more second subsystems comprise a wireless local area network device, a universal serial port device and a storage device coupled to a system motherboard. 13. 
A computing device comprising: a first section having one or more first subsystems; a second section hingedly coupled to said first section and having one or more second subsystems; a first wireless transceiver integral to said first section and having a communication protocol; a second transceiver integral to said second section and having said communication protocol; a first multiplexing circuit for multiplexing data from said one or more first subsystems forming a first data stream; a first encoding circuit for encoding said first data stream into said communication protocol forming a first encoded data stream; a first demultiplexing circuit for demultiplexing data from said one or more second subsystems forming a third data stream; a means for coupling said third data stream to said one or more first subsystems; and an air gap separating a first surface of said first section and a second surface of said second section, said first surface having a first communication path coupled to said first transceiver and said second surface having a second communication path coupled to said second transceiver, wherein said first transceiver transmits said first encoded data stream to said second transceiver. <Desc/Clms Page number 9> 14. The computing device of claim 13 further comprising: a second multiplexing circuit for multiplexing data from said one or more second subsystems forming a second data stream; a second demultiplexing circuit for demultiplexing data from said one or more first subsystems forming a fourth data stream; a means for coupling said fourth data stream to said one or more second subsystems; and a second encoding circuit for encoding said second data stream into said communication protocol forming a second encoded data stream, wherein said second transceiver transmits said second encoded data stream to said first transceiver. 15. 
The computing device of claim 13, wherein said first and second surfaces are substantially orthogonal to an axis of rotation corresponding to a hinging means coupling said first and second section. 16. The computing device of claim 13, wherein said first and second surfaces are substantially parallel to an axis of rotation corresponding to a hinging means coupling said first and second section. 17. The computing device of claim 15, wherein said first surface comprises a first optical fiber coupled to said first transceiver and said second surface comprises a second optical fiber coupled to said second transceiver. 18. The computing device of claim 16, wherein said first surface comprises a first optical fiber coupled to said first transceiver and said second surface comprises a second optical fiber coupled to said second transceiver. 19. The computing device of claim 13, wherein said first and second surfaces have an angular relative position and are orthogonal to an axis of rotation corresponding to a hinging means coupling said first and second sections. 20. The computing device of claim 19, wherein said first surface comprises a first electromagnetic transducer coupled to said first transceiver and said second surface comprises a second electromagnetic transducer coupled to said second transceiver. 21. The computing device of claim 13, wherein said one or more first subsystems comprise a wireless local area network device, a universal serial port device, and a liquid crystal display device. 22. The computing device of claim 13, wherein said one or more second subsystems comprise a wireless local area network device, a universal serial port device and a storage device coupled to a system motherboard.
<Desc/Clms Page number 1> WIRELESS INTERFACE TECHNICAL FIELD The present invention relates in general to wireless communication and in particular to wireless communication between sub-assemblies in portable, laptop and handheld computers. BACKGROUND ART Portable computers (e.g., laptop, handheld and subcompacts) achieve their small overall size and volume by folding their largest component, their display screen and supporting lid subassembly, when not in use. Unfortunately, hinging the display subsection makes it difficult to communicate with or power devices in the lid portion. This occurs because the wires used to connect devices in the lid with devices in the base must often snake within the hinge itself and are exposed to constant bending and unbending. Further, connections to devices in the lid (e.g., LCD graphic displays, USB cameras and other devices) may require many signal wires which must be compressed into a tiny area of the hinge width. Compressing these wires into a small area requires small signal trace sizes which in turn creates problems of signal cross-talk between traces and other types of interference. The problem is exacerbated when wireless local area network (WLAN) and wireless wide area network (WWAN) connectivity is added to these types of portable computers. WLAN and WWAN employ radio technologies and each require the inclusion of specialized antennas that work best when positioned in the highest location possible within a unit, usually the lid area of the portable computer. The interconnects required between the motherboard of the portable computer and a WLAN and/or WWAN radio subsystem located in its lid further complicate the hinge wiring problem by adding more signals with higher data rates. In addition, marketing requirements may dictate that all parts of a portable computer system be as thin as possible. Therefore, the wiring system elements (e.g.
, flexible circuits, connectors, shielding and wires) may ultimately limit the marketability of a particular portable computer by limiting the thickness of the lid or in some cases even the main case itself. Even if the hinge wiring harness is enlarged to include WLAN or WWAN (or both) radio interface signals, the signals may interfere or be interfered with by the other signals such as those for the LCD display. Finally, the cost of the wiring system elements for the various subsystems in a portable computer is non-trivial. It has been found that only specialized flexible circuit substrates are able to carry the required number of signal lines with the flexibility and durability needed in the demanding environment of the hinge area. Such wiring subassemblies are costly to build and to assemble within the body of the computer case itself. Further, wiring subsystems subject to movement (folding, sliding, etc.) are a major contributor to original equipment manufacturer (OEM) customer service costs due to increased calls and product returns. To enable modern portable computer systems to continue to add desired technologies and to retain their marketable physical size and weight, there is a need for a way to reduce the wiring required for communicating signals between devices in the main body and devices in the lid of portable computer systems. <Desc/Clms Page number 2> DISCLOSURE OF INVENTION A solution to the problem of low-cost portable computer interconnection resides with electromagnetic communications technology: using radio, magnetic, or optical methods to wirelessly communicate between moveable sections that are mechanically connected. The present invention uses wireless technology to communicate between subsystems in the case and the lid of a portable computer. A variety of electromagnetic communications may be used to span a short air gap distance between these moveable sections. 
The lid and base are hingedly coupled so that they may be moved relative to each other. The data for one or more subsystems in the lid are multiplexed into a single data stream which is then encoded into the protocol for a base transceiver. The data is wirelessly transmitted across an air gap separating the base transceiver and a lid transceiver. The received data is decoded and demultiplexed and then coupled to subsystems in the lid section. Likewise, data for one or more subsystems in the base are multiplexed into a single data stream and encoded into the protocol for the lid transceiver. Data may be transmitted bi-directionally on a single link or transmitted on multiple wireless links. The wireless links may employ a variety of protocols and electromagnetic spectra in communicating across the air gap separating the lid and base sections. The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. BRIEF DESCRIPTION OF THE DRAWINGS For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings in which: FIG. 1 is a block diagram illustrating processing of signals which are transmitted from a subsystem in a first section to a subsystem in a second section according to embodiments of the present invention; FIG. 2 is a view of portions of a computing device according to one embodiment of the present invention; FIG. 3 is a flow diagram of method steps used in embodiments of the present invention; FIG. 4A and FIG. 4B are views of a computing device illustrating transmitting data across an air gap according to one embodiment of the present invention; and FIG. 
5 illustrates a side view of a computing device where the surfaces of the two sections of the computing device containing wireless communication devices have angular relative positions. MODE(S) FOR CARRYING OUT THE INVENTION In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, it will be obvious to those skilled in the art that the present invention may be practiced without such specific details. In other instances, well-known circuits may be shown in block diagram form in order not to obscure the present invention in unnecessary detail. For the most part, <Desc/Clms Page number 3> details concerning timing considerations and the like have been omitted inasmuch as such details are not necessary to obtain a complete understanding of the present invention and are within the skills of persons of ordinary skill in the relevant art. Refer now to the drawings wherein depicted elements are not necessarily shown to scale and wherein like or similar elements are designated by the same reference numeral through the several views. In general, the present invention utilizes an air gap interface to couple needed signals between subsystems in the lid and the motherboard in the base. The air gap interface comprises the area where the lid and the base move relative to each other. Since the lid may have multiple subsystems with different communication protocols, it is desirable to consolidate the multiple data paths into a single or at most a dual communication path, though more communication paths are within the scope of the present invention. This allows the physical interface to be likewise consolidated. FIG. 1 is a block diagram illustrating the consolidation of communication paths and the elements needed to allow an air gap interface as the main communication link. 
Exemplary lid 101 comprises a wireless local area network (WLAN) device, a universal serial bus (USB) device and a liquid crystal display (LCD). The actual devices are not shown in FIG. 1; rather, their inputs/outputs (I/O) are identified. WLAN 116 inputs signals for the WLAN. Likewise, WLAN 119 outputs signals for the WLAN. The USB communicates using input USB 117 and output USB 120. The LCD subsystem used to display information communicates using input LCD 118 and output LCD 121. Multiplexer (MUX)/demultiplexer (DMUX) 102 is used to consolidate multiple communication paths (e.g., WLAN 119, USB 120 and LCD 121) into a single path (e.g., 123). In the same manner, consolidated data (CD) in path 122 is demultiplexed into WLAN 116, USB 117, and LCD 118. CD 122 is decoded and CD 123 is encoded. Data for the lid subassemblies are received in CD 122 and decoded by Encoder/Decoder 103. CD 123 is encoded to be compatible with the particular communication protocol used in the physical electromagnetic interface (PEI) 104. A communication protocol refers to hardware and software standards that govern data transmission between devices. The term "protocol" is very generic and is used for hundreds of different communication methods. A protocol may define the packet structure of the data transmitted or the control commands that manage the session, or both. In this disclosure, protocol also includes the modulation/demodulation scheme necessary to encode and decode data relative to the electromagnetic waves associated with a particular wireless interface. CD 123 is transmitted over air gap interface 105 and is received in PEI 106. Encoder/Decoder 107 decodes CD 123 into CD 125. CD 125 is demultiplexed in MUX/DMUX 108 to produce individual data streams WLAN 113, USB 114 and LCD 115, which are received in appropriate circuits in main logic board 109.
In the same manner, signals WLAN 110, USB 111, and LCD 112 are multiplexed in MUX/DMUX 108 to produce CD 124, which is encoded by Encoder/Decoder 107 and coupled to PEI 106; PEI 106 transmits the data over air gap interface 105 to PEI 104, which couples it to Encoder/Decoder 103 for decoding into CD 122. CD 122 is demultiplexed in MUX/DMUX 102, producing individual signals WLAN 116, USB 117 and LCD 118. PEI 104 and 106 are compatible transceiver systems and may operate over a wide range of electromagnetic frequency spectra and use one of many possible modulation schemes. The particular communication system, PEI 104 and PEI 106, would obviously need to be compatible with the devices within a particular system and not cause undue interference or be susceptible to interference from standard devices within a portable computer. It is understood that the system of FIG. 1 may be used between any two locations in a computer which may be connected using wired technologies and still be within the scope of the present invention. One embodiment of the present invention utilizes light to communicate to the lid. In this embodiment, light may be coupled to the air gap interface using a flexible light pipe such as an optical fiber. Light may also be used to directly communicate through the air gap interface in a broadcast mode. Much like light from a light bulb will fill a room, modulated light from an optical source may be used to flood an area around the portable computer employing embodiments of the present invention. Optical fiber may be the best solution since optical fibers have a large bandwidth and are manufactured in high volume. Fiber optics offers great flexibility in coupling modulated light from a point of generation or reception to or from the air gap interface area; however, placing a modulated light source and receiver in the air gap interface area is also within the scope of the present invention. FIG. 2 illustrates using light and optical fiber to couple light to an air gap interface area.
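The frame-level consolidation performed by MUX/DMUX 102 and 108 can be sketched in software. The following Java sketch is a hypothetical illustration, not taken from the patent: each subsystem payload is tagged with a channel identifier and a length byte before being interleaved into the consolidated stream, and the receiving side filters frames back out by channel. The channel numbers and frame layout are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

class ChannelMux {
    static final int WLAN = 0, USB = 1, LCD = 2;

    // Prepend a one-byte channel tag and a one-byte length to each payload,
    // so frames from different subsystems can share one consolidated stream.
    static byte[] frame(int channel, byte[] payload) {
        byte[] f = new byte[payload.length + 2];
        f[0] = (byte) channel;
        f[1] = (byte) payload.length;
        System.arraycopy(payload, 0, f, 2, payload.length);
        return f;
    }

    // Walk the consolidated stream and collect the payloads addressed to one
    // channel, skipping frames that belong to the other subsystems.
    static List<byte[]> demux(byte[] stream, int wantedChannel) {
        List<byte[]> out = new ArrayList<>();
        int i = 0;
        while (i < stream.length) {
            int ch = stream[i] & 0xFF;
            int len = stream[i + 1] & 0xFF;
            if (ch == wantedChannel) {
                byte[] p = new byte[len];
                System.arraycopy(stream, i + 2, p, 0, len);
                out.add(p);
            }
            i += 2 + len;
        }
        return out;
    }

    // Concatenate two frames into one consolidated stream (e.g., path 123).
    static byte[] concat(byte[] a, byte[] b) {
        byte[] s = new byte[a.length + b.length];
        System.arraycopy(a, 0, s, 0, a.length);
        System.arraycopy(b, 0, s, a.length, b.length);
        return s;
    }
}
```

In hardware the same role is played by the MUX/DMUX blocks; the tag-and-length framing here merely stands in for whatever interleaving scheme a real implementation would choose.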
Cut-outs are shown dotted (e.g., 208) exposing elements which are normally hidden from view. FIG. 2 illustrates portions of an exemplary base 203 and lid 201 of a computing device 200. Lid 201 has a hole 206 into which a cylindrical element 204 fits. Hole 206 and element 204 operate as a hinge element. A corresponding hinge element (not shown) would be positioned on the other side of lid 201 to complete the hinge assembly. Lid 201 has an optical fiber 207 disposed substantially coaxial with hole 206. Cylindrical element 204 also has a hole 208 so that light from optical fiber 205 may reach optical fiber 207 via an air gap (between optical fiber 205 and optical fiber 207). Lid 201 is shown closed over base 203 with separating gap 210. Lid 201 may be rotated (opened) about axis 209 on its hinge elements (e.g., element 204 and hole 206). Optical fiber 205 in base 203 does not move. While optical fiber 207 does move, its axis of communication remains fixed and directed toward optical fiber 205. Optical fiber 205 couples to a transceiver (not identified) in a sub-assembly (e.g., motherboard 212) in base 203. Likewise, optical fiber 207 couples to a transceiver (not identified) in sub-assembly 213 in lid 201. FIG. 2 illustrates one example of how embodiments of the present invention may use an air gap interface to optically communicate without the communication elements physically touching. In computing device 200, the means of carrying the data is accomplished by placing the waveguides (in this case optical fibers 205 and 207) in the axis of rotation of the hinge, thus avoiding actual touching of wires, tubes, or other physical devices which could be damaged by twisting or binding. Data may be carried by two separate waveguides, or bi-directional data may be carried over the same waveguide (illustrated by arrow 202), depending on the signal frequencies and modulation methods. FIGS.
4A and 4B illustrate views of a computing device 400 according to an embodiment of the present invention. In this example, light is used in the exemplary computing device 400 to simplify explanation of the present invention. It is understood that other electromagnetic frequencies may be used and still be within the scope of the present invention. In FIG. 4A, a section A 403 and a section B 402 are coupled with a hinge means (not shown) such that section B 402 may be rotated relative to section A 403. As section B 402 is rotated from position 1 to position 3, surface 406 of section B 402 is separated by an air gap and remains substantially a fixed distance from surface 404 of section A 403. Optical fiber 412 couples transceiver (TR) 411 to surface 406 of section B 402. Likewise, optical fiber 408 couples TR 401 to a desired position within base 403. Surface 404 is formed as a focusing lens for light signals 405 transmitted by TR 411. This allows light (optical signals 405) from optical fiber 412 to be focused onto optical fiber 408 relatively independent of the position of TR 411 as section B 402 is rotated. If TR 401 is transmitting, optical fiber 408 is constructed so that light 430 has a radial distribution. While the magnitude of light 430 as "seen" by TR 411 may vary with its radial position, proper modulation techniques would allow communication over the air gap separating section A 403 and section B 402. Vectors 407, 409 and 410 illustrate that the magnitude of the signal received by TR 411 may vary with position. The material in section A 403 in and around surface 404 is substantially transparent to the frequency of communication. FIG. 4B is another view of computing device 400. Lid 402 is hinged to base 403 with hinge elements 414 and 415. Exemplary subsystem 413 is coupled to electronics 416 with communication link 420. Electronics 416 is coupled to TR 411 with communication link 419. TR 411 couples an optical signal to surface 406 with optical fiber 412.
Optical data 431 is communicated perpendicular to the axis of rotation of lid 402 across the air gap separating surface 404 from surface 406. Optical fiber 408 couples the optical data 431 to TR 401. TR 401 is coupled to motherboard 417, which may contain MUXs, encoders and other signal processing circuitry. Exemplary subsystem 424 is coupled to motherboard 417. Likewise, connectors 422 and 423 are coupled (e.g., via communication link 421) and may be used to couple subsystems (not shown) external to computing device 400 to elements in lid 402. While computing device 400 is shown using light to communicate across an air gap separating surface 406 and surface 404, it is understood that other frequencies of communication may be used and still be within the scope of the present invention. FIG. 5 illustrates another embodiment of the present invention where communication between moveable elements uses what is commonly referred to as radio or microwave radiation. Lid 506 rotates about hinge 505 relative to base 501 to open notebook computer 500. A transceiver 503 may be variably positioned as shown and may have a radiation pattern illustrated by the lines 507 or 508. These radiation patterns would encompass a transceiver 502 in either illustrated position in lid 506. The frequency of the electromagnetic radiation (e.g., 507 and 508) may have a wide range and still be usable to transmit and receive the data needed between a transceiver in the lid 506 and base 501; the data may be carried over a single frequency (direct frequency or amplitude modulation) or multiple frequencies (e.g., spread spectrum modulations with various encoding schemes such as orthogonal frequency division multiplexing). Wave patterns other than 507 and 508 shown in FIG. 5 may be used as long as transceivers in the lid 506 and base 501 are positioned to receive the transmitted data through the desired rotation angle 504.
In another embodiment, transceivers 502 and 503 utilize a large continuous band of radiation frequencies, allowing a less intrusive and lower powered radio to be constructed. This technique is known as ultra-wideband (UWB) radio and utilizes a very wide range of frequencies (at low power) to carry fast pulses between the two sections of the computer. The embodiments employing radio transceivers have flexibility in where the transceivers are placed within the respective subsystems; the nature of the radio radiation allows the transmitters and receivers to be placed within the hinging/moving sections in whatever way is most convenient to the designer of the portable computer system. FIG. 3 is a flow diagram of method steps used in embodiments of the present invention. In step 301, a first wireless transceiver having a communication protocol (e.g., transceiver 106) is positioned in the first section (e.g., the base 109) of a computing device. In step 302, a second wireless transceiver (e.g., transceiver 104) having the communication protocol is positioned in the second section (e.g., the lid 101), which is hingedly coupled to the base 109. In step 303, data from one or more subsystems in the base 109 is multiplexed in MUX/DMUX 108, forming a first data stream (e.g., 124). In step 304, the first data stream 124 is encoded (e.g., by encoder/decoder 107) into the communication protocol, forming a first encoded data stream (e.g., 126). In step 305, the first encoded data stream 126 is transmitted from transceiver 106 to transceiver 104 across air gap interface 105 separating surfaces of the lid 101 and the base 109. In step 306, the first encoded data stream 126 is received in transceiver 104. In step 307, the first encoded data stream 126 is decoded (e.g., by encoder/decoder 103) into a third data stream (e.g., decoded data 122). In step 308, decoded data 122 is demultiplexed in MUX/DMUX 102, forming system data for one or more subsystems (e.g., WLAN 116 devices, USB 117 devices or LCD 118).
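The encode/decode steps of FIG. 3 can be modeled end to end in software. In the hedged sketch below, the link protocol of steps 304 and 307 is stood in for by a simple sync byte plus a modular checksum; a real transceiver pair would use whatever framing and modulation its protocol actually defines, so this framing is purely an illustrative assumption.

```java
// A minimal software model of steps 304-307: a multiplexed stream is wrapped
// in a hypothetical link protocol (sync byte + 8-bit checksum), "transmitted",
// then unwrapped and verified on the receiving side of the air gap.
class AirGapLink {
    static final int SYNC = 0x7E;

    // Step 304 analogue: frame a multiplexed stream for the link.
    static int[] encode(int[] data) {
        int[] out = new int[data.length + 2];
        out[0] = SYNC;
        int sum = 0;
        for (int i = 0; i < data.length; i++) {
            out[i + 1] = data[i];
            sum = (sum + data[i]) & 0xFF;
        }
        out[out.length - 1] = sum; // checksum trailer
        return out;
    }

    // Step 307 analogue: strip framing, verify the checksum, recover data.
    static int[] decode(int[] frame) {
        if (frame[0] != SYNC) throw new IllegalArgumentException("bad sync");
        int[] data = new int[frame.length - 2];
        int sum = 0;
        for (int i = 0; i < data.length; i++) {
            data[i] = frame[i + 1];
            sum = (sum + data[i]) & 0xFF;
        }
        if (sum != frame[frame.length - 1])
            throw new IllegalStateException("checksum mismatch");
        return data;
    }
}
```

A decode that throws on a bad sync byte or checksum mirrors a receiver discarding a corrupted frame; the multiplex/demultiplex steps on either side are unchanged by the choice of framing.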
In step 309, the system data is coupled to the corresponding subsystems. Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. INDUSTRIAL APPLICABILITY Removal, or limitation, of physical data communication connections between the base and devices in the lid (e.g., LCD graphic displays, USB cameras and other devices) may lower the cost and increase the efficiency of portable computers, thus enabling manufacturers of such devices to increase the technology utilized in their products while, hopefully, limiting the increased costs related to such increases in technology.
In some embodiments, reformat logic comprises a plurality of registers and translation logic that accesses the registers. The translation logic receives a memory access targeting an application data structure that has a different format than accesses permitted to be provided to a device, which may be a display. The translation logic reformats the request to a format compatible with the device based on values stored in the registers.
1. A data processing system, comprising: a processor (102) for executing application code (150), the application code operating upon an n-bit addressable data structure (52); a memory (106) comprising: memory locations for storing n-bit addressable data structures; and an allocated buffer portion (156) for storing m-bit addressable data structures, m being different from n; and a peripheral device (114), operable according to contents of the allocated buffer portion of the memory; and reformat logic (154) coupled to the processor and memory, for converting n-bit addressable data structures into m-bit addressable data structures for storage in the allocated buffer portion of the memory, comprising translation logic (159) for converting physical addresses of the n-bit data structures into physical addresses of the m-bit data structures. 2. The system of claim 1 wherein the peripheral device comprises a display. 3. The system of claim 1 or claim 2 wherein the n-bit addressable structure comprises an array. 4. The system of claim 3 wherein the array comprises a multidimensional array. 5. The system of claim 3 wherein the array comprises a single-dimensional array. 6. The system of any preceding claim, wherein the reformat logic comprises: configuration register locations, for storing parameters indicating mapping between the n-bit addressable data structures and the m-bit addressable data structures; and translation logic for converting a physical address for an n-bit addressable data structure to a physical address for an m-bit addressable data structure. 7. The system of claim 6 wherein n is larger than m and is not an integer multiple of m, and wherein the reformat logic further comprises: alignment logic for implementing a read-modify-write operation to write a value from the n-bit addressable data structure across byte boundaries of the m-bit addressable data structure in the allocated buffer portion of the memory. 8. The system of any preceding claim wherein m is less than n.
9. The system of claim 6 or claim 7, wherein the parameters stored by the configuration register locations comprise: the starting address, in the memory, of the allocated buffer portion; and at least one parameter indicating the values of n and m. 10. The system of claim 9 wherein the parameters stored by the configuration register locations further comprise: starting and ending addresses of the application data structure. 11. The system of any of claims 6 to 10, wherein the parameters stored by the configuration register locations are programmed by a virtual machine. 12. The system of claim 6 wherein the reformat logic further comprises: a multiplexer for applying a selected one of a physical address for an n-bit addressable data structure and a physical address for an m-bit addressable data structure. 13. The system of claim 12 wherein the reformat logic controls the multiplexer to select the selected memory address. 14. The system of claim 13 wherein the processor supplies an address to the multiplexer and to the reformat logic and asserts a signal to the multiplexer to cause the multiplexer to select the selected memory address. 15. The system of claim 1, wherein the reformat logic is also for converting data of the n-bit addressable data structure from a first associated data representation to a second data representation that is associated with the m-bit addressable data structure.
The present invention relates generally to reformat logic that permits a device buffer to be accessed through a high level programming language. Many types of devices require device drivers for the operation of the device. Such devices may include displays and keyboards. A device driver is executable software that provides a programming interface between a high level programming language and the device. A device driver typically requires a portion of memory to be allocated for its use in providing data to or receiving data from the device it controls. With regard to at least some high level languages (e.g., Java), such languages typically require a "call" to a device driver that may be written in a "native" language such as C. The high level application generally uses a data structure to provide data to, or receive data from, a corresponding data structure in the device driver memory. The two data structures may not be directly compatible and thus, a mapping between the two may be needed. Mapping a data structure from a high level language to the data structure in the device driver memory can be computationally intensive. Additionally, the calls that permit the context change between the high level application and the device driver undesirably introduce latency. BRIEF SUMMARY OF THE INVENTION In some embodiments, reformat logic comprises a plurality of registers and translation logic that accesses the registers. The translation logic receives a memory access targeting data associated with a device (e.g., a peripheral device), whose data have a different representation format within an application program versus the targeted device. The translation logic dynamically reformats the request to a format compatible with the device based on values stored in the registers. The translation logic dynamically changes the associated address of the original data viewed by the application program to a different address corresponding to the data within the device. 
In the example of the application program being a Java program, the Java program may access devices in Java, instead of through costly native method calls to device drivers. Prior art document US 5 680 161 discloses an apparatus for the display of graphics data comprising a memory comprising an array of locations for storing n-bit addressable data structures; a buffer memory for storing m-bit addressable data structures; a peripheral device operable according to the contents of the buffer memory; and reformat logic for translating the n-bit addressable data structures into m-bit addressable data structures for storage in the buffer memory. NOTATION AND NOMENCLATURE Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, various companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms "including" and "comprising" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to...". Also, the term "couple" or "couples" is intended to mean either an indirect or direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices and connections.
BRIEF DESCRIPTION OF THE DRAWINGS For a more detailed description of the preferred embodiments of the present invention, reference will now be made to the accompanying drawings, wherein: Figure 1 shows a diagram of a system in accordance with preferred embodiments of the invention and including reformat logic to permit a processor to directly manage memory associated with a hardware device; Figure 2 further illustrates the system of Figure 1; Figure 3 illustrates the operation of the compressor to permit an application to manage the memory of the hardware device; Figure 4 further illustrates the operation of the reformat logic; Figures 5A and 5B show various embodiments illustrating constraints on the system; Figures 6A and 6B illustrate the operation of the reformat logic under data alignment and non-alignment conditions; Figure 7 illustrates the use of the system when the application software operates on objects that include metadata; Figure 8 illustrates the operation of the system operating on non-contiguous data structures; and Figures 9 and 10 illustrate various embodiments of the system operating on multi-dimensional data structures. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims, unless otherwise specified. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.
The subject matter disclosed herein is directed to logic that interfaces a processor to memory to permit a peripheral device (e.g., a display) to be managed by application software running on the processor without the use of a device driver. Merely by way of example, the embodiments described herein are directed to a Java application that manages a display device, although the principles discussed herein have applicability apart from the Java language and display devices. Referring now to Figure 1, a system 100 is shown in accordance with a preferred embodiment of the invention. As shown, the system includes at least two processors 102 and 104. Processor 102 is referred to for purposes of this disclosure as a Java Stack Machine ("JSM") and processor 104 may be referred to as a Main Processor Unit ("MPU"). System 100 may also include memory 106 coupled to both the JSM 102 and MPU 104 and thus accessible by both processors. Reformat logic 154 couples the JSM 102 to the memory 106. The use of the reformat logic and associated software will be described in greater detail below. Referring still to Figure 1, system 100 also includes a Java Virtual Machine ("JVM") 108, compiler 110, and a display 114. The JSM 102 preferably includes an interface to one or more input/output ("I/O") devices such as a keypad to permit a user to control various aspects of the system 100. In addition, data streams may be received from the I/O space into the JSM 102 to be processed by the JSM 102. Other components (not specifically shown) may include, without limitation, a battery and an analog transceiver to permit wireless communications with other devices. System 100 may be representative of, or adapted to, a wide variety of electronic systems, and an exemplary electronic system may comprise a battery-operated, mobile cell phone. The Java code executed in system 100 comprises a plurality of "Bytecodes" 112.
The Bytecodes 112 are provided to the JVM 108, compiled by compiler 110 and provided to the JSM 102 and/or MPU 104 for execution therein. The JVM 108 generally comprises a combination of software and hardware. The software may include the compiler 110 and the hardware may include the JSM 102. The JVM may include a class loader, bytecode verifier, garbage collector, and a bytecode interpreter loop to interpret the bytecodes that are not executed on the JSM processor 102. Figure 2 shows various components related to the management of the display 114. As shown, application software 150 (e.g., a Java application) includes an application data structure 152. The application data structure 152 may comprise any suitable type of structure, such as an array or an object, and is mapped in memory 106. In the context of a display memory buffer, the data structure fits best with a Java array. Consequently, the data structure 152 is described below as an array, but more broadly can be other types of structures. The application array 152 links to reformat logic 154 which, in turn, can access a display buffer 156. Display buffer 156 preferably comprises a portion of memory 106 allocated for use by the display 114. More specifically, information to be shown on the display 114 preferably is stored in the display buffer 156. A display interface 160 extracts display data from the display buffer 156 and provides an appropriate electrical interface to cause the desired information to be shown correctly on the display 114. This display buffer 156 also may comprise an intermediate buffer allocated for a particular Java application and managed by global operating system ("O/S") display management, symbolized here by the display interface 160, that preferably would be running on the MPU 104 and that would enable multiple applications (Java or other) to share a full-screen appliance. As noted above, the software application 150 includes an application array 152.
In general, a Java application may include more than one application array, but for purposes of explaining the preferred embodiments of the invention, the software application 150 includes at least one application array 152 usable for managing the display 114. The application array 152 preferably is a Java array and thus comports with the applicable requirements of the Java programming language. In Java, the smallest format representation for data processing is 32 bits. Consequently, in accordance with the preferred embodiment, the representation of a display in the Java application comprises an n-bit array 152, where n equals 32. The display buffer 156, however, may be formatted differently than the Java array 152. For example, while the application array 152 may be an n-bit addressable data structure, the display buffer 156 may comprise an m-bit addressable data structure where m is different than n. In some embodiments, for example, m could be 8, but m could also be any number of bits appropriate to the display color definition, while n may be 32 bits. In accordance with a preferred embodiment of the invention, the Java application 150 accesses the display buffer 156 through application array 152 to manage the display 114 without the use of a display driver. The Java application 150 can cause text and/or graphics data ("display data") to be shown on display 114 by writing such display data to the application array 152. As noted above, the application array 152 is n-bit addressable and the display buffer is m-bit addressable, where n may be different (e.g., greater) than m. Thus, the application array is formatted differently than the display buffer. With n being different than m, the display data from the application array 152 cannot be copied directly into the display buffer 156 without being re-formatted.
When the data within the application array 152 is accessed by the application software 150, the data is automatically reformatted by reformat logic 154: into a format compatible with the display buffer 156 on a write, and from the display buffer format to a format compatible with the application array on a read. The m dimension of the display buffer might or might not fit the memory's minimum access granularity, causing a write within the display buffer to be replaced by a read-modify-write by the reformat logic 154 when accesses to the compressed physical area do not match the memory access granularity in size. Because the process of reformatting the display data from the application array 152 may comprise reducing a wider n-bit wide data value to a narrower m-bit wide data value, the reformat logic 154 is referred to as a "compressor," although this disclosure and claims are not limited to compressing data per se. Further, as explained below, the compressor 154 also alters the address of the JSM's display transaction to comport with a valid address associated with the display buffer 156. Figure 3 further illustrates the functionality of the compressor 154. Virtual address space 160 associated with application array 152 includes display data from application 150 to be written to a compressed, real physical address space 166 that is stored in display buffer 156. In the example of Figure 3, the virtual memory space 160 has a starting address of 0xA000 and is shown as being mapped onto a physical address space 162 that starts at address 0xC000. In the preferred embodiment, some or all of the physical address space 162 does not exist because the targeted real memory is the compressed memory space 166. To enable the compression, the compressor 154 preferably maps the high-level representation (32-bit-based memory block) in virtual address space 160 onto a low-level representation (8-bit-based memory block) in compressed address space 166.
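The 32-bit-to-8-bit mapping just described can be sketched as follows. This is a minimal illustration, assuming (as in Figure 3) that each 32-bit application element carries one meaningful byte in its low-order bits and that the upper bits are zero; the class and method names are invented for the example.

```java
// Hypothetical model of the compressor's data reformatting: on a write, each
// 32-bit application element is narrowed to its one meaningful byte for the
// display buffer; on a read, each buffer byte is widened back to 32 bits.
class ArrayCompressor {
    // Write path: n-bit (32) application elements -> m-bit (8) buffer bytes.
    static byte[] compress(int[] appArray) {
        byte[] buf = new byte[appArray.length];
        for (int i = 0; i < appArray.length; i++)
            buf[i] = (byte) (appArray[i] & 0xFF); // keep the meaningful byte
        return buf;
    }

    // Read path: widen each buffer byte back to a 32-bit element.
    static int[] expand(byte[] buf) {
        int[] app = new int[buf.length];
        for (int i = 0; i < buf.length; i++)
            app[i] = buf[i] & 0xFF; // upper 24 bits read back as zero
        return app;
    }
}
```

Other variants described in the text, such as truncating least-significant bits or filtering, would simply replace the masking step with the chosen value transformation.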
The data content of the virtual address space 160 preferably does not exceed the low-level representation maximum value. For example, if the compressed address space 166 is 8 bits wide, then the virtual address space 160 associated with the application array 152 stores meaningful data in chunks of eight bits. As shown in Figure 3, meaningful data chunks are shown at addresses 0xA003, 0xA007, 0xA00B, and so on, with the remaining portions of the address space (e.g., 0xA000-0xA002, 0xA004-0xA006, and so on) set to a predetermined value of 0. Other embodiments may compress the value of the data by truncating the least significant bits of the data or by any other processing of the data, such as filtering. As discussed above, the preferred embodiments of the invention include the use of a Java application array 152 to be used as the Java representation of the device's memory-mapped memory. An exemplary Java class is shown below in which the application array 152 is the array labeled "VGA" ("Video Graphics Adapter").

    class DisplayBitmap {
        public int[] VGA = new int[320 * 200];

        DisplayBitmap() {
            // mapping the array on the device buffer
            mapArrayOn(VGA, 0xC000);
        }

        void driverSetPixel(int X, int Y, int value) {
            VGA[X + Y * 320] = value;
        }

        ...
    }

To fully implement the mapping, an application programming interface ("API") is implemented that first maps the base of the array onto an address. The method "mapArrayOn" is called in the constructor of the object DisplayBitmap. The Java array VGA is first mapped onto the display buffer at address 0xC000, which corresponds to the area that is going to be compressed into the real compressed physical area 166, during an initialization phase of the Java program or before the Java program uses the display to output data. Other methods provide means to output or retrieve data from the display.
The "driverSetPixel" method shown above may write the value of a pixel at a location X, Y in the display buffer using the instruction implementing "VGA[X+Y*320] = value", which may correspond to an "iastore" Java bytecode. Referring now to Figure 4, various components of the system 100 are shown including the JSM 102, compressor 154, and memory 106. As shown, the JSM 102 executes the Java application 150. The compressor 154 preferably includes a plurality of programmable registers 157 and translation logic 159 that is coupled to the registers 157 or otherwise is capable of accessing the contents of the registers. As described below, the registers 157 are programmed with various values pertaining to the mapping between the application array 152 and the display buffer 156. As depicted in Figure 4 for a write transaction, the translation logic 159 uses the values stored in the registers 157 to convert a physical address ("PA"), such as a physical address 162 (Figure 3), from the JSM 102 to a compressed physical address ("CPA") corresponding to the display buffer 156. Conversely, for a read transaction from buffer 156, the PA from the JSM 102 corresponding to the read address is converted by translation logic 159 to an appropriate CPA address so that the requested read data can be returned through the compressor 154 to the JSM 102. The compressor 154 also may convert the data itself between the formats corresponding to the application array 152 and the display buffer 156. The conversions of the target addresses and the data preferably are performed by the translation logic 159 using values (explained below) stored in the compressor's configuration registers 157. The system shown in Figure 4 also includes a multiplexer 155. The multiplexer 155 receives address inputs from the JSM 102 and the compressor 154. A control signal (CTL) from the compressor controls which of the inputs is provided as the output of the multiplexer 155 to the memory 106.
As shown, the multiplexer 155 selectively permits accesses from the application 150 to be provided to the memory 106 without being reformatted by the compressor 154, and permits accesses from the application to be reformatted by the compressor 154 before being provided to the memory 106. In other embodiments, the signal controlling the multiplexer 155 may come from the processor 102 along with the address, removing the detection functionality from the compressor 154. The detection functionality indicates whether an address coming from the processor core belongs to the area that needs to be compressed. Figures 5A and 5B illustrate various constraints that may be applicable to the use of the compressor 154. In Figure 5A, if the operating system running on the MPU 104 (Figure 1) uses a flat (or linear) addressing mode or segmentation mode, the virtual memory space 160 associated with the application array 152 preferably comprises a contiguous virtual memory range. The contiguous virtual memory range 160 is viewed as being mapped onto physical memory 162, which itself is translated to compressed physical memory 166. In Figure 5B, if the operating system uses page-mode addressing, the virtual memory space 160 is divided into a plurality of individual virtual pages (VP 0, VP 1, ..., VP N-1). In accordance with the operation of the compressor 154, the virtual memory space 160 for page-mode addressing comprises a contiguous virtual memory range as shown. The physical mapping of the virtual space pages is viewed as mapping the pages onto physical memory space 162, where physical pages PP 0 to PP N-1 are contiguous. The pages, in fact, are compressed onto compressed physical memory 166 (the display buffer). In general, no constraints are placed on the starting address of the compressed physical memory space 166.
As explained above, the compressor 154 includes multiple registers that hold one or more programmable values to translate a physical address into another, compressed physical address. The programmable values preferably are under the control of the JVM 108 and comprise, or are otherwise indicative or representative of: the starting address of the non-compressed memory area containing the physical addresses to convert ("SAPB"); the end address of the non-compressed area ("EAPB"), or its overall size; the starting address of the compressed target display buffer 156 ("SAPCB"); the number of bits ("n") per element in the array in the application software; and the number of bits ("m") per element in the display buffer, or the ratio m/n. The address calculation resulting from reformatting the data typically will be equivalent to CPA = SAPCB + (PA - SAPB) * m/n. Other information, such as the memory access granularity (e.g., 8 bits, 16 bits, 32 bits), may be included to manage unaligned write accesses. The compressor 154 enables the JVM 108 to efficiently access device memory-mapped memories. Figures 6A and 6B illustrate the operation of the reformat logic in various situations. Referring first to Figure 6A, the reformat logic 154 is illustrated as operating in the situation in which n is 32, m is 4 and w is 8, where n and m are defined above and w represents the memory 106 access granularity. In this example, the application running on the JSM 102 comprises a Java application in which accesses are 32-bit accesses, but memory accesses to memory 106 are single-byte accesses. Further, each element in the device buffer comprises a 4-bit value (a "nibble"). In this example, the accessible elements in the device buffer are "aligned" with the width of the memory 106. As such, each 4-bit nibble is accessible within a single byte from memory 106. Alternatively stated, a single 4-bit nibble does not span across byte boundaries.
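A brief sketch of this register-driven translation follows. The class and method names are illustrative, not from the specification; the register values are example settings, and the offset is written here as (PA - SAPB) so that compressed addresses increase with physical addresses:

```java
// Hypothetical sketch of the compressor's address translation.
// SAPB/EAPB/SAPCB/n/m model the programmable register values.
public class AddressTranslationSketch {
    static final long SAPB  = 0xA000; // start of non-compressed area
    static final long EAPB  = 0xB000; // end of non-compressed area
    static final long SAPCB = 0xC000; // start of compressed display buffer
    static final int  N = 32;         // bits per element in the application array
    static final int  M = 4;          // bits per element in the display buffer

    // CPA = SAPCB + (PA - SAPB) * m / n
    static long translate(long pa) {
        if (pa < SAPB || pa >= EAPB) {
            throw new IllegalArgumentException("PA outside compressible area");
        }
        return SAPCB + (pa - SAPB) * M / N;
    }

    public static void main(String[] args) {
        // A byte offset of 0x20 (eight 4-byte elements) into the application
        // array maps to a 4-byte offset into the compressed buffer.
        System.out.println(Long.toHexString(translate(0xA020L))); // c004
    }
}
```

With n = 32 and m = 4, each 32-bit application element shrinks to a 4-bit display element, so the compressed buffer advances one byte for every two application words.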
The reformat logic 154 preferably includes bit address calculation 215 and data alignment management 217, which may be implemented as part of the translation logic 159 (Figure 4). The values of n, m and w may be stored in one or more of the registers 157 and are provided to the bit address calculation 215 for calculation of the address that is provided to memory 106 (which comprises a memory controller in addition to random access memory). The bit address calculation 215 determines whether the target 4-bit nibble is in the lower four bits or upper four bits of an addressable byte in the memory 106. In the example of Figure 6A, the target 4-bit nibble is the upper four bits of a byte, beginning at bit position 4. As such, the bit address calculation 215 provides the 3-bit index "100" (binary for "4") to the data alignment management 217. Because a write from the JSM 102 targets 4-bit nibbles in a byte-addressable buffer, a "read-modify-write" operation is implemented to read the target byte, modify the relevant 4-bit nibble and write the modified byte back to memory. This operation is illustrated beginning at 219, in which the initial byte read is performed based on an address computed by the bit address calculation 215. The byte read from memory comprises the 8 bits b7...b0. This value is read by the reformat logic 154 and loaded into the data alignment management logic 217. At 221, the upper four bits of the byte (i.e., the target nibble) are replaced by the four bits a3...a0 from the initial 32-bit value from the JSM at 222. After modifying the byte, the modified byte is written back to memory by the reformat logic 154, as shown at 223. Figure 6B illustrates the situation in which the number of bits for each display element (m) does not evenly divide the memory access width (w). This is an example of a "non-alignment" condition. In the example of Figure 6B, m is 6 and thus any one 6-bit display value may span more than one byte in memory 106.
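A minimal sketch of this aligned read-modify-write, with illustrative names and a plain Java array standing in for the byte-addressable memory 106, might look like:

```java
// Sketch of the data alignment management for the aligned case of
// Figure 6A (n = 32, m = 4, w = 8). Names are illustrative.
public class NibbleWriteSketch {
    // Replace the 4-bit nibble starting at bitPos (0 or 4) of the addressed
    // byte with the low four bits of the 32-bit value from the JSM.
    static int writeNibble(int[] memory, int addr, int bitPos, int value) {
        int b = memory[addr];                         // 1. read the target byte
        int mask = 0xF << bitPos;
        b = (b & ~mask) | ((value & 0xF) << bitPos);  // 2. modify the nibble
        memory[addr] = b & 0xFF;                      // 3. write the byte back
        return memory[addr];
    }

    public static void main(String[] args) {
        int[] memory = {0b1011_0110};           // b7...b0 before the write
        writeNibble(memory, 0, 4, 0b0011);      // replace upper nibble with a3...a0
        System.out.println(Integer.toBinaryString(memory[0])); // 110110
    }
}
```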
In the example of Figure 6B, the JSM 102 is attempting to write the 6-bit value a5...a0 to the display buffer. The target 6-bit value in memory corresponds to the value c5...c0, which spans across bytes 225 and 227. The bit address calculation unit 215 calculates the addresses corresponding to bytes 225 and 227 and initiates a read 219 of both bytes to load the bytes into the data alignment management unit 217. The addresses are calculated based on the values of n, m, w, SAPB, EAPB, and SAPCB. The bit address calculation logic 215 determines that the target 6-bit value begins at bit position 6 in byte 227 and thus generates and provides a three-bit value of "110" (binary equivalent of 6) to the data alignment management 217. The data alignment management 217 uses the data from the JSM 102 and the value "110" from the bit address calculation unit 215 to modify bytes 225 and 227, replacing at 221 the upper two bits of byte 227 and the first four bits of byte 225 with the desired value a5...a0. At 223, the modified bytes are written back to memory. Figure 7 illustrates how Java "metadata" is treated in a preferred embodiment in which the application array 152 is a single-dimension array. Each object in Java includes metadata that may be used to manage the object. The metadata may include information such as object type, object size, and other object-specific parameters. As shown, an application array 152 comprises a data structure 168 that includes a "head" metadata 170, a "tail" metadata 174, and object fields 172. The head metadata 170 precedes the object fields 172 and the tail metadata 174 may follow the object fields 172. In the preferred embodiments, the metadata fields 170 and 174 are not compressed by compressor 154. That is, the compressor 154 preferably compresses the object fields 172, but not the head and tail metadata fields 170 and 174.
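For the non-aligned case, a hedged bit-level sketch can show the two-byte read-modify-write. The bit numbering within a byte is an assumption of this sketch, not taken from the description:

```java
// Sketch of an unaligned write for the case of Figure 6B (m = 6): a 6-bit
// element whose bit address falls at position 6 spans two adjacent bytes,
// so both bytes are read, modified and written back. Names are illustrative.
public class UnalignedWriteSketch {
    // Write the low mBits of value at absolute bit offset bitAddr.
    static void writeBits(int[] memory, int bitAddr, int mBits, int value) {
        for (int i = 0; i < mBits; i++) {
            int addr = (bitAddr + i) / 8;
            int pos  = (bitAddr + i) % 8;
            int bit  = (value >> i) & 1;
            // per-bit read-modify-write of the addressed byte
            memory[addr] = (memory[addr] & ~(1 << pos)) | (bit << pos);
        }
    }

    public static void main(String[] args) {
        int[] memory = {0x00, 0x00};
        writeBits(memory, 6, 6, 0b111111);  // element starts at bit position 6
        // The upper two bits of the first byte and the lower four bits of the
        // second byte now hold the 6-bit value.
        System.out.printf("%02x %02x%n", memory[0], memory[1]); // c0 0f
    }
}
```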
Referring still to Figure 7, if a flat or a segment addressing mode is implemented by the operating system and if head and/or tail metadata exists as in Figure 7, the memory preceding (162a) and following (162c) the physical memory 162b that is compressed into physical memory 166 preferably exists for the process described herein to work in accordance with at least some embodiments, while the physical address 162b may not exist. In page-mode addressing, head or tail metadata may be mapped onto separate pages 160a (VP 0), 160c (VP N) in the virtual address space 160. As such, head metadata 170, object fields 172, and tail metadata 174 are stored in contiguous virtual address blocks as shown in Figure 7, while in the physical space they may be mapped onto areas that are compressed (162b) and not compressed (162a, 162c). For this configuration, the frontier at the beginning and the ending of the compressible memory space preferably is page aligned. Some embodiments may have metadata only within a header. Referring now to Figure 8, another embodiment is shown with a non-contiguous object configuration of a single-dimension array. Head metadata 182, pointer field 184, and tail metadata 186 preferably are contiguous in memory. The pointer field 184 includes a reference to an area that may comprise a pointer value that refers to object fields 188. A systematic indirection is used to access the object fields using the pointer 184. Java permits the creation and use of multi-dimensional arrays. Figure 9 depicts the use of application array 152 storing a multi-dimensional data structure 190. Virtual addressable blocks 192 comprise one dimension of the multi-dimensional data structure 190 and the second dimension comprises virtual addressable blocks 194, 196 and 198, as shown. Block 192 comprises pointers to blocks 194, 196 and 198.
According to mapping constraints, in flat, segment or page-based addressing, all object fields representing the last dimension of the array (blocks 163, 165, and 167) are physically mapped onto contiguous compressed physical memory 166. Figure 10 represents a non-contiguous (as in Figure 8) two-dimensional array 190 with one dimension comprising block 192 and the other dimension comprising blocks 194, 196, and 198. In the configuration of Figure 10, all object fields representing the last dimension of the array (blocks 202, 204, and 206) are physically mapped onto compressed physical memory 166 (the display buffer), which is contiguous. The preferred embodiments of the invention provide substantial benefits over other device management paradigms. For example, a high-level language typically requires the use of calls to a display driver to cause information to be shown on a display. In the preferred embodiments, the MPU 104 need not be interrupted to run a native device driver. In some Java implementations on a single processor with a software JVM, the processor needs to switch its context from JVM execution to display driver execution. The context switch is performed through the execution of a native method through the standard Java Native Interface ("JNI"). There are no function calls or interrupt service handlers to be used in the preferred embodiment and thus there is no switch from Java to a native method through JNI. As a result, latency is reduced. Further, the calculation to translate the address used within the application to the corresponding address used within the display buffer is performed by hardware rather than software, thereby freeing processor resources for other tasks. While the preferred embodiments of the present invention have been shown and described, modifications thereof can be made by one skilled in the art without departing from the spirit and teachings of the invention.
The embodiments described herein are exemplary only, and are not intended to be limiting. Many variations and modifications of the invention disclosed herein are possible and are within the scope of the invention. Accordingly, the scope of protection is not limited by the description set out above. Each and every claim is incorporated into the specification as an embodiment of the present invention.
Embodiments described herein may relate to apparatuses, processes, and techniques for a transistor structure that includes a buried power rail (BPR) within the transistor structure at a level below a height of one or more of the fins of the transistor structure. The BPR may be positioned proximate to a bottom substrate of the transistor structure. In embodiments, the transistor structure includes a protective layer over the BPR, which may include one or more dielectric layers to protect the BPR during stages of manufacturing of the transistor structure. In an embodiment, portions of the protective layer may also be used to constrain epitaxial growth during a fabrication phase of the transistor structure. Other embodiments may be described and/or claimed.
1. A transistor structure comprising:
a substrate;
a plurality of fins substantially parallel to each other and substantially perpendicular to the substrate; and
a power rail between the plurality of fins.
2. The transistor structure of claim 1, wherein the power rail is below a height of the plurality of fins.
3. The transistor structure of claim 1, wherein the plurality of fins comprises NMOS and PMOS fins.
4. The transistor structure of claim 1, wherein the power rail is surrounded by oxide.
5. The transistor structure of claim 1, wherein the power rail is proximate to the substrate.
6. The transistor structure of claim 1, 2, 3, 4 or 5, wherein the power rail is located between a barrier layer and the substrate, wherein the barrier layer protects the power rail.
7. The transistor structure of claim 6, wherein the barrier layer comprises a dielectric.
8. The transistor structure of claim 6, further comprising an electrical contact electrically coupled to the power rail and extending through the barrier layer.
9. The transistor structure of claim 8, wherein the electrical contact is a through-via structure or a through-via bus structure.
10. A transistor structure comprising:
a substrate;
a plurality of fins substantially parallel to each other and substantially perpendicular to the substrate; and
a barrier layer above and substantially parallel to the substrate, wherein a portion of the barrier layer is substantially perpendicular to the substrate along sides of the plurality of fins, and wherein the barrier layer along the sides of the plurality of fins surrounds portions of NMOS epitaxy or PMOS epitaxy, respectively, along a portion of the plurality of fins.
11. The transistor structure of claim 10, further comprising a dielectric material coupled to a portion of the barrier layer perpendicular to the substrate, wherein the dielectric material on opposite sides of the fin supports the portion of the barrier layer that is perpendicular to the substrate.
12. 
The transistor structure of claim 11, wherein the dielectric material on the opposite sides of the fin supports the portion of the barrier layer perpendicular to the substrate during growth of the NMOS epitaxy or the PMOS epitaxy during transistor structure fabrication.
13. The transistor structure of claim 10, 11 or 12, further comprising:
a contact etch stop layer (CESL) over and substantially parallel to the barrier layer.
14. The transistor structure of claim 13, wherein a portion of the CESL is not parallel to the barrier layer and extends along a side of the NMOS epitaxy or a side of the PMOS epitaxy.
15. The transistor structure of claim 13, further comprising an electrical contact extending from above the CESL through the CESL and through the barrier layer toward the substrate.
16. The transistor structure of claim 15, wherein the electrical contact is electrically coupled to a power rail on the substrate and below the barrier layer.
17. A method for building a transistor structure, the method comprising:
identifying a substrate having a first side and a second side opposite the first side;
forming a plurality of fins on the first side of the substrate, the plurality of fins being substantially parallel to each other and substantially perpendicular to the substrate; and
forming a power rail coupled to the first side of the substrate, the power rail being below the height of the plurality of fins.
18. The method of claim 17, further comprising:
enclosing the power rail within an oxide; and
applying a barrier layer on the oxide and over the power rail, the barrier layer protecting the power rail during subsequent fabrication of the transistor structure.
19. 
The method of claim 18, further comprising:
growing NMOS epitaxy or PMOS epitaxy on top of the plurality of fins, respectively; and
depositing a contact etch stop layer (CESL) over the barrier layer, the CESL or the barrier layer at least partially surrounding the grown NMOS epitaxy or PMOS epitaxy.
20. The method of claim 19, further comprising:
forming a conductive through-via extending from above the CESL through the CESL and through the barrier layer toward the substrate; and
electrically coupling the conductive through-via to the power rail.
Power Rails Between the Fins of a Transistor Structure

Technical Field

Embodiments of the present disclosure relate generally to the field of semiconductor packaging and, more particularly, to power rail placement within transistor structures.

Background

Increased adoption of mobile computing devices will continue to drive demands for increased logic transistor density in integrated circuits.

Brief Description of the Drawings

FIG. 1 shows a cross-section of a buried power rail (BPR) within a transistor structure, according to various embodiments.
FIGS. 2A-2B illustrate prior-art cross-sections of transistor structures, and cross-sections of transistor structures with various locations for BPRs within the transistor structure, according to various embodiments.
FIG. 3 illustrates a cross-sectional view of a BPR with a protective layer protecting the BPR during manufacturing stages, according to various embodiments.
FIGS. 4A-4T illustrate various fabrication stages for a BPR within a transistor structure, including a protective layer to protect the BPR and constrain epitaxial growth, according to various embodiments.
FIG. 5 shows a cross-section of a transistor structure with a BPR and a protective layer, according to various embodiments.
FIG. 6 illustrates an exemplary process for fabricating a transistor structure with a BPR, according to various embodiments.
FIG. 7 illustrates an interposer 700 incorporating one or more embodiments of the present invention.

Detailed Description

Embodiments described herein may relate to apparatuses, processes, and techniques for transistor structures that include a buried power rail (BPR). In an embodiment, the BPR may be located between the fins of the transistor structure, near the bottom substrate of the transistor structure. In an embodiment, the transistor structure includes a protective layer over the BPR, which may include one or more dielectric layers, to protect the BPR during the fabrication stages of the transistor structure.
In an embodiment, portions of the protective layer may also be used to constrain epitaxial growth, e.g., NMOS epitaxy or PMOS epitaxy, during the fabrication stages of the transistor structure. In an embodiment, the NMOS epitaxy may be referred to as a phosphorus-doped epitaxy or SiP epitaxy, and the PMOS epitaxy may be referred to as a silicon-germanium epitaxy or SiGe epitaxy. In an embodiment, the flow of fabrication stages of the transistor structure can simultaneously constrain the epitaxial growth of the gate-all-around (GAA) source/drain and protect the BPR from exposure during the epitaxial pre-clean process. In an embodiment, a capping layer may be formed on top of the first sidewall spacer stack. The capping layer holds the bottom of the first sidewall spacer stack in place and protects the BPR from exposure during the pre-clean process. In an embodiment, the fabrication flow caps the BPR as part of the integration flow and may use thin films deposited during the source/drain fabrication process to provide epitaxy confinement. BPRs formed before the transistors are formed must not be exposed during the front-end-of-line (FEOL) fabrication process; otherwise, the BPRs may be damaged, or metal contamination may occur. Some metals, such as copper, are known to create defects within silicon transistors if the metal diffuses into the silicon lattice. Manufacturing processes such as front-end-of-line (FEOL) processes and back-end-of-line (BEOL) processes (which may also be referred to as copper back-end-of-line processes) typically occur sequentially and involve separate sections of the fabrication plant to avoid metal contamination. Embodiments described herein introduce changes in these processes to prevent contamination of the BPR during fabrication, including one or more protective layers over the BPR. Furthermore, GAA devices at scaled diffusion-to-diffusion spaces require constrained source/drain epitaxy.
Also, the BPR may require minimum-height vias (which may also be referred to as conductive vias) to electrically couple the BPR with other parts of the transistor structure, in order to reduce the adverse effects of resistance due to very long conductive vias. Furthermore, embodiments should enable variable control over the length of the conductive vias; e.g., the length should not be affected by isolating the oxide etch from the epitaxial pre-clean process. In the following detailed description, reference is made to the accompanying drawings, which form a part hereof, in which like reference numerals indicate like parts throughout, and in which are shown by way of illustration embodiments in which the subject matter of the present disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description should not be interpreted in a limiting sense, and the scope of the embodiments is defined only by the appended claims and their equivalents. For the purposes of this disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of this disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use perspective-based descriptions such as top/bottom, inside/outside, above/below, etc. Such descriptions are for convenience of discussion only and are not intended to limit the application of the embodiments described herein to any particular orientation. The description may use the phrase "in an embodiment," which may refer to one or more of the same or different embodiments. In addition, the terms "including," "having," and the like, as used in conjunction with the embodiments of the present disclosure, are synonymous. The term "coupled with," along with its derivatives, may be used herein.
"Coupled" may mean one or more of the following. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are in indirect contact with each other yet still cooperate or interact with each other, and may mean that one or more other elements are coupled or connected between the elements said to be coupled to each other. The term "directly coupled" may mean that two or more elements are in direct contact. Various operations may be described as multiple discrete operations in turn, in a manner that may be most helpful in understanding the claimed subject matter. However, the order of description should not be construed as implying that these operations are necessarily order-dependent. As used herein, the term "module" may refer to or include an ASIC, an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that executes one or more software or firmware programs, combinational logic circuits, and/or other suitable components that provide the described functionality. The figures herein may depict one or more layers of one or more packaged components. The layers depicted herein are depicted as examples of relative positions of layers of different packaged components. The layers are depicted for purposes of illustration and are not drawn to scale. Accordingly, relative dimensions of the various layers should not be assumed from the drawings, and for some embodiments, dimensions or thicknesses may be assumed only where specifically indicated or discussed. Various embodiments may include any suitable combination of the above-described embodiments, including embodiments described above in the alternative (or) as well as in conjunction (and) (e.g., "and" may be "and/or").
Additionally, some embodiments may include one or more articles of manufacture (e.g., a non-transitory computer-readable medium) having stored thereon instructions that, when executed, cause the actions of any of the above-described embodiments. Furthermore, some embodiments may include a device or system having any suitable means for performing the various operations of the above-described embodiments. FIG. 1 shows a cross-section of a buried power rail (BPR) within a transistor structure, according to various embodiments. The transistor structure diagram 100 shows a layer 102 coupled with a plurality of transistor fins 104, 106 extending into a metal gate 108. In an embodiment, epitaxial layer 110 may be grown onto fins 104, 106 between a plurality of metal gates (not shown), which may be similar to metal gate 108. The first set of trench connectors 112 may be coupled with the second set of trench connectors 114, 116 within the oxide layer 118. In an embodiment, the second set of trench connectors 114, 116 may be separated by a spacer 120. A buried power rail (BPR) 130 is located between the fins 106 and under the metal gate 108. In an embodiment, oxide 132 may surround BPR 130. This may also be referred to as a BPR 130 buried within a shallow trench isolation (STI). Through-via 134 may be electrically coupled with one of the second set of trench connectors 116. Through-via 134 may also be referred to as conductive via 134. In other embodiments, the through-via 134 may be a bus-shaped structure (not shown) extending along the top of the BPR 130 along the plurality of metal gates 108.
In such an embodiment, the bus-like structure may include a dielectric (not shown) or other electrical separation feature along the bus-like structure to prevent the individual metal gates 108 from becoming electrically coupled. During the design and fabrication of the transistor structure 100, the first distance 136 between the BPR 130 and the bottom of the metal gate 108, and the second distance 138 between the BPR 130 and the bottom of one of the second set of trench connectors 116, are important. In an embodiment, the first distance 136 (which may pass through the oxide material) needs to be large enough to electrically isolate the BPR 130 from the gate 108. Additionally, in an embodiment, the second distance 138 should be small enough to prevent excessive resistive losses between the BPR 130 and the TCN 116. In an embodiment, the nominal first distance 136 may be set in the range of 5 nm to 30 nm, or more narrowly in the range of 10 nm to 20 nm, so that the nominal first distance 136 is minimized while still providing isolation under worst-case variation. Similarly, the nominal second distance 138 may be set in the range of 50 nm to 100 nm, or more narrowly between 70 nm and 80 nm. Note that in an embodiment, layer 102 may be part of, or may be coupled to, a substrate layer (not shown). In an embodiment, the substrate layer (not shown), layer 102 and fins 104, 106 may comprise silicon. Note also that the cross-section shown in transistor structure 100 is a cross-section taken through the trench between the metal gates. It should be understood that there may be one or more metal gate 108 structures entering or exiting corresponding planes parallel to the plane shown in FIG. 1. Source and drain electrodes are grown epitaxially on top of the fins.
FIGS. 2A-2B illustrate prior-art cross-sections of transistor structures, and cross-sections of transistor structures with various locations for BPRs within the transistor structure, according to various embodiments; these may be similar to transistor structure 100 of FIG. 1. FIGS. 2A-2B show fin 206, gate 208, epitaxy 210, first trench connector 212, and second trench connector 216, which may be similar to fin 106, gate 108, epitaxial layer 110, first trench connector 112 and second trench connector 116. As shown, during the building of legacy transistor structures using the epitaxial spacer process, there is STI loss 217 caused by the epitaxial pre-clean process. As a result, as shown in FIG. 2B, in legacy implementations, epitaxial pre-cleaning may cause BPR 231 (which may be similar to BPR 130 of FIG. 1) to be pushed lower, away from the bottom of gate 208 and into substrate 202. In an embodiment, BPR 230 (which may be similar to BPR 130 of FIG. 1) is positioned closer to the bottom of gate 208 while still being electrically isolated from gate 208. FIG. 3 illustrates a cross-sectional view of a BPR with a protective layer protecting the BPR during manufacturing stages, according to various embodiments. Transistor structure 300 may be similar to transistor structure 100 of FIG. 1, with fins 304, 306, epitaxial portion 310, BPR 330, conductive via 334 and second trench connector 316 corresponding to fins 104, 106, epitaxial portion 110, BPR 130, conductive via 134 and second trench connector 116. Note also that oxide 332 may surround BPR 330 and may also be located between fins 304, 306. In an embodiment, conductive via 334 may be made of a low-resistivity metal, including ruthenium or molybdenum. In an embodiment, the lateral distance between two fins 304, or between two fins 306, or between a fin 304 and a fin 306, may be no greater than 28 nm. In an embodiment, barrier layer 340 may be placed on top of oxide layer 332. In an embodiment, this barrier layer 340 will protect the metal structure of the BPR 330 during the fabrication stages of the epitaxial portion 310.
Without the barrier layer 340, the BPR 330 would be damaged during the epitaxial cleaning process and other processes that may expose the unprotected BPR 330 to damage. Additionally, a contact etch stop layer (CESL) 342 may be applied over the barrier layer 340 during fabrication, as described further below. In an embodiment, conductive via 334 is created to electrically couple BPR 330 with top trench connector 316 after the epitaxial cleaning process and/or other fabrication stages of transistor structure 300. Conductive via 334 penetrates barrier layer 340 and CESL 342 to establish electrical coupling with BPR 330. In embodiments, portions of the barrier layer 340 may extend at least partially along the sides of the epitaxial portion 310 grown on top of the one or more fins 304, 306. In an embodiment, these portions of barrier layer 340 serve to constrain the growth of crystals during formation of the epitaxial portion 310. In an embodiment, constrained crystal growth has the added benefit of allowing the fins 304, 306 to be positioned closer together without the epitaxies 310 coming into direct contact with each other. Additionally, constraining the growth of the crystal is beneficial with respect to the desired compressive or tensile properties of the channel, thereby altering the carrier mobility in the device channel. For example, a static random access memory (SRAM) cell might benefit from a weaker PMOS transistor; constraining the SiGe epitaxy to produce a smaller epitaxy volume, and correspondingly less strain in the PMOS channel, would thus benefit the SRAM.
In an embodiment, CESL 342 may partially surround epitaxial portion 310. FIGS. 4A-4T illustrate various fabrication stages for a BPR within a transistor structure, including a protective layer to protect the BPR and constrain epitaxial growth, according to various embodiments. FIG. 4A shows a cross-section of a transistor structure including a layer 402 coupled to a plurality of fins 404, 406 within an oxide layer 431; layer 402, fins 404, 406 and oxide layer 431 may be similar to layer 102, fins 104, 106 and oxide 132 of FIG. 1. A cap 405 made of nitride may be placed on top of the fins 404, 406. In an embodiment, fins 404, 406 and layer 402 may comprise silicon. At this stage of fabrication, a cut 433 is made through the oxide layer 431 and through the layer 402. FIG. 4B shows the structure of FIG. 4A with a metallization layer 435 applied to the top of the transistor structure to fill the cutout 433. FIG. 4C shows the result of chemical-mechanical polishing (CMP) used to planarize the surface of the transistor structure and remove excess metallization layer 435 from the top of the oxide layer 431 and the top of the nitride cap 405. FIG. 4D shows the result of metal etching of the metallization within cutout 433 to form BPR 430, which may be similar to BPR 130 of FIG. 1 or BPR 330 of FIG. 3. As described above, the height of the BPR 430 is designed based on the distance between the BPR 430 and a second trench connector, such as the second trench connector 116, and the distance between the BPR 430 and a gate, such as the metal gate 108 of FIG. 1. The resulting cross-section of the BPR 430 is also important, both from a resistive standpoint and for the resulting current-carrying capacity of the rail.
If a larger cross section of the BPR is required, the cut 433 can be made deeper, since the position of the recess of the metal 435 placed on top of the rail is dictated by the required minimum dielectric spacing from the gate. The height of the BPR 430 is thus important to ensure good electrical isolation between the BPR and the gate.

FIG. 4E shows refilling of cut 433 with more dielectric 432 (which may be similar to oxide layer 431), polishing this dielectric using the nitride fin cap as a polish stop, and then recessing the dielectric layer 432 using a non-selective etch (which also removes the nitride cap 405). In an embodiment, the dielectric layer 432 is not completely etched away, but is recessed such that it still caps the BPR 430; the dielectric layer 432 is found at this stage between the fins 404, 406 and above the BPR 430. Note that, in an embodiment, the level of dielectric layer 432 above BPR 430 will determine, at a later fabrication stage, the distance between BPR 430 and a metal gate, such as metal gate 108 of FIG. 1.

FIG. 4F shows the results of a phase of the gate patterning scheme that includes depositing a layer 437 of amorphous silicon, which may then be polished. A silicon nitride (SiN) cap 439 may then be placed on the amorphous silicon layer 437. A photolithographic process is then used to define a grid of parallel openings for patterning groups of different gates into the amorphous silicon.

FIG. 4G shows a cross section similar to that of FIG. 4F, except that the cross section is at a different Y depth of the transistor structure, between the gates. For clarity, the outlines of the amorphous silicon layer 437 and the SiN cap 439 are carried forward for reference in subsequent figures.

FIG. 4H shows that a barrier layer 441 has been deposited, which may be similar to barrier layer 340 of FIG. 3, covering the sides and tops of the fins 404, 406.
In an embodiment, this barrier layer 441 forms a protective layer to protect the BPR 430. In an embodiment, barrier layer 441 may be a dielectric, or may be a multilayer stack of different dielectrics. In an embodiment, these different dielectrics can be chosen to optimize the spacer structure for various functions; for example, a layer that is resistant to erosion from cleaning processes may directly contact the amorphous silicon gate to preserve the critical dimension of the final gate length. In an embodiment, a carbon-rich oxide may be used to reduce the overall dielectric constant of the final spacer stack and limit the parasitic Miller capacitance of the device. In an embodiment, the dielectric layer may include SiN, SiO2, or silicon oxycarbide (SiOC). Other dielectrics, including boron-doped dielectrics, may be used.

FIG. 4I shows an oxide layer 451 deposited to fill the gap between fins 404, 406. In an embodiment, this deposition may be accomplished using flowable chemical vapor deposition (FCVD) dielectrics. Note that this material will flow in the third dimension over the respective layers of the amorphous silicon layer 437 and the SiN cap 439.

FIG. 4J shows the results of a dry isotropic recess of oxide layer 451. The tops of the fins are now exposed, as is the gate cap.

FIG. 4K shows the result of selectively etching the portion of barrier layer 440 that is on top of fins 404, 406. Note that the resulting barrier layer 440 no longer covers the tops of the fins 404, 406. Although selective to the oxide dielectric 451, this etch can recess the SiN cap 439 of the amorphous silicon layer 437.

FIG. 4L shows the result of a cavity etch 457 into fins 404, 406. Note that this stage may include a selective etch of silicon or silicon germanium (SiGe). Note that, in an embodiment, this etch phase will not etch the dielectric. This means that gates made of amorphous silicon but capped with dielectric on all exposed sides will not be etched.
Also, in an embodiment, an oxide dielectric 451 remains between the walls of the barrier layer 440.

FIG. 4M shows the deposition of a second spacer 459 over the top layer of the transistor structure. In embodiments, the second spacer may be referred to as a spacer, a liner, or a hard mask. This stage is the initial stage of establishing the source and drain, as discussed further below.

FIG. 4N shows placing a photoresist layer 462 over NMOS fins 404, and removing the second spacer 459 over PMOS fins 406 using a selective etch where the second spacer is not covered by the photoresist.

FIG. 4O shows the results of the removal of photoresist layer 462, and of the pre-clean etch performed to prepare the tops of PMOS fins 406 for epitaxial growth. Note that, in an embodiment, portion 455 of oxide layer 451 may be etched away during the pre-clean etch process. This pre-clean etch process is important to thoroughly clean the silicon surface for proper epitaxial growth. SiGe epitaxy 461 may then be grown on top of PMOS fin 406. Note that the walls of barrier layer 440, reinforced by oxide dielectric layer 451, constrain the growth of epitaxial portion 461.

FIG. 4P shows the result of depositing a silicon nitride (SiN) hard mask 465. In an embodiment, this may be 2 nm to 3 nm of silicon nitride.

FIG. 4Q shows an applied photoresist mask 464, where the silicon nitride hard mask 465 is removed, and the second spacer 459 is also removed, where not covered by the photoresist mask 464. This opens up the area above the NMOS fin 404 for epitaxial growth.

FIG. 4R shows that photoresist 464 has been removed and a silicon epitaxy 468 has been grown on NMOS fin 404. Note that the SiGe epitaxial portion 461 has been protected by the silicon nitride hard mask 465 during the growth of the silicon epitaxy 468. The epitaxial silicon may be doped with phosphorus (P) and may be referred to as SiP.

FIG. 4S shows that the silicon nitride hard mask 465 is selectively removed.
Note that the BPR 430 is fully protected by the barrier layer 440 during the successive steps involved in the epitaxy module.

FIG. 4T may be similar to FIG. 3, showing a final stage in which material over barrier layer 440 has been removed and a contact etch stop layer (CESL) 442 has been placed. Note that portions of CESL 442 may wrap around epitaxially grown regions where epitaxial portions have grown over the walls of barrier layer 440. A through-via electrical connector 434 (which may be similar to conductive via 334 of FIG. 3) extends through barrier layer 440 and CESL 442 to electrically couple BPR 430 to trench connector 416 (which may be similar to second trench connector 316 of FIG. 3).

FIG. 5 illustrates an example process for fabricating a transistor structure with a BPR, according to various embodiments. Process 500 may be implemented using any of the techniques or processes described herein, in particular those described with respect to FIGS. 1-4.

At block 502, the process may include identifying a substrate having a first side and a second side opposite the first side.

At block 504, the process may also include forming a plurality of fins on the first side of the substrate, the plurality of fins being substantially parallel to each other and substantially perpendicular to the substrate.

At block 506, the process may also include forming a power rail coupled to the first side of the substrate, the power rail being below a level of the plurality of fins.

Implementations of embodiments of the invention may be formed or carried out on a substrate, such as a semiconductor substrate. In one embodiment, the semiconductor substrate may be a crystalline substrate formed using bulk silicon or a silicon-on-insulator substructure.
In other embodiments, the semiconductor substrate may be formed using alternative materials (which may or may not be combined with silicon) including, but not limited to, germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, indium gallium arsenide, gallium antimonide, or other combinations of group III-V or group IV materials. Although several examples of materials from which a substrate may be formed are described herein, any material that may serve as the basis upon which a semiconductor device may be built falls within the spirit and scope of the present invention.

Multiple transistors, such as metal-oxide-semiconductor field-effect transistors (MOSFETs, or simply MOS transistors), may be fabricated on the substrate. In various embodiments of the invention, the MOS transistors may be planar transistors, non-planar transistors, or a combination of both. Non-planar transistors include FinFET transistors, such as double-gate and tri-gate transistors, and wraparound or all-around gate transistors, such as nanoribbon and nanowire transistors. Although the embodiments described herein may only illustrate planar transistors, it should be noted that the invention may also be practiced using non-planar transistors.

Each MOS transistor includes a gate stack formed of at least two layers, a gate dielectric layer and a gate electrode layer. The gate dielectric layer may comprise a single layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide (SiO2), and/or high-k dielectric materials. High-k dielectric materials may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc.
Examples of high-k materials that can be used in the gate dielectric layer include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, when high-k materials are used, an anneal process may be performed on the gate dielectric layer to improve its quality.

A gate electrode layer is formed on the gate dielectric layer and may be composed of at least one P-type work function metal or N-type work function metal, depending on whether the transistor is to be a PMOS transistor or an NMOS transistor. In some embodiments, the gate electrode layer may consist of a stack of two or more metal layers, where one or more metal layers are work function metal layers and at least one metal layer is a fill metal layer.

For PMOS transistors, metals that can be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides such as ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a work function between about 4.9 eV and about 5.2 eV. For NMOS transistors, metals that can be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals, such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide.
An N-type metal layer will enable the formation of an NMOS gate electrode with a work function between about 3.9 eV and about 4.2 eV.

In some embodiments, the gate electrode may consist of a "U"-shaped structure including a bottom portion substantially parallel to the substrate surface and two sidewall portions substantially perpendicular to the top surface of the substrate. In another embodiment, at least one of the metal layers forming the gate electrode may simply be a planar layer substantially parallel to the top surface of the substrate and not include sidewall portions substantially perpendicular to the top surface of the substrate. In other embodiments of the present invention, the gate electrode may be composed of a combination of U-shaped structures and planar non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed on top of one or more planar non-U-shaped layers.

In some embodiments of the present invention, a pair of sidewall spacers may be formed on opposite sides of the gate stack to sandwich the gate stack. The sidewall spacers may be formed of materials such as silicon nitride, silicon oxide, silicon carbide, carbon-doped silicon nitride, and silicon oxynitride. Processes for forming sidewall spacers are well known in the art and generally include deposition and etch process steps. In alternative embodiments, multiple spacer pairs may be used; for example, two or four pairs of sidewall spacers may be formed on opposite sides of the gate stack.

As is known in the art, source and drain regions are formed within the substrate adjacent to the gate stack of each MOS transistor. The source and drain regions are typically formed using an implantation/diffusion process or an etch/deposition process. In the former process, dopant ions such as boron, aluminum, antimony, phosphorus, or arsenic may be implanted into the substrate to form the source and drain regions.
The ion implantation process is typically followed by an annealing process that activates the dopants and diffuses them further into the substrate. In the latter process, the substrate may first be etched to form recesses at the locations of the source and drain regions. An epitaxial deposition process may then be performed to fill the recesses with the material used to fabricate the source and drain regions. In some embodiments, silicon alloys such as silicon germanium or silicon carbide may be used to fabricate the source and drain regions. In some embodiments, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorus. In further embodiments, one or more alternative semiconductor materials, such as germanium or a group III-V material or alloy, may be used to form the source and drain regions. And in further embodiments, one or more layers of metal and/or metal alloys may be used to form the source and drain regions.

One or more interlayer dielectrics (ILDs) are deposited over the MOS transistors. The ILD layers may be formed using dielectric materials known for their applicability in integrated circuit structures, such as low-k dielectric materials. Examples of dielectric materials that may be used include, but are not limited to, silicon dioxide (SiO2), carbon-doped oxide (CDO), silicon nitride, organic polymers such as perfluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass. The ILD layers may include pores or air gaps to further reduce their dielectric constant.

FIG. 6 illustrates a computing device 600 according to one embodiment of the invention. Computing device 600 houses a board 602. Board 602 may include a number of components, including, but not limited to, a processor 604 and at least one communication chip 606. Processor 604 is physically and electrically coupled to board 602.
In some implementations, the at least one communication chip 606 is also physically and electrically coupled to board 602. In further implementations, the communication chip 606 is part of the processor 604.

Depending on its applications, computing device 600 may include other components that may or may not be physically and electrically coupled to board 602. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disc (CD), digital versatile disc (DVD), and so forth).

Communication chip 606 enables wireless communications for the transfer of data to and from computing device 600. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. Communication chip 606 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (the IEEE 802.11 family), WiMAX (the IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, and any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. Computing device 600 may include a plurality of communication chips 606.
For instance, a first communication chip 606 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 606 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

Processor 604 of computing device 600 includes an integrated circuit die packaged within processor 604. In some embodiments of the invention, the integrated circuit die of the processor includes one or more devices (e.g., MOS-FET transistors) built in accordance with embodiments of the invention. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

Communication chip 606 also includes an integrated circuit die packaged within communication chip 606. In accordance with another embodiment of the invention, the integrated circuit die of the communication chip includes one or more devices (e.g., MOS-FET transistors) built in accordance with embodiments of the invention.

In further implementations, another component housed within computing device 600 may contain an integrated circuit die that includes one or more devices, such as MOS-FET transistors, built in accordance with implementations of the invention.

In various implementations, computing device 600 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, computing device 600 may be any other electronic device that processes data.

FIG. 7 illustrates an interposer 700 incorporating one or more embodiments of the invention.
The interposer 700 is an intervening substrate used to bridge a first substrate 702 to a second substrate 704. The first substrate 702 may be, for instance, an integrated circuit die. The second substrate 704 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of an interposer 700 is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, an interposer 700 may couple an integrated circuit die to a ball grid array (BGA) 706 that can subsequently be coupled to the second substrate 704. In some embodiments, the first and second substrates 702/704 are attached to opposing sides of the interposer 700. In other embodiments, the first and second substrates 702/704 are attached to the same side of the interposer 700. And in further embodiments, three or more substrates are interconnected by way of the interposer 700.

The interposer 700 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In further implementations, the interposer may be formed of alternate rigid or flexible materials, which may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials.

The interposer 700 may include metal interconnects 708 and vias 710, including but not limited to through-silicon vias (TSVs) 712. The interposer 700 may further include embedded devices 714, including both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the interposer 700.
According to an embodiment of the present invention, the apparatus or process disclosed herein may be used in the manufacture of interposer 700.

The above description of illustrated embodiments, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the embodiments, as those skilled in the relevant art will recognize.

These modifications may be made to the embodiments in view of the above detailed description. The terms used in the following claims should not be construed to limit the embodiments to the specific implementations disclosed in the specification and claims. Rather, the scope of the invention is to be determined solely by the appended claims, which are to be construed in accordance with established principles of claim interpretation.

The following paragraphs describe examples of various embodiments.

Examples

Example 1 is a transistor structure comprising: a substrate; a plurality of fins substantially parallel to each other and substantially perpendicular to the substrate; and a power rail between the plurality of fins.

Example 2 includes the transistor structure of Example 1, wherein the power rail is below a height of the plurality of fins.

Example 3 includes the transistor structure of Example 1, wherein the plurality of fins includes NMOS and PMOS fins.

Example 4 includes the transistor structure of Example 1, wherein the power rail is surrounded by oxide.

Example 5 includes the transistor structure of Example 1, wherein the power rail is proximate to the substrate.

Example 6 includes the transistor structure of any of Examples 1-5, wherein the power rail is located between a barrier layer and the substrate, wherein the barrier layer protects the power rail during fabrication of the transistor structure.

Example 7 includes the transistor structure of Example 6, wherein the barrier layer includes a dielectric.

Example 8 includes the transistor structure of Example 6, further comprising an electrical contact electrically coupled to the power rail and extending through the barrier layer.

Example 9 includes the transistor structure of Example 8, wherein the electrical contact is a through via structure or a through via bus structure.

Example 10 is a transistor structure comprising: a substrate; a plurality of fins substantially parallel to each other and substantially perpendicular to the substrate; and a barrier layer above the substrate and substantially parallel to the substrate, wherein a portion of the barrier layer is substantially perpendicular to the substrate along sides of the plurality of fins, and wherein the barrier layer along the sides of the plurality of fins respectively surrounds portions of an NMOS epitaxy or a PMOS epitaxy along a portion of the plurality of fins.

Example 11 includes the transistor structure of Example 10, further comprising a dielectric material coupled to the portion of the barrier layer perpendicular to the substrate, wherein the dielectric material on an opposite side of the fin supports the portion of the barrier layer perpendicular to the substrate.

Example 12 includes the transistor structure of Example 11, wherein the dielectric material on the opposite side of the fin supports the portion of the barrier layer perpendicular to the substrate during growth of the NMOS epitaxy or the PMOS epitaxy during fabrication of the transistor structure.

Example 13 includes the transistor structure of any of Examples 10-12, further comprising: a contact etch stop layer (CESL) above and substantially parallel to the barrier layer.

Example 14 includes the transistor structure of Example 13, wherein a portion of the CESL is not parallel to the barrier layer and extends along a side of the NMOS epitaxy or a side of the PMOS epitaxy.

Example 15 includes the transistor structure of Example 13, further comprising an electrical contact extending from above the CESL, through the CESL, and through the barrier layer toward the substrate.

Example 16 includes the transistor structure of Example 15, wherein the electrical contact is electrically coupled to a power rail on the substrate and below the barrier layer.

Example 17 is a method for creating a transistor structure, the method comprising: identifying a substrate having a first side and a second side opposite the first side; forming a plurality of fins on the first side of the substrate, the plurality of fins substantially parallel to each other and substantially perpendicular to the substrate; and forming a power rail coupled to the first side of the substrate below a height of the plurality of fins.

Example 18 includes the method of Example 17, further comprising: enclosing the power rail within an oxide; and applying a barrier layer on the oxide and over the power rail, the barrier layer protecting the power rail during subsequent fabrication of the transistor structure.

Example 19 includes the method of Example 18, further comprising: growing an NMOS epitaxy or a PMOS epitaxy, respectively, on top of the plurality of fins; and depositing a contact etch stop layer (CESL) over the barrier layer, the CESL or the barrier layer at least partially surrounding the grown NMOS epitaxy or PMOS epitaxy.

Example 20 includes the method of Example 19, further comprising: forming a conductive through via extending from above the CESL, through the CESL, and through the barrier layer toward the substrate; and electrically coupling the conductive through via to the power rail.

Example 21 is a transistor structure comprising: a substrate; a plurality of fins substantially parallel to each other and substantially perpendicular to the substrate; and a power rail between the plurality of fins, wherein the power rail is below a height of the plurality of fins and is surrounded by oxide; wherein the plurality of fins includes NMOS and PMOS fins, and wherein the power rail is located between a barrier layer and the substrate, the barrier layer protecting the power rail during fabrication of the transistor structure.

Example 22 includes the transistor structure of Example 21, wherein the power rail is proximate to the substrate.

Example 23 includes the transistor structure of Example 21, wherein a vertical separation between a top of the power rail and an underside of the barrier layer is in the range of 5 nm to 30 nm, or more narrowly in the range of 10 nm to 20 nm.

Example 24 includes the transistor structure of Example 21, wherein a vertical separation between the top of the power rail and a metal gate is in the range of 50 nm to 100 nm, or more narrowly in the range of 70 nm to 80 nm, the power rail being placed under the metal gate.

Example 25 includes the transistor structure of Example 21, wherein the barrier layer protecting the power rail also constrains epitaxial growth of the transistor source and drain during fabrication of the transistor structure.

Example 26 includes the transistor structure of Example 25, further comprising an electrical contact electrically coupled to the power rail and extending through the barrier layer.

Example 27 includes the transistor structure of Example 25, wherein the barrier layer is a silicon nitride dielectric film.

Example 28 includes the transistor structure of Example 25, wherein the conductive via comprises ruthenium or molybdenum.

Example 29 includes the transistor structure of Example 25, wherein a minimum separation between fins is less than 28 nm.

Example 30 is a transistor structure comprising: a substrate; a plurality of fins substantially parallel to each other and substantially perpendicular to the substrate; and a barrier layer above the substrate and substantially parallel to the substrate, wherein a portion of the barrier layer is substantially perpendicular to the substrate along sides of the plurality of fins, and wherein the barrier layer along the sides of the plurality of fins respectively surrounds portions of an NMOS epitaxy or a PMOS epitaxy along a portion of the plurality of fins.

Example 31 includes the transistor structure of Example 30, further comprising a dielectric material coupled to the portion of the barrier layer perpendicular to the substrate, wherein the dielectric material on an opposite side of the fin supports the portion of the barrier layer perpendicular to the substrate.

Example 32 includes the transistor structure of Example 30, wherein the dielectric material on the opposite side of the fin supports the portion of the barrier layer perpendicular to the substrate during growth of the NMOS epitaxy or the PMOS epitaxy during fabrication of the transistor structure.

Example 33 includes the transistor structure of Example 30, further comprising: a contact etch stop layer (CESL) above and substantially parallel to the barrier layer.

Example 34 includes the transistor structure of Example 33, wherein a portion of the CESL is not parallel to the barrier layer and extends along a side of the NMOS epitaxy or a side of the PMOS epitaxy.

Example 35 includes the transistor structure of Example 33, further comprising an electrical contact extending from above the CESL, through the CESL, and through the barrier layer toward the substrate.

Example 36 includes the transistor structure of Example 35, wherein the electrical contact is electrically coupled to a power rail on the substrate and below the barrier layer.

Example 37 includes the transistor structure of Example 35, wherein the electrical contact is made of low-resistivity ruthenium or molybdenum metal.

Example 38 is a method for creating a transistor structure, the method comprising: identifying a substrate having a first side and a second side opposite the first side; forming a plurality of fins on the first side of the substrate, the plurality of fins substantially parallel to each other and substantially perpendicular to the substrate; and forming a power rail coupled to the first side of the substrate below a height of the plurality of fins.

Example 39 includes the method of Example 38, further comprising: enclosing the power rail within an oxide; and applying a barrier layer on the oxide and over the power rail, the barrier layer protecting the power rail during subsequent fabrication of the transistor structure.

Example 40 includes the method of Example 39, further comprising: growing an NMOS epitaxy or a PMOS epitaxy, respectively, on top of the plurality of fins; and depositing a contact etch stop layer (CESL) over the barrier layer, the CESL or the barrier layer at least partially surrounding the grown NMOS epitaxy or PMOS epitaxy.

Example 41 includes the method of Example 40, further comprising: forming a conductive through via extending from above the CESL, through the CESL, and through the barrier layer toward the substrate; and electrically coupling the conductive through via to the power rail.
An instruction set architecture (ISA) for an application specific signal processor (ASSP) is tailored to digital signal processing applications. The ISA, implemented within the ASSP, is adapted to DSP algorithmic structures. The ISA of the present invention includes flexible data typing, permutation, and type matching operations (1101, 1102, 1104 and 1106). The flexible data typing, permutation, and type matching of operands provides programming flexibility to support different filtering and DSP algorithms having different types of filter coefficients or data samples. A data typer and aligner within each signal processing unit of the ASSP supports the flexible data typing, permutation, and type matching of operands of the instruction set architecture.
CLAIMS
What is claimed is:
1. A signal processor for performing digital signal processing instructions with operands having flexible data types, the signal processor comprising: at least one signal processing unit having, a first adder configured to add a pair of operands together; a first multiplier configured to multiply a pair of operands together; and a data typer and aligner configured to align and selectively select a set of data bits on a first data bus as a first operand for coupling into the first multiplier or the first adder, the alignment and selection of the set of data bits on the first data bus being in response to a data type field. 2. The signal processor of claim 1, wherein, the data typer and aligner includes, a first multiplexer having an input coupled to the first data bus and an output coupled to a first input of the first adder, the first multiplexer to select the set of data bits on the first data bus for coupling into the first adder as the first operand; and a second multiplexer having an input coupled to the first data bus and an output coupled to a first input of the first multiplier, the second multiplexer to select the set of data bits on the first data bus for coupling into the first multiplier as the first operand. 3. The signal processor of claim 1 wherein, the alignment and selection of the set of data bits on the first data bus is further responsive to a permute field. 4. The signal processor of claim 1 wherein, the data typer and aligner is further configured to align and selectively select a set of data bits on a second data bus as a second operand for coupling into the first multiplier or the first adder, the alignment and selection of the set of data bits on the second data bus being in response to a data type field. 5. The signal processor of claim 4 wherein, the alignment and selection of the set of data bits on the first and second data buses is further responsive to a permute field. 6. 
The signal processor of claim 4, wherein, the data typer and aligner includes, a first multiplexer having an input coupled to the first data bus and an output coupled to a first input of the first adder, the first multiplexer to select the set of data bits on the first data bus for coupling into the first adder as the first operand; a second multiplexer having an input coupled to the first data bus and an output coupled to a first input of the first multiplier, the second multiplexer to select the set of data bits on the first data bus for coupling into the first multiplier as the first operand; a third multiplexer having an input coupled to the second data bus and an output coupled to a second input of the first adder, the third multiplexer to select the set of data bits on the second data bus for coupling into the first adder as the second operand; and a fourth multiplexer having an input coupled to the second data bus and an output coupled to a second input of the first multiplier, the fourth multiplexer to select the set of data bits on the second data bus for coupling into the first multiplier as the second operand. 7. The signal processor of claim 1, wherein, the data type field is in an access control register. 8. The signal processor of claim 1, wherein, the data type field is in a digital signal processing instruction. 9. A method of performing digital signal processing (DSP) operations using flexible data type operands, the method comprising: fetching a first and second operand for a DSP instruction; decoding settings of a data type field to determine the data types of the first and second operand; determining if the data types of the first and second operand match; and in response to the first and second operand having matching data types, executing the DSP instruction using the first and second operand. 10. The method of claim 9 wherein, the data type field is in an access control register. 11. 
The method of claim 9 wherein, the data type field is in the DSP instruction. 12. The method of claim 9, further comprising: in response to the first and second operand not having matching data types, performing a type matching to find a matched data type for the first and second operand; and, executing the DSP instruction using the first and second operand in response to finding a matched data type for the first and second operand. 13. The method of claim 12 wherein, the first operand has a data type of N1 x S1 and the second operand has a data type of N2 x S2, and the matched data type is found by selecting the maximum of N1 or N2 and the maximum of S1 or S2 as the matched data type. 14. The method of claim 12 wherein, the first operand has a data type of N1 x S1 and the second operand has a data type of N2 x S2, and the matched data type is found by selecting and discarding the minimum of N1 or N2 and the minimum of S1 or S2 so that the matched data type remains. 15. The method of claim 9, further comprising: decoding a permute field to determine the permutation of operands to a plurality of signal processors to execute the digital signal processing instruction. 16. A method of executing complex digital signal processing (DSP) instructions in a digital signal processor, the method comprising: reading a pair of memory locations specified by a data type indicator to contain a real value and an imaginary value in the pair of memory locations, the pair of memory locations being a first operand; reading at least one more memory location as a second operand; and executing a DSP operation using the first operand and the second operand to obtain a result having a real value and an imaginary value. 17. The method of claim 16, wherein the DSP operation is one of the set of operations of multiplication, addition, extremum, and no operation.
METHOD AND APPARATUS FOR FLEXIBLE DATA TYPES
CROSS REFERENCE TO RELATED APPLICATIONS
This application is a continuation-in-part application and claims the benefit of U.S. Application No. 09/427,174, Attorney Docket No. 004419.P001, filed October 25, 1999 by inventors Ganapathy et al., the disclosure of which prior application is hereby incorporated by reference, verbatim and with the same effect as though it were fully and completely set forth herein, both of which are to be assigned to Vxtel, Inc. This application is also a continuation-in-part application and claims the benefit of U.S. Application No. 09/494,608, Attorney Docket No. 004419.P002, filed January 31, 2000 by inventors Ganapathy et al., the disclosure of which prior application is hereby incorporated by reference, verbatim and with the same effect as though it were fully and completely set forth herein, both of which are to be assigned to Vxtel, Inc.
FIELD OF THE INVENTION
This invention relates generally to the instruction set architectures (ISA) of processors. More particularly, the invention relates to operand data types for digital signal processors.
BACKGROUND OF THE INVENTION
To process data in a computing device, an instruction set is defined. An instruction set having one or more instructions is required for computing devices such as microprocessors, computers, or single chip DSP devices. In defining an instruction set for a computing device, the data type of operands that will be computed is usually predefined based on the number representation to be utilized and the type of hardware that is provided. The data type of the instruction set architecture (ISA) in essence is defined by how and what type of numeric data the computing device will process. The number representation utilized for data types includes the radix or base of a number, whether or not it is to be encoded (binary coded, such as BCD), and the numeric format. The radix ordinarily used in computers is binary, or a radix of two. 
Other radices that may be used in computers are octal (radix of eight), decimal (radix of ten), and hexadecimal (radix of sixteen). If a radix other than two is selected, it ordinarily needs to be binary coded so that it is recognizable by digital logic. For example, if a radix of ten is used, the numbers are binary coded using a four bit binary number, which is referred to as binary coded decimal (BCD). The numeric format is associated with whether the number is to have a fixed point or floating point representation, an integer or fractional format and their associated representations, a normalized or unnormalized format, and whether the bits representing the number are packed or unpacked. In a floating point representation, an exponent number is usually included. In a fixed point representation, the radix point (decimal point for radix of ten) is in a fixed position with respect to the bits or numbers of the data. If the radix point is to the right of all numbers, it is an integer format. If the radix point is to the left of all numbers, it is a fractional format. An example of floating point data types is the single and double precision floating point data types defined in the IEEE 754 specification. The normalized and unnormalized formats are specific to floating point representations and a fractional format. If a number is to be normalized, the number is represented in fractional form and the bit to the immediate right of the radix point is a one. If it is an unnormalized format, the number is represented in fractional form and the bit to the immediate right of the radix point can be either a one or a zero. 
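To make the number-representation choices above concrete, the following is a minimal Python sketch (illustrative only, not part of the specification) of BCD encoding and of reading the same bit pattern under the integer and fractional fixed point formats:

```python
def to_bcd(n: int) -> str:
    """Binary coded decimal: encode each decimal digit as a 4-bit group."""
    return " ".join(format(int(d), "04b") for d in str(n))

def as_integer(bits: int, width: int = 8) -> int:
    """Radix point to the right of all digits: integer format."""
    return bits

def as_fraction(bits: int, width: int = 8) -> float:
    """Radix point to the left of all digits: fractional format."""
    return bits / (1 << width)

print(to_bcd(59))               # 0101 1001
print(as_integer(0b11000000))   # 192
print(as_fraction(0b11000000))  # 0.75
```

The same eight bits 11000000 thus denote 192 in integer format but 0.75 in fractional format; only the position of the radix point differs.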
If the numbers which are to be processed can be positive or negative, the numeric representation needs to have an encoding scheme to provide the representation of both positive and negative values. Typical encoding methods for integer formats are sign-magnitude, diminished-radix complement (one's complement for binary, or a radix of two), and radix complement (two's complement for binary, or a radix of two). If a floating point format is used, both the fraction value and the exponent value may be encoded similarly to the integer encoding methods. Furthermore, depending upon the range of values and/or accuracy desired, the number of bits (i.e., digits), bytes and words for the numeric representation needs to be considered. For example, the number of bits representing a number may be fixed to one thirty-two bit value or four eight bit bytes. As another example, the number of bits representing a number may be thirty-two bits for the fractional format and three bits for the exponent. Additionally, besides a numeric representation, the data type of an instruction set architecture may include character strings or text type of data. The characters in this case are usually encoded into a binary form such as the American Standard Code for Information Interchange (ASCII) code. Another form of encoding is Extended Binary Coded Decimal Interchange Code (EBCDIC). These encoded forms may also be packed from their binary forms into a packed decimal form in order to reduce the number of bits necessary for their representation. The data type for an instruction set architecture of a digital signal processor (DSP) is important. DSPs generally are distinguished from general purpose microprocessors in that DSPs typically support accelerated arithmetic operations by including a dedicated multiplier and accumulator (MAC) for performing multiplication of digital numbers. 
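The three integer encoding methods mentioned above (sign-magnitude, one's complement, and two's complement) can be illustrated with a short Python sketch; this is a didactic aid, not part of the specification:

```python
def _mask(width: int) -> int:
    return (1 << width) - 1

def sign_magnitude(value: int, width: int = 8) -> int:
    """Sign-magnitude: the high bit holds the sign, the rest the magnitude."""
    return value if value >= 0 else (1 << (width - 1)) | -value

def ones_complement(value: int, width: int = 8) -> int:
    """Diminished-radix complement for binary: invert the magnitude bits."""
    return value if value >= 0 else ~(-value) & _mask(width)

def twos_complement(value: int, width: int = 8) -> int:
    """Radix complement for binary: the encoding most hardware uses."""
    return value & _mask(width)

# The value -5 under the three 8-bit encodings:
print(format(sign_magnitude(-5), "08b"))   # 10000101
print(format(ones_complement(-5), "08b"))  # 11111010
print(format(twos_complement(-5), "08b"))  # 11111011
```

Note that the same negative value maps to three different bit patterns, which is precisely why the encoding scheme must be fixed as part of the data type.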
The instruction set for a typical DSP device usually includes only one DSP instruction, a MAC instruction, for performing multiplication of new operands and addition with a prior accumulated value stored within an accumulator register. The data type for the operands of the MAC instruction in prior art DSP devices is usually dependent upon the multiplier hardware performing its portion of the MAC operation. Typically the data type is fixed for the DSP. If it is desirable to perform a MAC operation on operands of data having a format that does not conform to the data type, other instructions need be executed to format the data so that it can be processed by the given MAC instruction with the given data type. These other instructions may include reading and writing data into a memory in order to select the appropriate bits of data of the operand upon which to perform the MAC instruction. One area where DSPs may be utilized is in telecommunication systems. One use of DSPs in telecommunication systems is digital filtering. In this case a DSP is typically programmed with instructions to implement some filter function in the digital or time domain. The mathematical algorithm for a typical finite impulse response (FIR) filter may look like the equation Yn = h0X0 + h1X1 + h2X2 + ... + hNXN, where hn are the fixed filter coefficients numbering from 1 to N and Xn are the data samples. The equation Yn may be evaluated by using a software program. However in some applications, it is necessary that the equation be evaluated as fast as possible. One way to do this is to perform the computations using hardware components such as a DSP device programmed to compute the equation Yn. In order to further speed the process, it is desirable to vectorize the equation and distribute the computation amongst multiple DSPs such that the final result is obtained more quickly. 
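The FIR evaluation above reduces to a chain of MAC operations. The following Python sketch is a behavioral model only, not the ASSP hardware; it shows the scalar MAC loop and a vectorized form that spreads the terms across a number of processing units:

```python
def fir(h, x):
    """Evaluate Yn = h0*X0 + h1*X1 + ... + hN*XN with a single MAC loop,
    mirroring how one multiplier/accumulator pair computes the sum."""
    acc = 0
    for hn, xn in zip(h, x):
        acc += hn * xn   # one multiply-accumulate per coefficient/sample pair
    return acc

def fir_vectorized(h, x, units=4):
    """Spread the terms round-robin across 'units' processing units; since
    addition is associative, the partial sums combine to the same result."""
    partial = [0] * units
    for i, (hn, xn) in enumerate(zip(h, x)):
        partial[i % units] += hn * xn
    return sum(partial)
```

For example, with h = [1, 2, 3, 4, 5] and x = [5, 4, 3, 2, 1], both forms return 35, regardless of how the terms are partitioned among units.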
The multiple DSPs operate in parallel to speed the computation process. In this case, the multiplication of terms is spread across the multipliers of the DSPs equally for simultaneous computations of terms. The adding of terms is similarly spread equally across the adders of the DSPs for simultaneous computations. In vectorized processing, the order of processing terms is unimportant since addition is associative. If the processing order of the terms is altered, it has no effect on the final result expected in a vectorized processing of a function. In a DSP device that is used to perform vectorized processing, it is desirable to consider the type of vectorized processing within the data type of the instruction set architecture to improve data processing efficiency. Oftentimes the type of filtering used in communication systems differs. The different types of filtering systems may use differing types of operands and filter coefficients. In these cases it is desirable to have flexibility in how DSP instructions process differing operands. It is also desirable to improve the efficiency of using computing resources to speed the execution of DSP instructions.
BRIEF SUMMARY OF THE INVENTION
The present invention is briefly summarized in the claims and includes a method, an apparatus and a system as described therein.
BRIEF DESCRIPTIONS OF THE DRAWINGS
Figure 1A is a block diagram of a system utilizing the present invention. Figure 1B is a block diagram of a printed circuit board utilizing the present invention within the gateways of the system in Figure 1A. Figure 2 is a block diagram of the Application Specific Signal Processor (ASSP) of the present invention. Figure 3 is a block diagram of an instance of the core processors within the ASSP of the present invention. Figure 4 is a block diagram of the RISC processing unit within the core processors of Figure 3. Figure 5A is a block diagram of an instance of the signal processing units within the core processors of
Figure 3. Figure 5B is a more detailed block diagram of Figure 5A illustrating the bus structure of the signal processing unit. Figure 6 is the general data type format for an operand of the instruction set architecture of the present invention. Figure 7 is an exemplary bitmap for a control register illustrating data typing and permuting of operands. Figure 8 is an exemplary chart of possible data types of operands that can be selected. Figure 9 is an exemplary chart of possible permutations of operands and their respective orientation to the signal processing units. Figure 10 is a cross sectional block diagram of the data typer and aligner of each signal processing unit of Figure 3. Figure 11 is a block diagram of the bus multiplexers included in the data typer and aligner of each signal processing unit of Figure 10. Figure 12A is a chart of real data types and their alignment for the adders of the signal processing units. Figure 12B is a chart of real data types and their alignment for the multipliers of the signal processing units. Figure 12C is a first chart of complex data types and their alignment for the adders of the signal processing units. Figure 12D is a second chart of complex data types and their alignment for the adders of the signal processing units. Figure 12E is a first chart of complex data types and their alignment for the multipliers of the signal processing units. Figure 12F is a second chart of complex data types and their alignment for the multipliers of the signal processing units. Figure 13A is a chart illustrating data type matching for a real pair of operands. Figure 13B is a chart illustrating data type matching for a complex pair of operands. Figure 13C is a chart illustrating data type matching for a real operand and a complex operand. Figure 14 is an exemplary chart illustrating data type matching for the multipliers of the signal processing units. 
Figure 15A is an exemplary chart illustrating data type matching for the adders of the signal processing units for scalar addition. Figure 15B is an exemplary chart illustrating data type matching for the adders of the signal processing units for vector addition. Figure 16 is a block diagram of the control of the bus multiplexers included in the data typer and aligner of each signal processing unit. Like reference numbers and designations in the drawings indicate like elements providing similar functionality. A letter after a reference designator number represents an instance of an element having the reference designator number.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
In the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present invention. Furthermore, the present invention will be described in particular embodiments but may be implemented in hardware, software, firmware or a combination thereof.
Multiple application specific signal processors (ASSPs) having the instruction set architecture of the present invention include flexible data typing, permutation, and type matching of operands. The flexible data typing, permutation and type matching of operands provides programming flexibility to support different filtering and DSP algorithms having different types of filter coefficients or data samples. The flexibility to support different DSP algorithms within gateways of communication systems can provide improved voice and data communication over a packetized network. 
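The type matching of operands mentioned above follows the rule recited in claim 13: select the maximum of N1 or N2 and the maximum of S1 or S2 as the matched data type. A small Python sketch of that rule follows; the (N, S) tuple representation is an illustrative assumption, not the ASSP's actual type encoding:

```python
def match_data_types(t1, t2):
    """Given two operand data types expressed as (N, S) pairs, i.e. N
    elements of S bits each, return the matched type per the max rule:
    take the larger element count and the larger element size."""
    n1, s1 = t1
    n2, s2 = t2
    return (max(n1, n2), max(s1, s2))

# e.g. a 2x16 operand paired with a 4x8 operand matches to a 4x16 type,
# wide enough to hold either operand without loss.
print(match_data_types((2, 16), (4, 8)))  # (4, 16)
```

Equivalently, per claim 14, one may discard the minimum of N1 or N2 and of S1 or S2 so that the matched type remains.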
Each ASSP includes a serial interface, a buffer memory and four core processors in order to simultaneously process multiple channels of voice or data. Each core processor preferably includes a reduced instruction set computer (RISC) processor and four signal processing units (SPs). Each SP includes multiple arithmetic blocks to simultaneously process multiple voice and data communication signal samples for communication over IP, ATM, Frame Relay, or other packetized network. The four signal processing units can execute digital signal processing algorithms in parallel. Each ASSP is flexible and can be programmed to perform many network functions or data/voice processing functions, including voice and data compression/decompression in telecommunication systems (such as CODECs), particularly packetized telecommunication networks, simply by altering the software program controlling the commands executed by the ASSP. An instruction set architecture for the ASSP is tailored to digital signal processing applications including audio and speech processing such as compression/decompression and echo cancellation. The instruction set architecture implemented with the ASSP, is adapted to DSP algorithmic structures. This adaptation of the ISA of the present invention to DSP algorithmic structures balances the ease of implementation, processing efficiency, and programmability of DSP algorithms. The instruction set architecture may be viewed as being two component parts, one (RISC ISA) corresponding to the RISC control unit and another (DSP ISA) to the DSP datapaths of the signal processing units 300. The RISC ISA is a register based architecture including 16-registers within the register file 413, while the DSP ISA is a memory based architecture with efficient digital signal processing instructions. 
The instruction word for the ASSP is typically 20 bits but can be expanded to 40 bits to control two instructions to be executed in series or parallel, such as two RISC control instructions or extended DSP instructions. The instruction set architecture of the ASSP has four distinct types of instructions to optimize the DSP operational mix. These are (1) a 20-bit DSP instruction that uses mode bits in control registers (i.e., mode registers), (2) a 40-bit DSP instruction having control extensions that can override mode registers, (3) a 20-bit dyadic DSP instruction, and (4) a 40-bit dyadic DSP instruction. These instructions are for accelerating calculations within the core processor of the type where D = [(A op1 B) op2 C] and each of "op1" and "op2" can be a multiply, add or extremum (min/max) class of operation on the three operands A, B, and C. The ISA of the ASSP which accelerates these calculations allows efficient chaining of different combinations of operations. All DSP instructions of the instruction set architecture of the ASSP are dyadic DSP instructions to execute two operations in one instruction with one cycle throughput. A dyadic DSP instruction is a combination of two DSP instructions or operations in one instruction and includes a main DSP operation (MAINOP) and a sub DSP operation (SUB OP). Generally, the instruction set architecture of the present invention can be generalized to combining any pair of basic DSP operations to provide very powerful dyadic instruction combinations. The DSP arithmetic operations in the preferred embodiment include a multiply instruction (MULT), an addition instruction (ADD), a minimize/maximize instruction (MIN/MAX), also referred to as an extrema instruction, and a no operation instruction (NOP), each having an associated operation code ("opcode"). 
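The dyadic form D = [(A op1 B) op2 C] can be modeled behaviorally as follows. This is a Python sketch; the opcode table is hypothetical and merely stands in for the MULT, ADD, MIN/MAX, and NOP operations named above:

```python
import operator

# Hypothetical opcode table for the four DSP operation classes in the text.
OPS = {
    "MULT": operator.mul,
    "ADD": operator.add,
    "MIN": min,
    "MAX": max,
    "NOP": lambda a, b: a,  # pass the first operand through unchanged
}

def dyadic(op1, op2, a, b, c):
    """Model of a dyadic DSP instruction: D = (A op1 B) op2 C, i.e. two
    chained operations issued as one instruction."""
    return OPS[op2](OPS[op1](a, b), c)

# A multiply-accumulate step is the dyadic pairing MULT then ADD:
print(dyadic("MULT", "ADD", 3, 4, 5))  # (3 * 4) + 5 = 17
```

Any pairing of the basic operations can be expressed this way, which is the chaining flexibility the text describes.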
The present invention efficiently executes these dyadic DSP instructions by means of the instruction set architecture and the hardware architecture of the application specific signal processor. Referring now to Figure 1A, a voice and data communication system 100 is illustrated. The system 100 includes a network 101 which is a packetized or packet-switched network, such as IP, ATM, or frame relay. The network 101 allows the communication of voice/speech and data between endpoints in the system 100, using packets. Data may be of any type including audio, video, email, and other generic forms of data. At each end of the system 100, the voice or data requires packetization when transceived across the network 101. The system 100 includes gateways 104A, 104B, and 104C in order to packetize the information received for transmission across the network 101. A gateway is a device for connecting multiple networks and devices that use different protocols. Voice and data information may be provided to a gateway 104 from a number of different sources in a variety of digital formats. In system 100, analog voice signals are transceived by a telephone 108. In system 100, digital voice signals are transceived at private branch exchanges (PBX) 112A and 112B which are coupled to multiple telephones, fax machines, or data modems. Digital voice signals are transceived between PBX 112A and PBX 112B with gateways 104A and 104C, respectively. Digital data signals may also be transceived directly between a digital modem 114 and a gateway 104A. Digital modem 114 may be a Digital Subscriber Line (DSL) modem or a cable modem. Data signals may also be coupled into system 100 by a wireless communication system by means of a mobile unit 118 transceiving digital signals or analog signals wirelessly to a base station 116. Base station 116 converts analog signals into digital signals or directly passes the digital signals to gateway 104B. 
Data may be transceived by means of modem signals over the plain old telephone system (POTS) 107B using a modem 110. Modem signals communicated over POTS 107B are traditionally analog in nature and are coupled into a switch 106B of the public switched telephone network (PSTN). At the switch 106B, analog signals from the POTS 107B are digitized and transceived to the gateway 104B by time division multiplexing (TDM), with each time slot representing a channel and one DS0 input to gateway 104B. At each of the gateways 104A, 104B and 104C, incoming signals are packetized for transmission across the network 101. Signals received by the gateways 104A, 104B and 104C from the network 101 are depacketized and transcoded for distribution to the appropriate destination. Referring now to Figure 1B, a network interface card (NIC) 130 of a gateway 104 is illustrated. The NIC 130 includes one or more application-specific signal processors (ASSPs) 150A-150N. The number of ASSPs within a gateway is expandable to handle additional channels. Line interface devices 131 of NIC 130 provide interfaces to various devices connected to the gateway, including the network 101. In interfacing to the network 101, the line interface devices packetize data for transmission out on the network 101 and depacketize data which is to be received by the ASSP devices. Line interface devices 131 process information received by the gateway on the receive bus 134 and provide it to the ASSP devices. Information from the ASSP devices 150 is communicated on the transmit bus 132 for transmission out of the gateway. A traditional line interface device is a multi-channel serial interface or a UTOPIA device. The NIC 130 couples to a gateway backplane/network interface bus 136 within the gateway 104. 
Bridge logic 138 transceives information between bus 136 and NIC 130. Bridge logic 138 transceives signals between the NIC 130 and the backplane/network interface bus 136 onto the host bus 139 for communication to either one or more of the ASSP devices 150A-150N, a host processor 140, or a host memory 142. Optionally coupled to each of the one or more ASSP devices 150A through 150N (generally referred to as ASSP 150) are optional local memory 145A through 145N (generally referred to as optional local memory 145), respectively. Digital data on the receive bus 134 and transmit bus 132 is preferably communicated in bit wide fashion. While internal memory within each ASSP may be sufficiently large to be used as a scratchpad memory, optional local memory 145 may be used by each of the ASSPs 150 if additional memory space is necessary. Each of the ASSPs 150 provides signal processing capability for the gateway. The type of signal processing provided is flexible because each ASSP may execute differing signal processing programs. Typical signal processing and related voice packetization functions for an ASSP include (a) echo cancellation; (b) video, audio, and voice/speech compression/decompression (voice/speech coding and decoding); (c) delay handling (packets, frames); (d) loss handling; (e) connectivity (LAN and WAN); (f) security (encryption/decryption); (g) telephone connectivity; (h) protocol processing (reservation and transport protocols, RSVP, TCP/IP, RTP, UDP for IP, and AAL2, AAL1, AAL5 for ATM); (i) filtering; (j) silence suppression; (k) length handling (frames, packets); and other digital signal processing functions associated with the communication of voice and data over a communication system. Each ASSP 150 can perform other functions in order to transmit voice and data to the various endpoints of the system 100 within a packet data stream over a packetized network. Referring now to Figure 2, a block diagram of the ASSP 150 is illustrated. 
At the heart of the ASSP 150 are four core processors 200A-200D. Each of the core processors 200A-200D is respectively coupled to a data memory 202A-202D through buses 203A-203D. Each of the core processors 200A-200D is also respectively coupled to a program memory 204A-204D through buses 205A-205D. Each of the core processors 200A-200D communicates with outside channels through the multi-channel serial interface 206, the multi-channel memory movement engine 208, buffer memory 210, and data memory 202A-202D. The ASSP 150 further includes an external memory interface 212 to couple to the external optional local memory 145. The ASSP 150 includes an external host interface 214 for interfacing to the external host processor 140 of Figure 1B. Further included within the ASSP 150 are timers 216, clock generators and a phase-locked loop 218, miscellaneous control logic 220, and a Joint Test Action Group (JTAG) test access port 222 for boundary scan testing. The multi-channel serial interface 206 may be replaced with a UTOPIA parallel interface for some applications such as ATM. The ASSP 150 further includes a microcontroller 223 to perform process scheduling for the core processors 200A-200D and the coordination of the data movement within the ASSP, as well as an interrupt controller 224 to assist in interrupt handling and the control of the ASSP 150. Referring now to Figure 3, a block diagram of the core processor 200 is illustrated coupled to its respective data memory 202 through buses 203 and program memory 204 through buses 205. Core processor 200 is the block diagram for each of the core processors 200A-200D. Data memory 202 and program memory 204 refer to a respective instance of data memory 202A-202D and program memory 204A-204D, respectively. Buses 203 and 205 refer to a respective instance of buses 203A-203D and 205A-205D, respectively. The core processor 200 includes four signal processing units SP0 300A, SP1 300B, SP2 300C and SP3 300D. 
The core processor 200 further includes a reduced instruction set computer (RISC) control unit 302 and a pipeline control unit 304. The signal processing units 300A-300D perform the signal processing tasks on data while the RISC control unit 302 and the pipeline control unit 304 perform control tasks related to the signal processing function performed by the SPs 300A-300D. The control provided by the RISC control unit 302 is coupled with the SPs 300A-300D at the pipeline level to yield a tightly integrated core processor 200 that keeps the utilization of the signal processing units 300 at a very high level. Program memory 204 couples to the pipe control 304, which includes an instruction buffer that acts as a local loop cache. The instruction buffer in the preferred embodiment has the capability of holding four instructions. The instruction buffer of the pipe control 304 reduces the power consumed in accessing the main memories to fetch instructions during the execution of program loops. The signal processing tasks are performed on the datapaths within the signal processing units 300A-300D. The nature of the DSP algorithms is such that they are inherently vector operations on streams of data that have minimal temporal locality (data reuse). Hence, a data cache with demand paging is not used because it would not function well and would degrade operational performance. Therefore, the signal processing units 300A-300D are allowed to access vector elements (the operands) directly from data memory 202 without the overhead of issuing a number of load and store instructions into memory, resulting in very efficient data processing. Thus, the instruction set architecture of the present invention, having a 20-bit instruction word which can be expanded to a 40-bit instruction word, achieves better efficiencies than VLIW architectures using 256-bit or higher instruction widths by adapting the ISA to DSP algorithmic structures. 
The adapted ISA leads to very compact and low-power hardware that can scale to higher computational requirements. The operands that the ASSP can accommodate are varied in data type and data size. The data type may be real or complex, an integer value or a fractional value, with vectors having multiple elements of different sizes. The data size in the preferred embodiment is 64 bits, but larger data sizes can be accommodated with proper instruction coding.

Referring now to Figure 4, a detailed block diagram of the RISC control unit 302 is illustrated. RISC control unit 302 includes a data aligner and formatter 402, a memory address generator 404, three adders 406A-406C, an arithmetic logic unit (ALU) 408, a multiplier 410, a barrel shifter 412, and a register file 413. The register file 413 points to a starting memory location from which the memory address generator 404 can generate addresses into data memory 202. The RISC control unit 302 is responsible for supplying addresses to data memory so that the proper data stream is fed to the signal processing units 300A-300D. The RISC control unit 302 is a register-to-register organization with load and store instructions to move data to and from data memory 202. Data memory addressing is performed by the RISC control unit using a 32-bit register as a pointer that specifies the address, post-modification offset, and type and permute fields. The type field allows a variety of natural DSP data to be supported as a "first class citizen" in the architecture. For instance, the complex type allows direct operations on complex data stored in memory, removing a number of bookkeeping instructions. This is useful in supporting QAM demodulators in data modems very efficiently.

Referring now to Figure 5A, a block diagram of a signal processing unit 300 is illustrated, which represents an instance of the SPs 300A-300D.
Each of the signal processing units 300 includes a data typer and aligner 502, a first multiplier M1 504A, a compressor 506, a first adder A1 510A, a second adder A2 510B, an accumulator register 512, a third adder A3 510C, and a second multiplier M2 504B. Adders 510A-510C are similar in structure and are generally referred to as adder 510. Multipliers 504A and 504B are similar in structure and generally referred to as multiplier 504. Multipliers 504A and 504B have multiplexers 514A and 514B respectively at their input stages to multiplex different inputs from different busses into the multipliers. Likewise, adders 510A, 510B, and 510C have multiplexers 520A, 520B, and 520C respectively at their input stages to multiplex different inputs from different busses into the adders. These multiplexers and other control logic allow the adders, multipliers, and other components within the signal processing units 300A-300D to be flexibly interconnected by proper selection of multiplexers. In the preferred embodiment, multiplier M1 504A, compressor 506, adder A1 510A, adder A2 510B, and accumulator 512 can receive inputs directly from external data buses through the data typer and aligner 502, while adder A3 510C and multiplier M2 504B receive inputs from the accumulator 512 or from the outputs of the execution units multiplier M1 504A, compressor 506, adder A1 510A, and adder A2 510B.

Referring now to Figure 5B, a more detailed block diagram of the functional blocks and the bus structure of the signal processing unit 300 is illustrated. Flexible data typing is possible because of the structure and functionality provided in each signal processing unit. The buses 203 to data memory 202 include a Z output bus 532, an X input bus 531, and a Y input bus 533. Output signals are coupled out of the signal processor 300 on the Z output bus 532 through the data typer and aligner 502.
Input signals are coupled into the signal processor 300 on the X input bus 531 and Y input bus 533 through the data typer and aligner 502. Two operands can be loaded in parallel together from the data memory 202 into the signal processor 300, one on each of the X bus 531 and the Y bus 533. Internal to the signal processor 300, the SXM bus 552 and the SYM bus 556 couple between the data typer and aligner 502 and the multiplier M1 504A for two sources of operands from the X bus 531 and the Y bus 533 respectively. The SXA bus 550 and the SYA bus 554 couple between the data typer and aligner 502 and the adder A1 510A and between the data typer and aligner 502 and the adder A2 510B for two sources of operands from the X bus 531 and the Y bus 533 respectively. In the preferred embodiment, the X bus 531 and the Y bus 533 are each sixty-four bits wide, the SXA bus 550 and the SYA bus 554 are each forty bits wide, and the SXM bus 552 and the SYM bus 556 are each sixteen bits wide. Another pair of internal buses couples between the data typer and aligner 502 and the compressor 506 and between the data typer and aligner 502 and the accumulator register AR 512. While the data typer and aligner 502 could have data busses coupling to the adder A3 510C and the multiplier M2 504B, in the preferred embodiment it does not, in order to avoid extra data lines and conserve area usage of an integrated circuit. Output data is coupled from the accumulator register AR 512 into the data typer and aligner 502 over yet another bus. Multiplier M1 504A has buses to couple its output into the inputs of the compressor 506, adder A1 510A, adder A2 510B, and the accumulator registers AR 512. Compressor 506 has buses to couple its output into the inputs of adder A1 510A and adder A2 510B. Adder A1 510A has a bus to couple its output into the accumulator registers 512.
Adder A2 510B has buses to couple its output into the accumulator registers 512. Accumulator registers 512 have buses to couple their output into multiplier M2 504B, adder A3 510C, and the data typer and aligner 502. Adder A3 510C has buses to couple its output into the multiplier M2 504B and the accumulator registers 512. Multiplier M2 504B has buses to couple its output into the inputs of the adder A3 510C and the accumulator registers AR 512.

INSTRUCTION SET ARCHITECTURE

The instruction set architecture of the ASSP 150 is tailored to digital signal processing applications including audio and speech processing such as compression/decompression and echo cancellation. In essence, the instruction set architecture implemented with the ASSP 150 is adapted to DSP algorithmic structures. The adaptation of the ISA of the present invention to DSP algorithmic structures is a balance between ease of implementation, processing efficiency, and programmability of DSP algorithms. The ISA of the present invention provides for data movement operations, DSP/arithmetic/logical operations, program control operations (such as function calls/returns, unconditional/conditional jumps and branches), and system operations (such as privilege, interrupt/trap/hazard handling and memory management control). The instruction set architecture of the ASSP 150 can be viewed as having two component parts, one (RISC ISA) corresponding to the RISC control unit and another (DSP ISA) corresponding to the DSP datapaths of the signal processing units 300. The RISC ISA is a register-based architecture including sixteen registers within the register file 413, while the DSP ISA is a memory-based architecture with efficient digital signal processing instructions.
The instruction word for the ASSP is typically 20 bits but can be expanded to 40 bits to control two RISC or DSP instructions to be executed in series or parallel, such as a RISC control instruction executed in parallel with a DSP instruction, or a 40-bit extended RISC or DSP instruction. The instruction set architecture of the ASSP 150 has four distinct types of instructions to optimize the DSP operational mix. These are (1) a 20-bit DSP instruction that uses mode bits in control registers (i.e., mode registers), (2) a 40-bit DSP instruction having control extensions that can override mode registers, (3) a 20-bit dyadic DSP instruction, and (4) a 40-bit dyadic DSP instruction. These instructions are for accelerating calculations within the core processor 200 of the type where D = [(A op1 B) op2 C] and each of "op1" and "op2" can be a multiply, add, or extremum (min/max) class of operation on the three operands A, B, and C. The ISA of the ASSP 150 which accelerates these calculations allows efficient chaining of different combinations of operations. These types of operations require three operands, which need to be made available to the processor which is to perform the operation. The size of the integrated circuit places limits on the bus structure, which limits the bandwidth to two vector reads and one vector write each cycle into and out of data memory 202. Thus one of the three operands, such as B or C, needs to come from another source within the core processor 200. The third operand can be placed into one of the registers of the accumulator 512 or the RISC register file 413. In order to accomplish this within the core processor 200, there are two subclasses of the 20-bit DSP instructions: (1) A and B specified by a 4-bit specifier, and C and D by a 1-bit specifier, and (2) A and C specified by a 4-bit specifier, and B and D by a 1-bit specifier.
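The chaining of two operations over three operands in the dyadic form D = [(A op1 B) op2 C] can be illustrated with a brief software model. This is only a sketch of the arithmetic being described, not the hardware; the operation names are stand-ins for the multiply/add/extremum classes named above.

```python
# Illustrative software model of the dyadic DSP form D = [(A op1 B) op2 C].
# The op names below are stand-ins for the multiply/add/extremum classes.
OPS = {
    "mult": lambda x, y: x * y,
    "add": lambda x, y: x + y,
    "min": min,
    "max": max,
}

def dyadic(op1, op2, a, b, c):
    """Chain two operations over three operands, as one dyadic instruction does."""
    return OPS[op2](OPS[op1](a, b), c)

# e.g. a multiply-accumulate step: D = (A * B) + C
print(dyadic("mult", "add", 3, 4, 5))  # 17
```

In the hardware described above, op1 would execute in one functional unit (e.g. multiplier M1 504A) and op2 in another (e.g. adder A1 510A) within the same instruction, rather than as two sequential calls.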
Instructions for the ASSP are always fetched 40 bits at a time from program memory, with bits 39 and 19 indicating the type of instruction. After fetching, the instruction is grouped into two sections of 20 bits each for execution of operations. In the case of 20-bit control instructions with parallel execution (bit 39=0, bit 19=0), the two 20-bit sections are control instructions that are executed simultaneously. In the case of 20-bit control instructions for serial execution (bit 39=0, bit 19=1), the two 20-bit sections are control instructions that are executed serially. In the case of 20-bit DSP instructions for serial execution (bit 39=1, bit 19=1), the two 20-bit sections are DSP instructions that are executed serially. In the case of 40-bit DSP instructions (bit 39=1, bit 19=0), the two 20-bit sections form one extended DSP instruction which is executed as a unit. The ISA of the ASSP 150 is fully predicated, providing for predicated execution. Within the 20-bit RISC control instruction word and the 40-bit extended DSP instruction word there are 2 bits of each instruction specifying one of four predicate registers within the RISC control unit 302. Depending upon the condition of the predicate register, instruction execution can conditionally change based on its contents. In order to access operands within the data memory 202 or registers within the accumulator 512 or register file 413, a 6-bit specifier is used in the DSP extended instructions to access operands in memory and registers. Of the six-bit specifier used in the extended DSP instructions, the MSB (Bit 5) indicates whether the access is a memory access or register access. In the preferred embodiment, if Bit 5 is set to logical one, it denotes a memory access for an operand. If Bit 5 is set to a logical zero, it denotes a register access for an operand.
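The four fetch cases above reduce to a two-bit decode of bits 39 and 19 of the fetched word. The following sketch assumes nothing beyond straightforward bit extraction; it is a software illustration of the decode, not the decoder circuit.

```python
# Sketch of how bits 39 and 19 of a fetched 40-bit word select the
# instruction grouping, per the four cases described above.
def decode_pair_type(word40):
    bit39 = (word40 >> 39) & 1
    bit19 = (word40 >> 19) & 1
    return {
        (0, 0): "two 20-bit control instructions, parallel",
        (0, 1): "two 20-bit control instructions, serial",
        (1, 1): "two 20-bit DSP instructions, serial",
        (1, 0): "one 40-bit extended DSP instruction",
    }[(bit39, bit19)]

print(decode_pair_type(1 << 39))  # bit39=1, bit19=0: the 40-bit extended case
```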
If Bit 5 is set to 1, the contents of a specified register (rX where X: 0-7) are used to obtain the effective memory address, and the pointer field is post-modified by one of two possible offsets specified in one of the specified rX registers. If Bit 5 is set to 0, Bit 4 determines which register set has the contents of the desired operand. If Bit 4 is set to 0, then the remaining specified bits 3:0 control access to the registers within the register file 413 or to registers within the signal processing units 300.

DSP INSTRUCTIONS

There are four major classes of DSP instructions for the ASSP 150. These are:

1) Multiply (MULT): Controls the execution of the main multiplier connected to data buses from memory. Controls: rounding, sign of multiply. Operates on vector data specified through the type field in the address register. Second operation: add, sub, min, max in vector or scalar mode.

2) Add (ADD): Controls the execution of the main adder. Controls: absolute value control of the inputs, limiting the result. Second operation: add, add-sub, mult, mac, min, max.

3) Extremum (MIN/MAX): Controls the execution of the main adder. Controls: absolute value control of the inputs, global or running max/min with the T register, TR register recording control. Second operation: add, sub, mult, mac, min, max.

4) Misc: type-match and permute operations.

The ASSP 150 can execute these DSP arithmetic operations in vector or scalar fashion. In scalar execution, a reduction or combining operation is performed on the vector results to yield a scalar result. It is common in DSP applications to perform scalar operations, which are efficiently performed by the ASSP 150. The 20-bit DSP instruction words have 4-bit operand specifiers that can directly access data memory using eight address registers (r0-r7) within the register file 413 of the RISC control unit 302.
The method of addressing by the 20-bit DSP instruction word is register indirect, with the address register specifying the pointer into memory, the post-modification value, the type of data accessed, and the permutation of the data needed to execute the algorithm efficiently. All of the DSP instructions control the multipliers 504A-504B, adders 510A-510C, compressor 506, and the accumulator 512, the functional units of each signal processing unit 300A-300D. In the 40-bit instruction word, the type of extension from the 20-bit instruction word falls into five categories:

1) Control and Specifier extensions that override the control bits in mode registers.

2) Type extensions that override the type specifier in address registers.

3) Permute extensions that override the permute specifier for vector data in address registers.

4) Offset extensions that can replace or extend the offsets specified in the address registers.

5) DSP extensions that control the lower rows of functional units within a signal processing unit 300 to accelerate block processing.

The 40-bit control instructions with the 20-bit extensions further allow a large immediate value (16 to 20 bits) to be specified in the instruction, as well as powerful bit manipulation instructions. Efficient DSP execution is provided with 2x20-bit DSP instructions, with the first 20 bits controlling the top functional units (adders 510A and 510B, multiplier 504A, compressor 506) that interface to data buses from memory and the second 20 bits controlling the bottom functional units (adder 510C and multiplier 504B) that use internal or local data as operands. Efficient DSP execution is also improved by the hardware architecture of the present invention. In this case, efficiency is improved in the manner that data is supplied to and from data memory 202 to feed the four signal processing units 300 and the DSP functional units therein.
The data highway is comprised of buses 203, including the X bus 531 and Y bus 533 for X and Y source operands respectively and the Z bus 532 for a result write. All buses, including X bus 531, Y bus 533, and Z bus 532, are preferably 64 bits wide. The buses are uni-directional to simplify the physical design and reduce transit times of data. In the preferred embodiment, when in a 20-bit DSP mode, if the X and Y buses are both carrying operands read from memory for parallel execution in a signal processing unit 300, the parallel load field can only access registers within the register file 413 of the RISC control unit 302. Additionally, the four signal processing units 300A-300D in parallel provide four parallel MAC units (multiplier 504A, adder 510A, and accumulator 512) that can make simultaneous computations. This reduces the cycle count from the four cycles ordinarily required to perform four MACs to only one cycle.

DATA TYPING, ALIGNING AND PERMUTING

In order for the present invention to adapt to the different DSP algorithmic structures, it provides for flexible data typing and aligning, data type matching, and permutation of operands. Different DSP algorithms may use data samples having varying bit widths such as four bits, eight bits, sixteen bits, twenty-four bits, thirty-two bits, or forty bits. Additionally, the data samples may be real or complex. In the preferred embodiment of the present invention, the multipliers in the signal processing units are sixteen bits wide and the adders in the signal processing units are forty bits wide. The operands are read into the signal processing units from data memory across the X or Y data bus, each of which in the preferred embodiment is sixty-four bits wide. The choice of these bit widths considers the type of DSP algorithms being processed, the operands/data samples, the physical bus widths within an integrated circuit, and the circuit area required to implement the adders and multipliers.
In order to flexibly handle the various data types, the operands are automatically adapted (i.e., aligned) by the present invention to the adder or multiplier respectively. If the data types of the operands differ, then type matching is required. The present invention provides automatic type matching to process disparate operands. Furthermore, various permutations of the operands may be desirable, such as for scaling a vector by a constant, in which case the present invention provides flexible permutations of operands.

Referring now to Figure 6, the general format for the data type of an operand for the present invention is illustrated. In the present invention, the data type for an operand may be represented in the format of N x SR for a real data type or N x SC for a complex or imaginary data type. N refers to the number of signal processing units 300 to which this given operand should be routed. S indicates the size in bits of the operand. R refers to a real data type. C refers to a complex or imaginary data type having a real and imaginary numeric component. In one embodiment of the present invention, the size of the multiplication units is sixteen bits wide and the size of the adders is forty bits wide. In one embodiment of the present invention, the memory bus is sixty-four bits wide so that an operand being transferred from memory may have a width in the range of zero to sixty-four bits. For multiplicands, the operands preferably have a bit width that is a multiple of 4, 8, 16, or 32. For minuends, subtrahends, and addends, the forty-bit adders preferably have operands with a bit width that is a multiple of 4, 8, 16, 32, or 40. In the case that the data type is a complex operand, the operand has a real operand and an imaginary operand. In order to designate the type of operand selected, control registers and instructions of the instruction set architecture include a data type field for designating the type of operand being selected by a user.
Referring now to Figure 7, an exemplary control register of the instruction set architecture of the present invention is illustrated. In Figure 7, a memory address register 700 is illustrated for controlling the selection of operands from the data memory 202 to the signal processing units 300. The memory address register 700 illustrates a number of different memory address registers which are designated in an instruction by a pointer rX. Each of the memory address registers 700 includes a type field 701, a CB bit 702 for circular and bit-reversed addressing support, a permute field 703, a first address offset 704, a second zero address offset 705, and a pointer 706. The type field 701 designates the data type of the operand being selected. The permute field 703 of the memory address register 700 is explained in detail below.

Referring now to Figure 8, an exemplary set of data types to be selected for operands is illustrated. The data type is encoded as a four-bit field in either a control register, such as the memory address register 700, or a DSP instruction directly selecting an operand from a register or memory location. For example, for the data type field 701 having a value of 0000, the operand has a data type of 1 x 16 real. As another example, for the data type field 701 having a value of 0111, the operand has a 2 x 16 complex data type. As yet another example, for the data type field 701 having a value of 1001, the data type of the operand is a 2 x 32 complex operand. The data type field 701 is selected by a user knowing the number of operations that are to be processed together in parallel by the signal processing units 300 (i.e., N of the data type) and the bit width of the operands (i.e., S of the data type). The permute field in control registers, such as the memory address register 700, and instructions allows broadcasting and interchanging operands between signal processing units 300.
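The three example encodings of the type field 701 given above can be collected into a small decoder sketch. Only the codes stated in the text are included; the remaining codes of the exemplary set in Figure 8 are not reproduced here, so the table below is deliberately partial.

```python
# Partial decoder for the 4-bit data type field 701, using only the
# encodings given as examples in the text (other codes are omitted).
KNOWN_TYPES = {
    0b0000: (1, 16, "real"),     # 1 x 16 real
    0b0111: (2, 16, "complex"),  # 2 x 16 complex
    0b1001: (2, 32, "complex"),  # 2 x 32 complex
}

def decode_type(field):
    """Return the N x S type string for a known 4-bit type field value."""
    n, size, kind = KNOWN_TYPES[field]  # N units, S bits, real/complex
    return f"{n} x {size} {kind}"

print(decode_type(0b0111))  # 2 x 16 complex
```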
Referring momentarily back to Figure 3, the X data bus 531, the Y data bus 533, and the Z data bus 532 between the data memory 202 and signal processing units 300 are sixty-four bits wide. Because there are four signal processing units 300A-300D, it is oftentimes desirable for each to receive an operand through one memory access to the data memory 202. On other occasions, it may be desirable for each signal processing unit 300A-300D to have access to the same operand, such that it is broadcast to each.

Referring now to Figure 9, an exemplary set of permutations to select operands for the signal processing units is illustrated. The permutation in the preferred embodiment is encoded as a five-bit field in either a control register, such as permute field 703 in the memory address register 700, or a DSP instruction. The permute field provides the capability of designating how 16-bit increments of the 64-bit data bus are coupled into each of the signal processing units 300A-300D. In Figure 9, the sixty-four bits of the X data bus 531/Y data bus 533 (labeled data busses 203 in Figures 2-3) can be designated at the top from right to left as 0-15, 16-31, 32-47, and 48-63. The permutation of operands on the data bus for the given permute field is in the center, while the permutation type is listed to the right. The data bus permutations in the center are labeled permutations 203A through 203L. While the data on the respective data bus does not change position, the five-bit permute field illustrated to the left of the 64-bit data bus re-arranges how a sixteen-bit data field (labeled A, B, C, and D) on the respective data bus is received by each of the signal processing units 300A-300D. This is how the desired type of permutation is selected. That is, the rightmost sixteen-bit column can be considered as being coupled into SP3 300D over the permutations.
The second column from the right can be considered as being coupled into the signal processing unit SP2 300C over the permutations. The third column from the right can be considered as being coupled into the signal processing unit SP1 300B over the permutations. The leftmost, fourth column from the right, can be considered as being coupled into the signal processing unit SP0 300A over the permutations. In a regular access without any permutation, corresponding to data bus permutation 203A, bits 0-15 of the data bus are designated as D, bits 16-31 are designated as C, bits 32-47 are designated as B, and bits 48-63 are designated as A. This corresponds to the permute field being 00000 in the first row, permutation 203A, of the chart in Figure 9. With regular access chosen for each of the signal processing units 300A-300D to the sixty-four bit data bus, the sixteen bits labeled A are coupled into SP3 300D, for example. The sixteen bits labeled B are coupled into the signal processing unit SP2 300C. The sixteen bits labeled C are coupled into the signal processing unit SP1 300B. The sixteen bits labeled D are coupled into the signal processing unit SP0 300A. In the permute field, the most significant bit (Bit 26 in Figure 9) controls whether the bits of the upper half word and the bits of the lower half word of the data bus are interchangeably input into the signal processing units 300. For example, as viewed from the point of view of the signal processing units 300A-300D, the data bus appears as data bus permutation 203B as compared to permutation 203A. In this case, the combined data fields A and B are interchanged with the combined data fields C and D as the permutation across the signal processing units. The next two bits of the permute field (Bits 25 and 24 of permute field 703) determine how the data fields A and B of the upper half word are permuted across the signal processing units.
The lowest two bits of the permute field (Bits 23 and 22 of the permute field 703) determine how the data fields C and D of the lower half word are to be permuted across the signal processing units. Consider for example the case where the permute field 703 is 00100, which corresponds to the permutation 203C. In this case, the type of permutation is a permutation on the half words of the upper bits of the data fields A and B. As compared with permutation 203A, signal processing unit SP1 300B receives the A data field and signal processing unit SP0 300A receives the B data field in permutation 203C. Consider another example where the permute field 703 is a 00001 bit pattern, which corresponds to the permutation 203D. In this case, the type of permutation is a permutation on the half words of the lower bits of the data fields C and D. The data bus fields C and D are exchanged to permute the half words of the lower bits of the data bus. As compared with permutation 203A, signal processing unit SP3 300D receives the C data field and signal processing unit SP2 300C receives the D data field in permutation 203D. In accordance with the present invention, both sets of upper bits and lower bits can be permuted together. Consider the case where the permute field 703 is a 00101 bit pattern, corresponding to the permutation 203E. In this case, the permute type is permuting half words for both the upper and the lower bits, such that A and B exchange positions and C and D exchange positions. As compared with permutation 203A, signal processing unit SP3 300D receives the C data field, signal processing unit SP2 300C receives the D data field, signal processing unit SP1 300B receives the A data field, and signal processing unit SP0 300A receives the B data field in permutation 203E. Permutations of half words can be combined with the interchange of upper and lower bits as well in the present invention.
Referring now to permutation 203F, the permute field 703 is a 10100 bit pattern. In this case, the upper and lower bits are interchanged and a permutation on the half word of the upper bits is performed, such that A and B are interchanged with C and D and then the half word is permuted. As compared with permutation 203A, signal processing unit SP3 300D receives the B data field, signal processing unit SP2 300C receives the A data field, signal processing unit SP1 300B receives the C data field, and signal processing unit SP0 300A receives the D data field in permutation 203F. Referring now to permutation 203G, the permute field 703 is a 10001 bit pattern. In this case, the data bus fields are interchanged and a permutation of the half word on the lower bits is performed, resulting in a re-orientation of the data bus fields as illustrated in permutation 203G. Referring now to permutation 203H, the permute field 703 is a 10101 bit pattern. In this case, the data bus fields are interchanged and a permutation of half words on both the upper bits and the lower bits has occurred, resulting in a re-orientation of the data bus fields as illustrated in permutation 203H. Broadcasting is also provided by the permute field, as illustrated by permutations 203I, 203J, 203K, and 203L. For example, consider permutation 203I, corresponding to a permute field 703 of a 01001 bit pattern. In this case, the data field A is broadcast to each of the signal processing units 300A-300D. That is, each of the signal processing units 300A-300D reads the data field A off the data bus as the operand. For the permutation 203J having a permute field of 01100 bit pattern, the data field B is broadcast to each of the signal processing units. For permutation 203K having a permute field of a 00010 bit pattern, the data field C is broadcast to each of the signal processing units 300A-300D.
For permutation 203L, the permute field is a 00011 combination and the data field D is broadcast to each of the signal processing units 300A-300D. In this manner, various combinations of permutations and interchanges of data bus fields on the data bus can be selected for re-orientation into the respective signal processing units 300A through 300D. The Z output bus 532 carries the results from the execution units back to memory. The data on the Z output bus 532 is not permuted or typed as it goes back to memory. The respective signal processing units 300A-300D drive the appropriate number of data bits (16, 32, or 64) onto the Z output bus 532 depending upon the type of the operations. The memory writes the data received from the Z output bus 532 using half-word strobes which are driven with the data to indicate its validity.

Referring now to Figure 10, a cross-sectional block diagram illustrates the data typer and aligners 502A, 502B, 502C, and 502D of the signal processing blocks 300A, 300B, 300C, and 300D respectively. Each of the data typer and aligners 502A, 502B, 502C, and 502D includes an instance of a bus multiplexer 1001 for the X bus 531 and a bus multiplexer 1002 for the Y bus 533. For example, the data typer and aligner 502A of signal processing unit SP0 300A includes the bus multiplexer 1001A and the bus multiplexer 1002A. The bus multiplexer 1001A has an input coupled to the X bus 531 and an output coupled to the SX0 bus 1005A. The bus multiplexer 1002A has an input coupled to the Y bus 533 and an output coupled to the SY0 bus 1006A. A control signal bus 1011 is coupled into each of the bus multiplexers 1001A-1001D. A control signal bus 1012 is coupled into each of the bus multiplexers 1002A-1002D.
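The permute-field encodings documented in the preceding discussion can be collected into a small lookup, as a software sketch of Figure 9 rather than the hardware: each entry maps a documented five-bit code to the ordering of the four 16-bit lanes (A, B, C, D) as seen across the signal processing units. Only the codes whose lane assignments are stated explicitly in the text are included; the interchange-plus-permute codes (e.g. 10100) are omitted here because their full lane ordering is given only by the figure.

```python
# Lookup of the permute-field encodings stated explicitly in the text,
# expressed as the ordering of the four 16-bit lanes (A, B, C, D) as
# seen across the signal processing units. Undocumented codes omitted.
PERMUTE = {
    0b00000: ("A", "B", "C", "D"),  # regular access, no permutation (203A)
    0b00100: ("B", "A", "C", "D"),  # swap half words of the upper lanes (203C)
    0b00001: ("A", "B", "D", "C"),  # swap half words of the lower lanes (203D)
    0b00101: ("B", "A", "D", "C"),  # swap both upper and lower half words (203E)
    0b01001: ("A", "A", "A", "A"),  # broadcast lane A to all units (203I)
    0b01100: ("B", "B", "B", "B"),  # broadcast lane B (203J)
    0b00010: ("C", "C", "C", "C"),  # broadcast lane C (203K)
    0b00011: ("D", "D", "D", "D"),  # broadcast lane D (203L)
}

def permute_lanes(field, lanes=("A", "B", "C", "D")):
    """Reorder the four 16-bit lanes per a documented permute-field value."""
    by_name = dict(zip(("A", "B", "C", "D"), lanes))
    return tuple(by_name[name] for name in PERMUTE[field])

print(permute_lanes(0b00101))  # ('B', 'A', 'D', 'C')
```

Note that, as the text emphasizes, the hardware does not move data on the bus; each signal processing unit simply selects a different 16-bit slice, which this lookup models from the units' point of view.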
The control signal buses 1011 and 1012 provide independent control of each bus multiplexer to perform the data typing, alignment, and any permutation selected for the X bus 531 and the Y bus 533 respectively into the signal processing units 300. The output SX buses 1005 and SY buses 1006 from each of the bus multiplexers 1001 and 1002 couple into the multiplexers of the adders and multipliers within the respective signal processors 300 for selection as the X and Y operands respectively.

Referring now to Figure 11, an instance of each of the bus multiplexers 1001 and 1002 is illustrated, labeled 1001 and 1002 respectively. Each instance of the bus multiplexer 1001 includes multiplexers 1101 and 1102 to multiplex data from the X bus 531 onto each SXA bus 550 and SXM bus 552 respectively within each signal processing unit 300. Each instance of the bus multiplexer 1002 includes multiplexers 1104 and 1106 to multiplex data from the Y bus 533 onto each SYA bus 554 and each SYM bus 556 respectively within each signal processing unit 300. In the preferred embodiment, the X bus 531 is sixty-four bits wide, all of which couple into the multiplexers 1101 and 1102 for selection. In the preferred embodiment, the Y bus 533 is sixty-four bits wide, all of which couple into the multiplexers 1104 and 1106 for selection. The output SXA 550 of multiplexer 1101 and the output SYA 554 of multiplexer 1104 in the preferred embodiment are each forty bits wide for coupling into the adder A1 510A and adder A2 510B. The output SXM 552 of multiplexer 1102 and the output SYM 556 of multiplexer 1106 in the preferred embodiment are each sixteen bits wide for coupling into the multiplier M1 504A. The output buses SXA 550 and SXM 552 form the SX buses 1005 illustrated in Figure 10 for each signal processing unit 300.
The output buses SYA 554 and SYM 556 form the SY buses 1006 illustrated in Figure 10 for each signal processing unit 300. The control signal bus 1011 has a control signal bus 1011A which couples into each multiplexer 1101 and a control signal bus 1011B which couples into each multiplexer 1102 for independent control of each. The control signal bus 1012 has a control signal bus 1012A which couples into each multiplexer 1104 and a control signal bus 1012B which couples into each multiplexer 1106 for independent control of each. Multiplexers 1101 and 1102 in each of the data typer and aligners 502 of each signal processing unit receive the entire data bus width of the X bus 531. Multiplexers 1104 and 1106 in each of the data typer and aligners 502 of each signal processing unit receive the entire data bus width of the Y bus 533. With all bits of each data bus being available, the multiplexers 1101, 1102, 1104, and 1106 can perform the flexible data typing, data alignment, and permutation of operands. In response to the control signals on the control signal buses 1011 and 1012, each of the multiplexers 1101, 1102, 1104, and 1106 independently picks which bits of the X bus 531 or the Y bus 533 to use for the respective operand for its respective signal processor 300, and aligns the bits into proper bit positions on the output buses SXA 550, SXM 552, SYA 554, and SYM 556 respectively for use by the sixteen bit multipliers (M1 504A) and forty bit adders (A1 510A and A2 510B). In the alignment process, the multiplexers 1101, 1102, 1104, and 1106 also insert logical zeroes and/or ones into appropriate bit positions to properly align and provide for sign and guard bit extensions.
For example, multiplexer 1101A of signal processing unit 300A may select bits 0-15 of the sixty four bits of the X bus 531 as the operand for an adder, multiplex those bits into bit positions 16-31, insert zeroes into bit positions 0-15, and sign-extend bit 31 into bit positions 32-39 to make up a forty bit operand on the SXA bus 550. To perform permutations, the multiplexers select which sixteen bits (A, B, C, or D) of the sixty four bits of the X bus and Y bus are to be received by the respective signal processing unit 300. For example, consider a broadcast of A on the Y bus 533 for a multiplication operation: each of the multiplexers 1106 for each signal processing unit 300 would select bits 0-15 (corresponding to A) from the Y bus 533 to be received by all signal processing units 300 on their respective SYM buses 556. The multiplexers 1101, 1102, 1104, and 1106, in response to appropriate control signals, automatically convert the number of data bits from the data bus into the appropriate number of data bits of an operand which the adder can utilize. Furthermore, in response to appropriate control signals, the multiplexers 1101, 1102, 1104, and 1106 select the appropriate data off the X bus and the Y bus. In order to do so, the multiplexers 1101, 1102, 1104, and 1106 in each signal processing unit operate more like cross point switches where any bit of the X or Y bus can be output into any bit of the SXA, SXM, SYA or SYM buses and logical zeroes/ones can be output into any bit of the SXA, SXM, SYA or SYM buses. In this manner the multiplexers 1101, 1102, 1104, and 1106 can perform a permute functionality and align the bits accordingly for use by a 40-bit adder or a 16-bit multiplier. Referring now to Figures 12A-12G, charts of alignment of real and imaginary flexible data types are illustrated for the sixteen bit multipliers and the forty bit adders of the preferred embodiment of the present invention.
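The alignment performed by multiplexer 1101A in the example above can be sketched in software. This is a hypothetical helper for illustration only, not the hardware itself; the function name and the field numbering (A=0 through D=3) are assumptions:

```python
def align_halfword_to_40bit(xbus: int, field: int = 0) -> int:
    """Illustrative sketch: pick one sixteen-bit field (A=0, B=1, C=2,
    D=3) off a 64-bit bus word, align it into bit positions 16-31,
    insert zeroes into bits 0-15, and sign-extend bit 31 into guard
    bit positions 32-39 to form a forty bit operand."""
    hw = (xbus >> (16 * field)) & 0xFFFF   # select the 16-bit field
    operand = hw << 16                     # align into bits 16-31
    if operand & (1 << 31):                # bit 31 carries the sign
        operand |= 0xFF << 32              # extend sign into bits 32-39
    return operand & ((1 << 40) - 1)
```

For a negative halfword such as 0x8000, the result carries the sign through the guard bits (0xFF80000000); for a positive halfword such as 0x0001, the guard bits stay zero (0x00010000).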
In each row of each chart, the data type is illustrated in the left most column, the output onto one or more of the SXA, SYA, SXM or SYM data buses is illustrated in the center column, and the right most column illustrates the equivalent signal processing configuration of the signal processors 300A-300D of a core processor 200 to perform one operation. The data type is illustrated in a vectorized format using the variable N to signify the number of vectors or times that the operand will be used. When the variable N is one, it is expected that one operation will be performed with one set of X and Y operands. When the variable N is two, it is expected that two operations will be performed together in one cycle on two sets of X and Y operands. In any case, two operand data types need to be specified and if there is a mismatch, that is, the data types do not match, data type matching needs to occur, which is discussed below with reference to Figures 13A-13C, 14, and 15. Data types of 1x4R, 1x8R, 1x16R, 1x32R, 2x4R, 2x8R, 2x16R, 1x4C, 1x8C, 1x16C, 1x32C, 2x4C, 2x8C, and 2x16C for example can all be loaded in parallel into the signal processing units across a 64-bit X and/or Y bus by being packed in four or eight sixteen-bit fields. The full bit width of the data types of 2x32R, 1x40R, and 1x40C can be loaded into the signal processing units together in one cycle if both sixty four bits of the X and Y bus are used to load two operands during the same cycle. Data types of 2x32C or a higher order may require multiple cycles to load the operands across the 64-bit X and/or Y buses. Additionally, an upper halfword (i.e. sixteen bits) of a 32 or 40 bit operand may be used to match a sixteen bit multiplier for example. In this case the lower bits may be discarded as being insignificant to the operation. Other bit widths of a halfword can be accommodated to match other hardware components of a given bit width.
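The parallel loading described above, where up to four sixteen-bit fields are packed across the 64-bit bus, can be sketched as follows. This is a hypothetical illustration; the field-to-bit-position order assumes field A occupies bits 0-15, as in the broadcast example discussed with Figure 11:

```python
def pack_halfwords(fields):
    """Illustrative sketch: pack up to four sixteen-bit data fields
    (A, B, C, D) into one 64-bit bus word, with A in bits 0-15,
    B in bits 16-31, C in bits 32-47, and D in bits 48-63."""
    bus = 0
    for k, f in enumerate(fields[:4]):
        bus |= (f & 0xFFFF) << (16 * k)   # place field k at bits 16k..16k+15
    return bus
```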
Using halfwords allows the operands of the 2x32R, 1x40R and 1x40C data types to be loaded into fewer signal processing units and avoids carry paths that might otherwise be needed. Referring now to Figure 12A, an exemplary chart of the alignment of data types 1x4R, 1x8R, 1x16R, 1x32R, and 1x40R into a forty bit adder is illustrated. The sign bit in each case, with the exception of the forty bit data type of 1x40R, is located in bit 31 of the forty bit data word and coupled into the forty bit adders. The data field in each case is from memory on the X or Y bus or from a register off a different bus. The four bit data field of a 1x4R data type from the X or Y bus is aligned into bit positions 28-31 with the sign bit in bit 31 of the SXA or SYA bus. The sign bit is included as the most significant bit (MSB) in a 4, 8, 16, or 32 bit word of an operand. Zeros are packed or inserted into the lower significant bits (LSBs) of bits 0-27 of the SXA bus or SYA bus in order to fill in. Guard bits, which contain the extended sign bit 31, are allocated to bits 32-39 of SXA or SYA. In this manner, the 1x4R data type is converted into a forty bit word which is utilized by one of the forty bit adders in a signal processing unit 300 for an addition, subtraction or a min/max operation. The eight bit data field of the 1x8R data type from the X or Y bus is aligned into bits 24-31 of SXA or SYA with a sign bit in bit 31. Zeros are packed or inserted into the LSBs of bits 0-23. Guard bits, which contain the extended sign bit 31, are allocated to bits 32-39. In this manner the 1x8R data type is converted into a forty bit word which is utilized by one of the forty bit adders in a signal processing unit 300 for an addition, subtraction or a min/max operation. For a 1x16R data type, the 16 bit data field from the X or Y bus is aligned into bits 16-31 with the sign bit being included in bit 31 onto the SXA or SYA bus.
Zeros are packed or inserted into the LSBs of bits 0-15 while guard bits are allocated to bits 32-39. In this manner the 1x16R data type is converted into a forty bit word which is utilized by one of the forty bit adders in a signal processing unit 300 for an addition, subtraction or a min/max operation. For a 1x32R data type, the thirty two bit data field from the X or Y bus is aligned into bits 0-31 with the sign bit included as bit 31. Guard bits, which contain the extended sign bit 31, are packed together into bits 32-39 to complete the forty bit word. In this manner the 1x32R data type is converted into a forty bit word which is utilized by one of the forty bit adders in a signal processing unit 300 for an addition, subtraction or a min/max operation. For a 1x40R data type, all forty bits of its data field from the X or Y bus are allocated into bits 0-39 of the SXA or SYA bus such that one adder of a signal processing unit can perform an addition, subtraction or a min/max operation using all forty bits of the data field at a time. As previously discussed, multiplexers 1101 and 1104 facilitate the conversion of the real data types into 40-bit fields for use by a forty bit adder in a signal processing unit. Each of these multiplexers will switch the data fields to the appropriate bit locations including the sign bit, fill zeros into the unused LSBs, and allocate the guard bits as necessary for the SXA bus 550 and the SYA bus 554. Referring now to Figure 12B, an exemplary chart of the alignment of the real data types 1x4R, 1x8R, 1x16R, 1x32R, and 1x40R into sixteen bit words for sixteen bit multipliers is illustrated. For a 1x4R data type, bits 0-3 of the four bit data field from the X or Y bus are aligned into bit positions 12-15 respectively of the SXM or SYM bus. Zeros are packed or inserted into the lower significant bits (LSBs) of bits 0-11 of the SXM or SYM bus in order to fill in.
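The forty-bit adder alignments of Figure 12A, described above, can be summarized in a short sketch. This is a hypothetical helper for illustration, not the hardware: the data field is left-justified so its sign bit lands in bit 31, the unused LSBs are zero-packed, and the guard bits hold the extended sign:

```python
def real_to_40bit(data: int, width: int) -> int:
    """Illustrative sketch of the Figure 12A alignments: left-justify a
    4-, 8-, 16-, or 32-bit real data field so its sign bit (MSB) lands
    in bit 31, pack zeros into the unused LSBs, and extend the sign bit
    into guard bits 32-39."""
    assert width in (4, 8, 16, 32)
    word = (data & ((1 << width) - 1)) << (32 - width)  # MSB -> bit 31
    if word & (1 << 31):                                # negative value
        word |= 0xFF << 32                              # guard bits hold sign
    return word
```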
In this manner, one data sample of the 1x4R data type is converted into a sixteen bit word which is utilized by one of the sixteen bit multipliers in a signal processing unit 300 for a multiplication or MAC operation. For a 1x8R data type, bits 0-7 of the eight bit data field from the X or Y bus are located in bits 8-15 respectively of the SXM or SYM bus with zeros packed into bits 0-7. In this manner the 1x8R data type is converted into a sixteen bit word for use by one sixteen bit multiplier of one signal processing unit 300. For a 1x16R data type, bits 0-15 of the sixteen bit data field from the X or Y bus are aligned into bits 0-15 of the SXM or SYM bus such that one signal processing unit can multiply all 16 bits at a time. For a data type of 1x32R, bits 0-31 of the data field from the X or Y bus are split into two sixteen bit half words. Bits 16-31 are aligned as an upper half word into bits 0-15 of the SXM or SYM bus of a signal processing unit 300. In one embodiment, the lower half word of bits 0-15 of the operand is discarded because it is insignificant. In this case, one signal processing unit is utilized to process the sixteen bits of information of the upper half word for each operand. In an alternate embodiment, the lower half word of bits 0-15 may be aligned into bits 0-15 of the SXM or SYM bus of another signal processing unit 300. In this case, two signal processing units are utilized in order to multiply the sixteen bits of information for each half word and the lower order signal processing unit has a carry signal path to the upper order signal processing unit in order to process the 32-bit data field. However, by using an embodiment without a carry signal path between signal processing units, processing time is reduced.
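The corresponding multiplier-side alignments of Figure 12B can be sketched similarly. This is a hypothetical illustration; the 32-bit case shows the embodiment described above that keeps only the upper half word and discards the insignificant LSBs:

```python
def real_to_16bit(data: int, width: int) -> int:
    """Illustrative sketch of the Figure 12B alignments for a sixteen
    bit multiplier: 4- and 8-bit fields are left-justified toward bit 15
    with zeros packed below; a 16-bit field passes through; a 32-bit
    field keeps only its upper half word (bits 16-31)."""
    if width == 32:
        return (data >> 16) & 0xFFFF   # discard lower half word
    assert width in (4, 8, 16)
    return (data & ((1 << width) - 1)) << (16 - width)
```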
For a data type of 1x40R, bits 0-39 of the forty bit data field from the X or Y bus in one embodiment are reduced to a sixteen bit halfword by discarding the eight most significant bits (MSBs) and the sixteen least significant bits (LSBs). In this case bits 16-31 of the forty bits of the original operand are selected as the multiply operand for one signal processing unit. As previously discussed, multiplexers 1102 and 1106 facilitate the conversion of the real data types into sixteen bit fields for use by the sixteen bit multipliers in a signal processing unit. Each of these multiplexers will switch the data fields to the appropriate bit locations and fill zeros into the unused LSBs as necessary for the SXM buses 552A/552B and the SYM buses 556A/556B. Each of the multiplexers 1102 and 1106 performs the permutation operation, the alignment operation, and zero insertion for the respective multipliers in each of the signal processing units 300A-300D. Referring now to Figure 12C, an exemplary chart of the alignment of the complex data types 1x4C, 1x8C, 1x16C, 1x32C, and 1x40C into one or more forty bit words for one or more forty bit adders is illustrated. For complex data types at least two signal processing units are utilized to perform the complex computations of the real and imaginary terms. For the forty bit adders, typically one signal processing unit receives the real data portion while another signal processing unit receives the imaginary data portion of complex data type operands. For a 1x4C data type, bits 0-3 of the real data field are aligned into bits 28-31 respectively with a sign bit in bit position 31 of a first forty bit word. Guard bits are added to bit fields 32-39 while zeros are inserted into bits 0-27 of the first forty bit word.
Similarly, bits 0-3 of the imaginary data field are aligned into bits 28-31 respectively with a sign bit in bit position 31 of a second forty bit word. Guard bits are allocated to bits 32-39 while zeros are packed into bits 0-27 of the second forty bit word. In this manner, 1x4C complex data types are converted into two forty bit words as operands for two forty bit adders in two signal processing units. For a 1x8C data type, bits 0-7 of the real data field from the X or Y bus are located into bit positions 24-31 with a sign bit in bit position 31 of a first forty bit operand on one of the SXA or SYA buses. Guard bits are allocated to bit positions 32-39 while zeros are packed into bits 0-23 of the first forty bit operand. Bits 0-7 of the imaginary data field from the X or Y bus are aligned into bits 24-31 with a sign bit in bit position 31 of a second forty bit operand on another one of the SXA or SYA buses. Guard bits, which are also initially zeroes, are allocated to bit positions 32-39 while zeros are packed into bits 0-23 of the second forty bit operand. In this manner, 1x8C complex data types are converted into two forty bit words as operands for two forty bit adders in two signal processing units. For a 1x16C data type, bits 0-15 of the real data field from the X or Y bus are aligned into bits 16-31 with a sign bit in bit position 31 for a first forty bit operand on one of the SXA or SYA buses. Guard bits are allocated to bit positions 32-39 with zeros packed into bit positions 0-15 of the first forty bit operand. Similarly, bits 0-15 of the imaginary data field from the X or Y bus are aligned into bits 16-31 including a sign bit in bit 31 for a second forty bit operand onto another one of the SXA or SYA buses. Guard bits are allocated to bit positions 32-39 and zeros are packed into bit positions 0-15 of the second forty bit operand on the SXA or SYA bus.
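The complex alignments of Figure 12C described above can be sketched for the 1x16C case. This is a hypothetical helper for illustration: each sixteen-bit field (real and imaginary) becomes its own forty-bit operand, one per signal processing unit:

```python
def complex16_to_40bit_operands(real16: int, imag16: int):
    """Illustrative sketch of the 1x16C alignment of Figure 12C: the
    real and imaginary 16-bit fields each become a forty bit operand
    (data in bits 16-31 with the sign in bit 31, zeros packed into bits
    0-15, sign extended into guard bits 32-39), one operand per
    signal processing unit."""
    def align(hw: int) -> int:
        w = (hw & 0xFFFF) << 16     # align into bits 16-31
        if w & (1 << 31):           # sign bit in bit 31
            w |= 0xFF << 32         # guard bits carry the sign
        return w
    return align(real16), align(imag16)
```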
For a 1x32C data type, bits 0-31 of the 32 bits of real data are aligned into bits 0-31 respectively with a sign bit included in bit position 31 of a first forty bit operand on one of the SXA or SYA buses. Guard bits are allocated to bit positions 32-39 for the first forty bit operand. Similarly, bits 0-31 of the imaginary data field are aligned into bit positions 0-31 with the sign bit being bit position 31 of a second forty bit operand on another of the SXA or SYA buses. Guard bits are inserted into bits 32-39 of the second forty bit operand. Thus, the 1x32C data type is converted into two forty bit operands for two forty bit adders of two signal processing units 300 for processing both the imaginary and real terms in one cycle. For a 1x40C complex data type, bits 0-39 of the real data field from the X or Y bus are aligned into bits 0-39 of a first forty bit operand on one of the SXA or SYA buses for use by one signal processing unit. Bits 0-39 of the imaginary data field from the X or Y bus are aligned into bit positions 0-39 of a second forty bit operand on another of the SXA or SYA buses for use by a second signal processing unit such that two signal processing units may be used to process both 40 bit data fields in one cycle. Referring now to Figure 12D, an exemplary chart of the alignment of the complex data types 2x16C, 2x32C, and 2x40C into four forty bit words for four forty bit adders is illustrated. In this case two sets of operands (Data 1 and Data 2) are brought in together in the same cycle having flexible bit widths. For the 2x16C complex data type, four 16-bit data fields from the X or Y bus are aligned into four forty bit operands, one for each of the signal processing units 300A-300D. Bits 0-15 of the real data field for DATA 1 from the X or Y bus are aligned into bits 16-31 respectively of a first forty bit operand including the sign bit in bit position 31 on one of the SXA or SYA buses for a first signal processing unit.
Bits 0-15 of the imaginary data field for DATA 1 from the X or Y bus are aligned into bits 16-31 respectively of a second forty bit operand including the sign bit in bit position 31 on another of the SXA or SYA buses for a second signal processing unit. Bits 0-15 of the real data field for DATA 2 from the X or Y bus are aligned into bits 16-31 respectively of a third forty bit operand including the sign bit in bit position 31 on yet another one of the SXA or SYA buses for a third signal processing unit. Bits 0-15 of the imaginary data field for DATA 2 from the X or Y bus are aligned into bits 16-31 respectively of a fourth forty bit operand including the sign bit in bit position 31 on still another of the SXA or SYA buses for a fourth signal processing unit. Zeros are packed into bit positions 0-15 and guard bits are allocated to bits 32-39 in each of the forty bit operands on the four SXA or four SYA buses as shown in Figure 12D. Thus, the 2x16C complex data type is aligned into four forty bit operands for use by four forty bit adders in four signal processing units. The 2x32C complex data type and the 2x40C complex data type are aligned into four operands similar to the 2x16C data type but have different bit alignments and insertion of zeros or allocation of guard bits. These bit alignments and zero packing/insertions and guard bit allocations are shown as illustrated in Figure 12D. In this manner two 2xSC complex data types, where S is limited by the width of the adder, can be aligned into four operands for use by four adders in four signal processing units 300 to process the complex data types in one cycle. Referring now to Figure 12E, an exemplary chart of the alignment of the complex data types 1x4C, 1x8C, 1x16C, 1x32C, and 1x40C into one or more sixteen bit words for one or more sixteen bit multipliers is illustrated.
For a 1x4C complex data type, bits 0-3 of the real data field from the X or Y bus are aligned into bits 12-15 respectively of a first sixteen bit operand on one of the SXM or SYM buses as illustrated in Figure 12E. Bits 0-3 of the imaginary data field from the X or Y bus are aligned into bits 12-15 respectively of a second sixteen bit operand on another one of the SXM or SYM buses. Bits 0-11 of each of the first and second sixteen bit operands are packed with zeros. In this manner, each complex element of a 1x4C complex data type is converted into two sixteen bit words as operands for two sixteen bit multipliers in two signal processing units. The 1x8C and 1x16C data types are similarly transformed into two sixteen bit operands as is the 1x4C but with different bit alignment as shown and illustrated in Figure 12E. The complex data types 1x4C, 1x8C, and 1x16C in Figure 12E utilize two signal processing units and align their respective data bit fields into two sixteen bit words for use by two sixteen bit multipliers in two signal processing units in one cycle. For a 1x32C complex data type with operands having bits 0-31, the upper half word of bits 16-31 of the real and imaginary parts of each operand are selected and multiplexed from the buses SXM or SYM into two sixteen bit multipliers in one embodiment while the lower half word is discarded. In an alternate embodiment, the upper half word and the lower half word for the real and imaginary parts are multiplexed into four sixteen bit multipliers for multiplication with a carry from the lower half word multiplier to the upper half word multiplier.
For a 1x40C complex data type with operands having bits 0-39, a middle half word of bits 16-31 of the real and imaginary parts of each operand is selected and multiplexed from the buses SXM or SYM into two sixteen bit multipliers in one embodiment while the upper bits 32-39 and the lower half word bits 0-15 are discarded. In an alternate embodiment, the word is separated by the multiplexers across multiple multipliers with carry from lower order multipliers to upper order multipliers for the real and imaginary terms of the complex data type. Referring now to Figure 12F, an exemplary chart of the alignment of the complex data types 2x32C or 2x40C and 2x16C into four sixteen bit words for four sixteen bit multipliers is illustrated. For 2x32C data types, bits 0-15 of the upper half word of the real data (RHWu) of a first operand on the X or Y bus are aligned into bits 0-15 respectively of a first sixteen bit operand on one of the SXM or SYM buses for a first of the signal processing units and bits 0-15 of the upper half word of the real data field of a second operand from the X or Y bus are aligned into bits 0-15 of a second sixteen bit operand on another one of the SXM or SYM buses for the first signal processing unit. Bits 0-15 of the upper half word (IHWu) of the imaginary data of the first operand on the X or Y bus are aligned into bit positions 0-15 of a third sixteen bit operand on another one of the SXM or SYM buses for a second signal processing unit and bits 0-15 of the upper half of the imaginary data of the second operand on the X or Y bus are aligned into bits 0-15 of a fourth sixteen bit operand on another one of the SXM or SYM buses for the second signal processing unit. Thus, the 2x32C complex data type uses two signal processing units and converts the 32-bit real and imaginary data fields into 16-bit operands for use by the 16-bit multipliers in two signal processing units.
For 2x16C data types, two complex operands can be specified and multiplexed as one across a sixty four bit data bus into two multipliers. In this case, bits 0-15 of the real data field of the first operand from the X or Y bus are aligned into bits 0-15 of a first sixteen bit operand on one of the SXM or SYM buses for one signal processing unit while bits 0-15 of the imaginary data of the first operand on the X or Y bus are aligned into bits 0-15 of a second sixteen bit operand on another of the SXM or SYM buses for a second signal processing unit. Bits 0-15 of the real data field of the second operand on the X or Y bus are aligned into bits 0-15 of a third sixteen bit operand for the first signal processing unit and bits 0-15 of the imaginary data field of the second operand on the X or Y bus are aligned into bits 0-15 of a fourth sixteen bit operand on another one of the SXM or SYM buses for the second signal processing unit. Thus, the 2x16C data type uses four signal processing units to process each of four sixteen bit operands in four 16-bit multipliers in one cycle. Referring now to Figures 13A, 13B and 13C, the general rule for type matching of two operands is illustrated. Generally, data type matching refers to matching two different data types of two operands together so that they can be properly processed for a given digital signal processing operation. In Figure 13A, the first operand, operand 1, has a data type of N1 by S1 real and the second operand, operand 2, has a data type of N2 by S2 real. The general rule for operand type matching of two real data types is to determine and select the maximum of N1 or N2 and the maximum of S1 or S2. Alternatively, one can determine and discard the minimum of N1 or N2 and the minimum of S1 or S2 to provide operand type matching.
Operand data type matching provides an indication of the number of signal processing units that the operands are to be processed by (the maximum of N1 or N2) and the bit width of both operands (the maximum of S1 or S2). For the different operand types the multipliers and adders of the signal processing units are provided with the best operand type match of two different operand data types in order to obtain a result. The output results from the operation performed on the disparate operands are in the form of the matched data type. Referring now to Figure 13B, both the first operand, operand 1, and the second operand, operand 2, are complex data types. The general rule for operand type matching of two complex types of operands is similar to that for matching two real data types but results in a complex data type. The operand data type matching for the complex data types is to determine and select the maximum of N1 or N2 and the maximum of S1 or S2. Referring now to Figure 13C, the first operand, operand 1, is a real data type while the second operand, operand 2, is a complex data type. The general rule for operand data type matching of a real data type and a complex data type is to select the maximum of N1 or N2 and the maximum of S1 or S2, which has a complex data type match. The maximum of N1 or N2 represents the number of signal processing units needed for processing the real or the imaginary term and the maximum of S1 or S2 represents the bit width of the operand that is to be aligned into the signal processing units.
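The general type-matching rule of Figures 13A-13C reduces to a one-line computation, sketched here as a hypothetical function: N is the vector count, S the bit width, and per Figure 13C the matched type is complex whenever either operand is complex:

```python
def match_types(n1: int, s1: int, c1: bool, n2: int, s2: int, c2: bool):
    """Illustrative sketch of the type-matching rule of Figures 13A-13C:
    the matched data type is max(N1, N2) by max(S1, S2), and is complex
    if either operand is complex."""
    return max(n1, n2), max(s1, s2), c1 or c2
```

For example, matching a 1x16R operand against a 2x32C operand yields a 2x32C matched type.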
Multiplexers 1101, 1102, 1104, and 1106 in each instance of the data typer and aligner 502 perform the data type matching between operand 1 and operand 2 from the X bus 531 or the Y bus 533 in response to appropriate multiplexer control signals. Permutation and alignment is automatically selected by the respective core processor 200 to provide the data type matching for the two operands through control of the bus multiplexers into each of the signal processing units. In addition to automatic data type matching, the present invention operationally matches the data types in response to the operation to be performed (ADD, SUB, MULT, DIVIDE, etc.), the number of functional units (adders and multipliers) and their respective bit widths in each of signal processing units 300A-300D, the bit width of the automatic data type match for the two operands, and whether real or complex data types are involved and scalar or vector functions are to be performed. Each of the signal processing units 300A-300D has two multipliers and three adders. In the preferred embodiment of the present invention, each of the multipliers is sixteen bits wide and each of the adders is forty bits wide. Multiple operands of the same data type can be easily processed after setting up nominal data types by reading new data as the new operands and repeating the multiplication, addition or other type of signal processing operation. Referring now to Figures 14, 15A and 15B, exemplary charts showing the operational matching of data types provided by the present invention are illustrated. In each of Figures 14, 15A, and 15B, a data type for a first operand is indicated along the top row and a data type for a second operand is indicated along the left most column. The matrix between the top row and the left most column in each of the figures indicates the operational matching provided by the embodiment of the present invention.
In Figure 14, an exemplary chart showing the data type matching for a multiplication operation by the multipliers of the signal processing units is illustrated. Operands having data types of four and eight bits are not illustrated in Figure 14, with it being understood that these data types are converted into sixteen bit operands. In Figure 14, the empty cells are disallowed operations for the embodiment described herein. However, if the number of signal processing units is expanded from four and the data bit width of the multipliers is expanded from sixteen bits, additional operations can be performed for other operand data type combinations. In each completed cell of Figure 14, the operation requires two cycles for a vector operation and three cycles for a real data type scalar operation. Scalar multiplication of a complex operand with another operand is not performed because two values, a real and an imaginary number, always remain as the result. Each completed cell indicates the number of signal processing units used to perform the multiplication operation. For example, a multiplication of a 1x16C operand with a 1x16C operand indicates that four signal processing units are utilized. In the case of a complex multiplication, the operands are (r1 + ji1) and (r2 + ji2) where r1 and r2 are the real terms and i1 and i2 are the imaginary terms. The result of the complex multiplication is [(r1 x r2) - (i1 x i2)] for the real term and [(r1 x i2) + (r2 x i1)] for the imaginary term. Thus, four signal processing units process the multiplication of the parentheticals together in the same cycle. The remaining add and subtract operations for the real and imaginary terms respectively are then performed in two signal processing units together on the next cycle to obtain the final results. Consider as another example a multiplication of a 1x16R operand with a 1x32C operand. In this case, Figure 14 indicates that four signal processing units are utilized.
The operands are r1 and (r2 + ji2) where r1 and r2 are real numbers and i2 is an imaginary number. The result of the operation is going to be [(r1 x r2)] for the real part of the result and [(r1 x i2)] for the imaginary part of the result. Because the complex operand is thirty two bits wide, the real and imaginary terms are split into half words. Thus the operation becomes [(r1 x r2UHW) + (r1 x r2LHW)] for the real part and [(r1 x i2UHW) + (r1 x i2LHW)] for the imaginary part, where UHW is the upper half word and LHW is the lower half word of each value respectively. Thus, each of four signal processing units performs the multiplication of the parentheticals together in one cycle while the addition of terms is performed in two signal processing units on the next cycle. Referring now to Figure 15A, an exemplary chart showing the data type matching for scalar addition by the adders of the signal processing units is illustrated. Operands having data types of four and eight bits are not illustrated in Figure 15A, with it being understood that these data types are converted into sixteen bit operands. Note that no scalar addition is performed using a complex operand due to the fact that two values, a real number and an imaginary number, always result from an operation involving a complex operand. In Figure 15A, the empty cells are disallowed operations for the embodiment described herein. However, if the number of signal processing units is expanded from four and the data bit width of the adders is expanded from forty bits, additional operations can be performed for other operand data type combinations. In each completed cell of Figure 15A, the scalar add operation can be completed in one cycle if both operands are readily available. Each completed cell indicates the number of signal processing units used to perform the scalar addition operation.
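The two-cycle complex multiplication schedule described above for two 1x16C operands can be sketched as follows. This is a hypothetical illustration of the data flow only, not the hardware:

```python
def complex_multiply_4spu(r1: int, i1: int, r2: int, i2: int):
    """Illustrative sketch of the 1x16C x 1x16C schedule: four partial
    products computed in parallel in one cycle (one per signal
    processing unit), then a subtract and an add in two units on the
    next cycle to form the real and imaginary results."""
    # cycle 1: four multipliers work in parallel
    p0, p1, p2, p3 = r1 * r2, i1 * i2, r1 * i2, r2 * i1
    # cycle 2: combine partial products
    return p0 - p1, p2 + p3   # (real term, imaginary term)
```

For (1 + 2j) x (3 + 4j) this yields a real term of -5 and an imaginary term of 10, matching (r1 x r2) - (i1 x i2) and (r1 x i2) + (r2 x i1).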
Consider for example a 1x32R operand and a 2x16R operand where r1 is the first operand being 32 bits wide and r2 and r3 are the second set of operands, each being sixteen bits wide. The chart of Figure 15A indicates that two signal processing units are utilized. The scalar result is [(r1 + r2) + (r1 + r3)]. Two signal processing units perform the addition operations in the parentheticals using their forty bit adders in one cycle while a second addition in one of the two signal processing units combines the intermediate results in a second cycle. Referring now to Figure 15B, an exemplary chart showing the data type matching for the vector addition by the adders of the signal processing units is illustrated. Operands having data types of four and eight bits are not illustrated in Figure 15B, with it being understood that these data types are converted into sixteen bit operands. In Figure 15B, the empty cells are disallowed operations for the embodiment described herein. However, if the number of signal processing units is expanded from four and the data bit width of the adders is expanded from forty bits, additional operations can be performed for other operand data type combinations. In each completed cell of Figure 15B, the vector add operation can be completed in one cycle if both operands are readily available. Each completed cell indicates the number of signal processing units used to perform the vector addition operation. Operands having complex data types can be used in performing vector addition. Consider for example a 1x16R operand and a 1x32C operand where r1 is the first operand being 16 bits wide and r2 and i2 are the second operand, each being thirty two bits wide. The chart of Figure 15B indicates that two signal processing units are utilized.
The real 1x16R operand is converted into a 1x16C complex operand with an imaginary part of zero. In one signal processing unit the real parts are added together, performing (r1 + r2), while in another signal processing unit the imaginary component i2 is added to zero, performing (0 + i2). The vector result is [(r1 + r2)] as the real component and i2 as the imaginary component. The signal processing units perform the addition operations in the parentheticals using a forty-bit adder. Consider as another example a 1x16C operand and a 1x32C operand. For the 1x16C operand, r1 and i1 are the real and imaginary parts respectively of the first operand, each sixteen bits wide, and r2 and i2 are the real and imaginary terms of the second operand, each thirty-two bits wide. The chart of Figure 15B indicates that two signal processing units are utilized. The vector result is [(r1 + r2)] as the real component and [(i1 + i2)] as the imaginary component. Two signal processing units perform the addition operations in the parentheticals using forty-bit adders. Referring now to Figure 16, a block diagram illustrating the control signal generation for the bus multiplexers included in each of the data typer and aligners of each signal processing unit is shown. Control signals provided to each of the bus multiplexers of each data typer and aligner provide selective control to perform automatic data typing and alignment and user-selected permutations. Control signals to multiplexers 1101 and 1102 of the bus multiplexer for the X bus in each of the data typer and aligners select the data type and alignment for one operand into each of the signal processing units. Control signals to multiplexers 1104 and 1106 of the bus multiplexer for the Y bus in each of the data typer and aligners select the data type and alignment for the second operand into each of the signal processing units.
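The complex vector-add examples above can be modeled as follows. This is an illustrative sketch, not the hardware: the function names are invented, and each component-wise add stands in for one forty-bit adder in a signal processing unit.

```python
def promote_real(r):
    # convert a purely real operand (e.g. 1x16R) into complex form
    # with an imaginary part of zero, as the hardware does
    return (r, 0)

def vector_complex_add(a, b):
    """Component-wise complex add of two (real, imag) operands.  Each
    component's add would run on its own signal processing unit."""
    (ra, ia), (rb, ib) = a, b
    return (ra + rb, ia + ib)
```

For the 1x16R + 1x32C example, the real operand is promoted first, so one unit computes (r1 + r2) and the other computes (0 + i2).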
Automatic data type matching is provided through control of the bus multiplexers in each signal processor in response to decoding the data type fields associated with each operand from the control register or the instruction itself. The resultant operands output from each of the bus multiplexers in each signal processing unit are coupled into the multiplexer 514A of the multiplier 504A, multiplexer 520A of adder 510A, and multiplexer 520B of adder 510B in each signal processing unit as illustrated in Figure 5B. In Figure 16, one or more DSP instructions 1600 are coupled into an instruction predecoder 1602. The instruction predecoder 1602 may include one or more control registers ("CR") 1604 which include a data type field and a permute field to inform the predecoder 1602 of the data type of the operands and how they are to be read into each of the signal processing units 300 (SP0 300A, SP1 300B, SP2 300C, and SP3 300D). The one or more DSP instructions 1600, directly or indirectly through the one or more control registers 1604, indicate each data type for two operands in two data type fields and any permutation of the data bus in two permute fields. The instruction predecoder 1602 automatically determines the best data type match by comparing the two data types for each operand. The instruction predecoder 1602 also reads the permute fields of each operand. In response to the permute fields and the data types of each operand, the instruction predecoder 1602 generates predecoded control signals 1606 for data typing multiplexing control. The predecoded control signals 1606 are accordingly for the control of the bus multiplexers 1001 and 1002 in each data typer and aligner 502 (data typer and aligner 502A, 502B, 502C, and 502D) in each signal processing unit 300.
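The predecoder's automatic matching step can be modeled in software. The promotion rules here (4- and 8-bit operands widened to sixteen bits, then the narrower operand promoted to the wider operand's width) follow the text and the charts of Figures 15A and 15B; the function names and table form are illustrative assumptions, not the hardware's decode logic.

```python
def match_data_types(width_a, width_b):
    """Return the common width two operands are promoted to before an
    operation, per the predecoder's automatic data type matching."""
    def widen(w):
        # 4- and 8-bit data types are converted into 16-bit operands
        return 16 if w in (4, 8) else w
    wa, wb = widen(width_a), widen(width_b)
    # the narrower operand is promoted to the wider operand's width
    return max(wa, wb)
```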
These predecoded control signals are coupled into the final decoders 1610A in each signal processing unit to generate the multiplexer control signals 1011 and 1012 respectively for each bus multiplexer 1001 and 1002 of each data typer and aligner 502 in each signal processing unit 300. The instruction predecoder 1602 further generates predecoded control signals for other multiplexers 1620B, 1620C through 1620N of each signal processing unit 300. Final decoders 1610B, 1610C through 1610N receive the predecoded control signals to generate the multiplexer control signals for each of the multiplexers 1620B, 1620C through 1620N of each signal processing unit 300. In this manner, the operands on the X bus and the Y bus can be aligned, matched, permuted and selected for performing a digital signal processing operation. As those of ordinary skill will recognize, the present invention has many advantages. One advantage of the present invention is that operands of various data types for different digital signal processing applications can be processed in the application specific signal processor of the present invention. Another advantage of the present invention is that automatic data type matching is provided. Another advantage of the present invention is that operands can be automatically permuted through use of a permute field in an instruction or control register, so that additional instructions to perform a desired permutation to the signal processors are unnecessary. Another advantage of the present invention is that the data type capabilities of the signal processing units can be easily expanded by adding additional signal processing units. The preferred embodiments of the present invention are thus described.
While certain exemplary embodiments of the present invention have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art. For example, the present invention of data typing and aligning has been described with reference to memory access registers for accessing operands from memory, but operands can be accessed from registers and can also be appropriately data typed and aligned by the present invention. While a 16-bit multiplier is utilized in the preferred embodiment of the invention, multipliers having larger bit widths may also be utilized and provide greater data type flexibility. Additionally, the data bus between the data memory and the signal processing units may be increased in size from 64-bits to 80-bits, for example, and provide greater data type flexibility. Furthermore, additional signal processing units may be provided such that larger bit widths of operands or a greater number of operands for processing together in a cycle may also be accommodated. Additionally, the present invention may be implemented in hardware, software, firmware or a combination thereof and utilized in systems, subsystems, components or sub-components thereof. When implemented in software, the elements of the present invention are essentially the code segments to perform the necessary tasks. The program or code segments can be stored in a processor readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link. The "processor readable medium" may include any medium that can store or transfer information.
Examples of the processor readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette, a CD-ROM, an optical disk, a hard disk, a fiber optic medium, a radio frequency (RF) link, etc. The computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic, RF links, etc. The code segments may be downloaded via computer networks such as the Internet, Intranet, etc. In any case, the present invention should not be construed as limited by such embodiments, but rather construed according to the claims that follow below.
A system and method is disclosed and includes an execution unit that can be used to count the leading zeros in a data word. During operation, the execution unit can receive a data word that has a width of 2 to the Nth power. Further, the execution unit can sign extend the data word to a temporary data word that has a width of 2 to the Mth power, wherein M is greater than N. The temporary data word can be input to a counter that has a width of 2 to the Mth power and the counter can count the leading zeros within the temporary data word to get a result.
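For the concrete case used throughout the disclosure, N = 5 (a 32-bit word) and M = 6 (a 64-bit counter), the scheme can be sketched as follows. This is an illustrative software model of the described method, not hardware code; the function names are invented.

```python
def clz64(x):
    # leading zeros of x treated as an unsigned 64-bit value
    return 64 - x.bit_length()

def sign_extend_32_to_64(word32):
    # replicate bit 31 into the upper thirty-two bits
    if word32 & 0x80000000:
        return word32 | 0xFFFFFFFF00000000
    return word32

def clz32_via_64bit_counter(word32):
    ext = sign_extend_32_to_64(word32)
    n = clz64(ext)             # count on the 64-bit-wide counter
    return n - 32 if n else 0  # subtract 32, provided the count is not zero
```

The "provided the count is not zero" guard handles negative words: sign extension fills the upper half with ones, the 64-bit count is zero, and the true 32-bit count is also zero.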
1. A method of processing a data word, the method comprising: receiving the data word; determining whether the data word is a thirty-two bit data word or a sixty-four bit data word; and after determining that the data word is a thirty-two bit data word, sign extending the thirty-two bit data word to create a temporary sixty-four bit data word.

2. The method of claim 1, further comprising determining whether a leading zeros value or a leading ones value is to be determined.

3. The method of claim 2, further comprising communicating the temporary sixty-four bit data word to a bit counter having a width of sixty-four bits after determining that the leading zeros value is to be determined.

4. The method of claim 3, further comprising counting the leading zeros within the temporary sixty-four bit data word to generate a sign extended leading zeros count.

5. The method of claim 4, further comprising subtracting a fixed value of thirty-two from the sign extended leading zeros count, provided the count is not zero, to generate a determined leading zeros count.

6. The method of claim 5, further comprising writing the determined leading zeros count to a register.

7. The method of claim 2, further comprising inverting the temporary sixty-four bit data word to create an inverted sixty-four bit data word when a leading ones value is to be determined.

8. The method of claim 7, further comprising communicating the inverted temporary sixty-four bit data word to a bit counter that is sixty-four bits wide.

9. The method of claim 8, further comprising counting the leading zeros within the inverted temporary sixty-four bit data word to generate a sign extended leading ones count.

10. The method of claim 9, further comprising subtracting the fixed value of thirty-two from the sign extended leading ones count, provided the count is not zero, to generate a determined leading ones count.

11. The method of claim 10, further comprising writing the determined leading ones count to a register.
12. The method of claim 1, further comprising determining whether a leading zeros value or a leading ones value is to be used, after determining that the data word is a sixty-four bit data word.

13. The method of claim 12, further comprising: communicating the sixty-four bit data word to a bit counter having a width of sixty-four bits after determining that the leading zeros value is to be used; counting the leading zeros within the sixty-four bit data word to generate a determined leading zeros count; and writing the determined leading zeros count to a register.

14. The method of claim 12, further comprising: inverting the sixty-four bit data word to create an inverted sixty-four bit data word when a leading ones value is to be used; communicating the inverted sixty-four bit data word to a sixty-four bit counter; counting the leading zeros within the inverted sixty-four bit data word to generate a determined leading ones count; and writing the determined leading ones count to a register.

15. A method comprising using a sixty-four bit logic counter to count zero or more leading zeros within a thirty-two bit data word.

16. The method of claim 15, further comprising: receiving the thirty-two bit data word; and sign extending the thirty-two bit data word to create a temporary sixty-four bit data word.

17. The method of claim 16, further comprising counting the leading zeros within the temporary sixty-four bit data word to obtain an interim result.

18. The method of claim 17, further comprising subtracting a fixed value from the interim result, provided the count is not zero, to obtain a final result.

19. The method of claim 18, further comprising writing the final result to a register as a leading zeros value.

20. The method of claim 16, further comprising inverting the temporary sixty-four bit data word to generate an inverted temporary sixty-four bit data word.
21. The method of claim 20, further comprising counting the leading zeros of the inverted temporary sixty-four bit data word to obtain an interim result.

22. The method of claim 21, further comprising subtracting the fixed value from the interim result, provided the count is not zero, to obtain a final result.

23. The method of claim 22, wherein the fixed value is thirty-two.

24. The method of claim 23, further comprising writing the final result to a register as a leading ones value.

25. An instruction execution unit for a digital signal processor, the instruction execution unit comprising: at least one control module; at least one sign extender coupled to the at least one control module; at least one inverter coupled to the at least one control module; and at least one sixty-four bit wide bit counter coupled to the at least one control module, wherein the at least one control module includes: logic to instruct the sixty-four bit wide bit counter to count leading zeros within one or more thirty-two bit data words received at the instruction execution unit; and logic to instruct the sixty-four bit wide bit counter to count leading zeros within one or more sixty-four bit data words received at the instruction execution unit.

26. The instruction execution unit of claim 25, wherein the control module further comprises logic to control the sign extender to sign extend the one or more thirty-two bit data words to create a temporary sixty-four bit data word.

27. The instruction execution unit of claim 26, wherein the control module further comprises logic to instruct the sixty-four bit wide bit counter to count the leading zeros within the temporary sixty-four bit data word to obtain an interim leading zeros count.

28. The instruction execution unit of claim 27, wherein the control module further comprises logic to subtract a fixed value from the interim leading zeros count, provided the count is not zero, to obtain a final leading zeros count.
29. The instruction execution unit of claim 26, wherein the control module further comprises logic to control the inverter to invert the temporary sixty-four bit data word to yield an inverted temporary sixty-four bit data word.

30. The instruction execution unit of claim 29, wherein the control module further comprises logic to instruct the sixty-four bit wide bit counter to count the leading zeros of the inverted temporary sixty-four bit data word to obtain an interim leading ones count.

31. The instruction execution unit of claim 30, wherein the control module further comprises logic to subtract a fixed value from the interim leading ones count, provided the count is not zero, to obtain a final leading ones count.

32. A digital signal processor, comprising: a memory; a sequencer responsive to the memory; a register file coupled to the memory; and an instruction execution unit responsive to the sequencer, wherein the instruction execution unit comprises: at least one control module; at least one sign extender coupled to the control module; at least one inverter coupled to the control module; and at least one sixty-four bit logic counter coupled to the control module, wherein the at least one control module includes: logic to control the sixty-four bit wide bit counter to count leading zeros within one or more thirty-two bit data words; and logic to control the sixty-four bit wide bit counter to count leading zeros within one or more sixty-four bit data words.
33. A portable communication device, comprising: a digital signal processor, wherein the digital signal processor comprises: a memory; a sequencer responsive to the memory; a register file coupled to the memory; and an instruction execution unit responsive to the sequencer, wherein the instruction execution unit comprises: a control module; a sign extender coupled to the control module; an inverter coupled to the control module; and a sixty-four bit wide bit counter coupled to the control module, wherein the control module includes: logic to control the sixty-four bit wide bit counter to count leading zeros within one or more thirty-two bit data words; and logic to control the sixty-four bit wide bit counter to count leading zeros within one or more sixty-four bit data words.

34. The portable communication device of claim 33, further comprising: an analog baseband processor coupled to the digital signal processor; a stereo audio coder/decoder (CODEC) coupled to the analog baseband processor; a radio frequency (RF) transceiver coupled to the analog baseband processor; an RF switch coupled to the RF transceiver; and an RF antenna coupled to the RF switch.

35. The portable communication device of claim 33, further comprising: a voice coder/decoder (CODEC) coupled to the digital signal processor; a Bluetooth controller coupled to the digital signal processor; a Bluetooth antenna coupled to the Bluetooth controller; a wireless local area network media access control (WLAN MAC) baseband processor coupled to the digital signal processor; an RF transceiver coupled to the WLAN MAC baseband processor; and an RF antenna coupled to the RF transceiver.
36. The portable communication device of claim 33, further comprising: a stereo coder/decoder (CODEC) coupled to the digital signal processor; an 802.11 controller coupled to the digital signal processor; an 802.11 antenna coupled to the 802.11 controller; a Bluetooth controller coupled to the digital signal processor; a Bluetooth antenna coupled to the Bluetooth controller; a universal serial bus (USB) controller coupled to the digital signal processor; and a USB port coupled to the USB controller.

37. A processor device, comprising: means for receiving a thirty-two bit data word; means for sign extending the thirty-two bit data word to create a temporary sixty-four bit data word; means for counting the leading zeros within the temporary sixty-four bit data word to obtain an interim leading zeros count; and means for subtracting a value from the interim leading zeros count, provided the count is not zero, to obtain a final leading zeros count.

38. A processor device, comprising: means for receiving a thirty-two bit data word; means for sign extending the thirty-two bit data word to create a temporary sixty-four bit data word; means for inverting the temporary sixty-four bit data word to create an inverted temporary sixty-four bit data word; means for counting the leading zeros within the temporary sixty-four bit data word to obtain an interim leading ones count; and means for subtracting a value from the interim leading ones count, provided the count is not zero, to obtain a final leading ones count.

39. A processor device, comprising: means for receiving a data word; means for determining whether the data word is a thirty-two bit data word or a sixty-four bit data word; and means for sign extending a thirty-two bit data word to create a temporary sixty-four bit data word.
40. A method of processing a data word, comprising: receiving a data word having a width of 2 to the Nth power; sign extending the data word to a temporary data word having a width of 2 to the Mth power; and inputting the temporary data word to a counter having a width of 2 to the Mth power.

41. The method of claim 40, further comprising counting the leading zeros within the temporary data word to get a result.

42. The method of claim 41, further comprising setting a count equal to zero when the result is zero.

43. The method of claim 41, further comprising subtracting a value equal to 2 to the Mth power minus 2 to the Nth power from the result to get a count.

44. The method of claim 40, further comprising counting the leading zeros within the temporary data word to get a result having M+1 bits, wherein the result includes a bit zero as a least significant bit, a bit M as a most significant bit, and a bit N between the bit zero and the bit M.

45. The method of claim 44, further comprising: copying bit M to the location of bit N; and replacing bit M through bit N+1 with zero.

46. A processor device, comprising: means for receiving a data word having a width of 2 to the Nth power; means for sign extending the data word to a temporary data word having a width of 2 to the Mth power; and means for inputting the temporary data word to a counter having a width of 2 to the Mth power.

47. The device of claim 46, further comprising means for counting the leading zeros within the temporary data word to get a result.

48. The device of claim 47, further comprising means for setting a count equal to zero when the result is zero.

49. The device of claim 47, further comprising means for subtracting a value equal to 2 to the Mth power minus 2 to the Nth power from the result to get a count.
50. The device of claim 46, further comprising means for counting the leading zeros within the temporary data word to get a result having M+1 bits, wherein the result includes a bit zero as a least significant bit, a bit M as a most significant bit, and a bit N between the bit zero and the bit M.

51. The device of claim 50, further comprising: means for copying bit M to the location of bit N; and means for replacing bit M through bit N+1 with zero.

52. An audio file player, comprising: a digital signal processor; an audio coder/decoder (CODEC) coupled to the digital signal processor; a multimedia card coupled to the digital signal processor; and a universal serial bus (USB) port coupled to the digital signal processor, wherein the digital signal processor includes: a memory; a sequencer responsive to the memory; a register file coupled to the memory; and an instruction execution unit responsive to the sequencer, wherein the instruction execution unit comprises: a control module; a sign extender coupled to the control module; an inverter coupled to the control module; and a sixty-four bit wide bit counter coupled to the control module, wherein the control module includes: logic to control the sixty-four bit wide bit counter to count leading zeros within one or more thirty-two bit data words; and logic to control the sixty-four bit wide bit counter to count leading zeros within one or more sixty-four bit data words.
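The bit manipulation recited above — copying bit M of the (M+1)-bit raw count to the location of bit N, then replacing bit M through bit N+1 with zero — implements the subtraction of 2 to the Mth power minus 2 to the Nth power without an adder. The sketch below is an illustrative model with invented helper names, not claimed hardware.

```python
def adjust_count(raw, m, n):
    """Convert the raw leading-zero count of a 2**m-wide sign-extended
    word (an (m+1)-bit value) into the count for the original 2**n-wide
    word: copy bit m into bit n, then zero bits m down to n+1.  For the
    valid raw values (0, or 2**m - 2**n through 2**m) this equals
    subtracting 2**m - 2**n when raw is nonzero."""
    bit_m = (raw >> m) & 1
    raw &= ~(1 << n)                    # clear bit n
    raw |= bit_m << n                   # copy bit m into the location of bit n
    return raw & ((1 << (n + 1)) - 1)   # replace bits m..n+1 with zero
```

For the 32-in-64 case (m = 6, n = 5) this reproduces the fixed subtraction of thirty-two: a raw count of 64 maps to 32, 63 maps to 31, and 0 stays 0.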
CA 02613404 2007-12-21 WO 2007/002802 PCT/US2006/025300 SYSTEM AND METHOD OF COUNTING LEADING ZEROS AND COUNTING LEADING ONES IN A DIGITAL SIGNAL PROCESSOR BACKGROUND I. Field [0001] The present disclosure generally relates to digital signal processors and devices that use such processors. More particularly, the disclosure relates to components within a digital signal processor that count leading zeros or count leading ones within data words. II. Description of Related Art [0002] Advances in technology have resulted in smaller and more powerful personal computing devices. For example, there currently exist a variety of portable personal computing devices, including wireless computing devices, such as portable wireless telephones, personal digital assistants (PDAs), and paging devices that are small, lightweight, and easily carried by users. More specifically, portable wireless telephones, such as cellular telephones and IP telephones, can communicate voice and data packets over wireless networks. Further, many such wireless telephones include other types of devices that are incorporated therein. For example, a wireless telephone can also include a digital still camera, a digital video camera, a digital recorder, and an audio file player. Also, such wireless telephones can include a web interface that can be used to access the Internet. As such, these wireless telephones include significant computing capabilities. [0003] Some of the programs that provide the functionality of the different devices incorporated within a wireless telephone include instructions that call for a leading zeros count or a leading ones count for particular data words. Typically, multiple data word sizes are used with different programs. As such, multiple hardware components can be used to count the leading zeros and leading ones within the different data words.
[0004] Accordingly it would be advantageous to provide an improved system and method for counting leading zeros and counting leading ones within a digital signal processor. SUMMARY [0005] A method of processing a data word is disclosed and includes receiving the data word and determining whether the data word is a thirty-two bit data word or a sixty-four bit data word. Moreover, the method includes sign extending the thirty-two bit data word to create a temporary sixty-four bit data word after determining that the data word is a thirty-two bit data word. [0006] In a particular embodiment, the method can include determining whether a leading zeros value or a leading ones value is to be determined. Also, in a particular embodiment, the method can include communicating the temporary sixty-four bit data word to a bit counter having a width of sixty-four bits after determining that the leading zeros value is to be determined. Further, in a particular embodiment, the method can include counting the leading zeros within the temporary sixty-four bit data word to generate a sign extended leading zeros count, subtracting a fixed value of thirty-two from the sign extended leading zeros count, provided the count is not zero, to generate a determined leading zeros count, and writing the determined leading zeros count to a register. [0007] In another particular embodiment, the method can include inverting the temporary sixty-four bit data word to create an inverted sixty-four bit data word when a leading ones value is to be determined.
Also, in a particular embodiment, the method can include communicating the inverted temporary sixty-four bit data word to a bit counter with a width of sixty-four bits, counting the leading zeros within the inverted temporary sixty-four bit data word to generate a sign extended leading ones count, subtracting the fixed value of thirty-two from the sign extended leading ones count, provided the count is not zero, to generate a determined leading ones count, and writing the determined leading ones count to a register. [0008] In yet another particular embodiment, the method can further include determining whether a leading zeros value or a leading ones value is to be used, after determining that the data word is a sixty-four bit data word. Additionally, in a particular embodiment, the method can include communicating the sixty-four bit data word to a bit counter with a width of sixty-four bits after determining that the leading zeros value is to be used, counting the leading zeros within the sixty-four bit data word to generate a determined leading zeros count, and writing the determined leading zeros count to a register. [0009] In still another particular embodiment, the method can include inverting the sixty-four bit data word to create an inverted sixty-four bit data word when a leading ones value is to be used, communicating the inverted sixty-four bit data word to a sixty-four bit counter, counting the leading zeros within the inverted sixty-four bit data word to generate a determined leading ones count, and writing the determined leading ones count to a register. [0010] In another embodiment, a method is disclosed and can include using a bit counter with a width of sixty-four bits to count one or more leading zeros within a thirty-two bit data word.
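The leading-ones path described above reuses the same counter by inverting the sign-extended word first, since the leading ones of a word are the leading zeros of its inverse. A self-contained illustrative sketch (function names invented):

```python
MASK64 = (1 << 64) - 1

def clz64(x):
    # leading zeros of x treated as an unsigned 64-bit value
    return 64 - x.bit_length()

def sign_extend_32_to_64(word32):
    # replicate bit 31 into the upper thirty-two bits
    return word32 | 0xFFFFFFFF00000000 if word32 & 0x80000000 else word32

def clo32_via_64bit_counter(word32):
    # invert the temporary sixty-four bit data word, then count leading zeros
    inverted = ~sign_extend_32_to_64(word32) & MASK64
    n = clz64(inverted)
    return n - 32 if n else 0  # subtract the fixed value of thirty-two
```

Note the guard mirrors the leading-zeros path: a word with bit 31 clear inverts to a word with thirty-two leading ones, the 64-bit count is zero, and the true leading-ones count is also zero.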
[0011] In yet another embodiment, an instruction execution unit for a digital signal processor is disclosed and can include a control module, a sign extender that is coupled to the control module, an inverter that is coupled to the control module, and a bit counter with a width of sixty-four bits that is coupled to the control module. In this embodiment, the control module can include logic to instruct the bit counter to count leading zeros within one or more thirty-two bit data words received at the instruction execution unit and logic to instruct the sixty-four bit logic counter to count leading zeros within one or more sixty-four bit data words received at the instruction execution unit. [0012] In still another embodiment, a digital signal processor is provided and includes a memory, a sequencer that is responsive to the memory, a register file that is coupled to the memory, and an instruction execution unit that is responsive to the sequencer. In this embodiment, the instruction execution unit can include a control module, a sign extender that is coupled to the control module, an inverter that is coupled to the control module, and a bit counter with a width of sixty-four bits that is coupled to the control module. In this embodiment, the control module can include logic to control the sixty-four bit logic counter to count leading zeros within one or more thirty-two bit data words and logic to control the bit counter with a width of sixty-four bits to count leading zeros within one or more sixty-four bit data words. [0013] In yet still another embodiment, a portable communication device is disclosed and includes a digital signal processor. In this embodiment, the digital signal processor can include a memory, a sequencer that is responsive to the memory, a register file that is coupled to the memory, and an instruction execution unit that is responsive to the sequencer.
In this embodiment, the instruction execution unit can include a control module, a sign extender that is coupled to the control module, an inverter that is coupled to the control module, and a bit counter with a width of sixty-four bits that is coupled to the control module. In this embodiment, the control module can include logic to control the sixty-four bit logic counter to count leading zeros within one or more thirty-two bit data words and logic to control the sixty-four bit logic counter to count leading zeros within one or more sixty-four bit data words. [0014] In still yet another embodiment, a processor device is disclosed and includes means for receiving a thirty-two bit data word, means for sign extending the thirty-two bit data word to create a temporary sixty-four bit data word, means for counting the leading zeros within the temporary sixty-four bit data word to obtain an interim leading zeros count, and means for subtracting a value from the interim leading zeros count, provided the count is not zero, to obtain a final leading zeros count. [0015] In another embodiment, a processor device is disclosed and includes means for receiving a thirty-two bit data word, means for sign extending the thirty-two bit data word to create a temporary sixty-four bit data word, means for inverting the temporary sixty-four bit data word to create an inverted temporary sixty-four bit data word, means for counting the leading zeros within the temporary sixty-four bit data word to obtain an interim leading ones count, and means for subtracting a value from the interim leading ones count, provided the count is not zero, to obtain a final leading ones count.
[0016] In yet another embodiment, a processor device is disclosed and includes means for receiving a data word, means for determining whether the data word is a thirty-two bit data word or a sixty-four bit data word, and means for sign extending a thirty-two bit data word to create a temporary sixty-four bit data word. [0017] In still another embodiment, a method of processing a data word is disclosed and includes receiving a data word having a width of 2 to the Nth power. The method further includes sign extending the data word to a temporary data word having a width of 2 to the Mth power and inputting the temporary data word to a counter having a width of 2 to the Mth power. [0018] In yet still another embodiment, a processor device is disclosed and includes means for receiving a data word having a width of 2 to the Nth power, means for sign extending the data word to a temporary data word having a width of 2 to the Mth power, and means for inputting the temporary data word to a counter having a width of 2 to the Mth power. [0019] In another embodiment, an audio file player is disclosed and includes a digital signal processor, an audio coder/decoder (CODEC) that is coupled to the digital signal processor, a multimedia card that is coupled to the digital signal processor, and a universal serial bus (USB) port that is coupled to the digital signal processor. In this embodiment, the digital signal processor includes a memory, a sequencer that is responsive to the memory, a register file that is coupled to the memory, and an instruction execution unit that is responsive to the sequencer. The instruction execution unit can include a control module, a sign extender that is coupled to the control module, an inverter that is coupled to the control module, and a sixty-four bit wide bit counter that is also coupled to the control module.
In this embodiment, the control module includes logic to control the sixty-four bit wide bit counter to count leading zeros within one or more thirty-two bit data words. Also, the control module can include logic to control the sixty-four bit wide bit counter to count leading zeros within one or more sixty-four bit data words. [0020] An advantage of one or more embodiments disclosed herein can include using the same resource to count leading zeros for different data word sizes. [0021] Another advantage can include using the same resource to count leading ones for different data word sizes. [0022] Still another advantage can include substantially reducing the hardware necessary to count leading zeros and to count leading ones. [0023] Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and the Claims. BRIEF DESCRIPTION OF THE DRAWINGS [0024] The aspects and the attendant advantages of the embodiments described herein will become more readily apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings wherein: [0025] FIG. 1 is a general diagram of an exemplary digital signal processor; [0026] FIG. 2 is a diagram illustrating an exemplary instruction execution unit; [0027] FIG. 3 is a flow chart illustrating a method of counting leading zeros within a data word; [0028] FIG. 4 is a flow chart illustrating another method of counting leading zeros within a data word; [0029] FIG. 5 is a diagram illustrating an exemplary sixty-four bit data word and an exemplary thirty-two bit data word that is sign extended by thirty-two bits; [0030] FIG.
6 is a flow chart illustrating a method of counting leading zeros and counting leading ones within sixty-four bit data words and thirty-two bit data words; [0031] FIG. 7 is a diagram illustrating a detailed interleaved multithreading operation of the digital signal processor shown in FIG. 1; [0032] FIG. 8 is a general diagram of a portable communication device incorporating a digital signal processor; [0033] FIG. 9 is a general diagram of an exemplary cellular telephone incorporating a digital signal processor; [0034] FIG. 10 is a general diagram of an exemplary wireless Internet Protocol telephone incorporating a digital signal processor; [0035] FIG. 11 is a general diagram of an exemplary portable digital assistant incorporating a digital signal processor; and [0036] FIG. 12 is a general diagram of an exemplary audio file player incorporating a digital signal processor. DETAILED DESCRIPTION [0037] FIG. 1 illustrates a block diagram of an exemplary, non-limiting embodiment of a digital signal processor (DSP) 100. As illustrated in FIG. 1, the DSP 100 includes a memory 102 that is coupled to a sequencer 104 via a first bus 106. As used herein, the word coupled can indicate that two or more components are directly coupled or indirectly coupled. In a particular embodiment, the first bus 106 is a sixty-four (64) bit bus and the sequencer 104 is configured to retrieve instructions from the memory 102 having a length of thirty-two (32) bits or sixty-four (64) bits. The first bus 106 is coupled to a first instruction execution unit 108, a second instruction execution unit 110, a third instruction execution unit 112, and a fourth instruction execution unit 114. FIG. 1 indicates that each instruction execution unit 108, 110, 112, 114 can be coupled to a general register file 116 via a second bus 118. The general register file 116 can also be coupled to the sequencer 104 and the memory 102 via a third bus 120.
[0038] In a particular embodiment, the memory 102 includes a first instruction cache 122, a second instruction cache 124, a third instruction cache 126, a fourth instruction cache 128, a fifth instruction cache 130, and a sixth instruction cache 132. During operation, the instruction caches 122, 124, 126, 128, 130, 132 can be accessed independently of each other by the sequencer 104. Additionally, in a particular embodiment, each instruction cache 122, 124, 126, 128, 130, 132 includes a plurality of instructions. [0039] As illustrated in FIG. 1, the memory 102 can include an instruction queue 134 that includes an instruction queue for each instruction cache 122, 124, 126, 128, 130, 132. In particular, the instruction queue 134 includes a first instruction queue 136 that is associated with the first instruction cache 122, a second instruction queue 138 that is associated with the second instruction cache 124, a third instruction queue 140 that is associated with the third instruction cache 126, a fourth instruction queue 142 that is associated with the fourth instruction cache 128, a fifth instruction queue 144 that is associated with the fifth instruction cache 130, and a sixth instruction queue 146 that is associated with the sixth instruction cache 132. [0040] During operation, the sequencer 104 can fetch instructions from each instruction cache 122, 124, 126, 128, 130, 132 via the instruction queue 134. In a particular embodiment, the sequencer 104 fetches instructions from the instruction queues 136, 138, 140, 142, 144, 146 in order from the first instruction queue 136 to the sixth instruction queue 146. After fetching an instruction from the sixth instruction queue 146, the sequencer 104 returns to the first instruction queue 136 and continues fetching instructions from the instruction queues 136, 138, 140, 142, 144, 146 in order.
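The round-robin fetch order described in paragraph [0040] can be sketched as follows. This Python model is illustrative only: the queue identifiers mirror the reference numerals of FIG. 1, the instruction strings are hypothetical, and the choice to simply skip an empty queue is an assumption of the sketch, not a statement of the disclosure:

```python
from collections import deque
from itertools import cycle

# Six instruction queues, one per instruction cache (numbering follows FIG. 1).
queues = {n: deque() for n in (136, 138, 140, 142, 144, 146)}
queues[136].extend(["load r0", "add r0, r1"])   # hypothetical instructions
queues[140].append("mul r2, r3")

def fetch_round_robin(queues, count):
    """Visit queues 136..146 in ascending order, wrapping back to 136.

    Returns up to `count` (queue_id, instruction) pairs; an empty queue is
    skipped (an assumption of this sketch).
    """
    fetched = []
    for qid in cycle(sorted(queues)):
        if len(fetched) == count or not any(queues.values()):
            break
        if queues[qid]:
            fetched.append((qid, queues[qid].popleft()))
    return fetched
```

With the example contents above, three fetches visit queue 136, skip the empty queues, take from queue 140, wrap, and return to queue 136 for its second instruction.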
[0041] In a particular embodiment, the sequencer 104 operates in a first mode as a 2-way superscalar sequencer that supports superscalar instructions. Further, in a particular embodiment, the sequencer also operates in a second mode that supports very long instruction word (VLIW) instructions. In particular, the sequencer can operate as a 4-way VLIW sequencer. In a particular embodiment, the first instruction execution unit 108 can execute a load instruction, a store instruction, and an arithmetic logic unit (ALU) instruction. The second instruction execution unit 110 can execute a load instruction and an ALU instruction. Also, the third instruction execution unit 112 can execute a multiply instruction, a multiply-accumulate instruction (MAC), an ALU instruction, a program redirect construct, and a transfer register (CR) instruction. FIG. 1 further indicates that the fourth instruction execution unit 114 can execute a shift (S) instruction, an ALU instruction, a program redirect construct, and a CR instruction. FIG. 2 shows details of the components that can be included within the fourth instruction execution unit 114. In a particular embodiment, the program redirect construct can be a zero overhead loop, a branch instruction, a jump (J) instruction, etc. [0042] As depicted in FIG. 1, the general register 116 includes a first unified register file 148, a second unified register file 150, a third unified register file 152, a fourth unified register file 154, a fifth unified register file 156, and a sixth unified register file 158. Each unified register file 148, 150, 152, 154, 156, 158 corresponds to an instruction cache 122, 124, 126, 128, 130, 132 within the memory 102. Further, in a particular embodiment, each unified register file 148, 150, 152, 154, 156, 158 has the same construction and includes a number of data operands and a number of address operands.
[0043] During operation of the digital signal processor 100, instructions can be fetched from the memory 102 by the sequencer 104 and operands can be fetched from the unified register files 148, 150, 152, 154, 156, 158. Moreover, instructions and operands can be sent to designated instruction execution units 108, 110, 112, 114, and executed at the instruction execution unit 108, 110, 112, 114. Further, one or more operands are retrieved from the general register 116, e.g., one of the unified register files 148, 150, 152, 154, 156, 158, and used during the execution of the instructions. The results at each instruction execution unit 108, 110, 112, 114 can be written to the general register 116, i.e., to one of the unified register files 148, 150, 152, 154, 156, 158. [0044] Referring to FIG. 2, an exemplary, non-limiting embodiment of an instruction execution unit is shown and is generally designated 200. In a particular embodiment, the instruction execution unit 200 can be incorporated into the system 100 shown in FIG. 1. For example, the instruction execution unit 200 can replace the fourth instruction execution unit 114 shown in FIG. 1. As depicted in FIG. 2, the instruction execution unit 200 includes a sign extender 202. Moreover, as shown, an inverter 204 can be coupled to the sign extender 202. Also, a counting module 206 can be coupled to the inverter 204. In a particular embodiment, the counting module 206 includes a sixty-four bit counter. [0045] FIG. 2 also indicates that a control module 208 can be coupled to the sign extender 202, the inverter 204, and the counting module 206. In a particular embodiment, the instruction execution unit 200 can receive a plurality of instructions 210, e.g., sixty-four bit instructions and thirty-two bit instructions. Also, in an illustrative embodiment, the instructions 210 can be stored within one of the instruction queues 136, 138, 140, 142, 144, 146 (FIG.
1) and directed to the execution unit 200 via the sequencer 104 (FIG. 1). Further, the instruction execution unit 200 can write the result of a counting operation performed by the counting module 206 to a register 212. In a particular embodiment, the control module 208 can include logic to perform one or more of the method steps described herein. [0046] Referring to FIG. 3, a method of counting leading zeros for a data word is shown and commences at block 300. At block 300, an instruction execution unit receives a data word that has a width of 2 to the Nth power. Next, at block 302, a sign extender sign extends the data word to a temporary data word that has a width of 2 to the Mth power. In a particular embodiment, N and M are integers. Further, in a particular embodiment, M is greater than N. Moving to block 304, the sign extender inputs, or otherwise passes, the temporary data word to a counter that has a width of 2 to the Mth power. At block 306, the counter counts the leading zeros within the temporary data word. [0047] Proceeding to decision step 308, the control module determines whether the result from the counter is zero. If so, the method continues to block 310 and the control module sets the count equal to zero. Next, at block 312, the control module writes the count to a register. The method then ends at state 314. Returning to decision step 308, if the result of the count is not zero, the method proceeds to step 316 and a value equal to 2 to the Mth power minus 2 to the Nth power is subtracted from the result to get a count. Moving to block 312, the control module writes the count to a register. The method then ends at state 314. [0048] FIG. 4 shows another method of counting leading zeros for a data word. Commencing at block 400, an instruction execution unit receives a data word that has a width of 2 to the Nth power.
At block 402, a sign extender sign extends the data word to a temporary data word that has a width of 2 to the Mth power. In a particular embodiment, N and M are integers and M is greater than N. Proceeding to block 404, the sign extender passes, or otherwise inputs, the temporary data word to a counter that has a width of 2 to the Mth power. At block 406, the counter counts the leading zeros within the temporary data word to get a result that includes M+1 bits. In a particular embodiment, the least significant bit in the result is bit zero (0) and the most significant bit in the result is bit M. Further, bit N lies between the least significant bit and the most significant bit. Continuing to block 408, bit M is copied to the location of bit N. At block 410, bits M through N + 1 are replaced with zero. Next, at block 412, the control module writes a modified result to a register. The method then ends at state 414. [0049] FIG. 5 illustrates a sixty-four bit data word 500 and a thirty-two bit data word 502. In a particular embodiment, the sixty-four bit data word 500 can be input to a counting module, e.g., the counting module 206 described in conjunction with FIG. 2. The counting module 206 can count the number of leading zeros in the sixty-four bit data word 500. Further, if the instruction requires a count of leading ones within the sixty-four bit data word, the sixty-four bit data word is inverted, and the resulting leading zeros of the inverted sixty-four bit data word are counted by the counting module. [0050] In another embodiment, if an instruction requires a leading zeros or leading ones count for a thirty-two bit data word, then the thirty-two bit data word 502 can be sign extended by thirty-two bits in order to create a sign extended temporary sixty-four bit data word 504.
The temporary sixty-four bit data word 504 can be input to the counting module to obtain a leading zeros count or a leading ones count as described herein. [0051] FIG. 6 illustrates an exemplary, non-limiting method of counting leading zeros and counting leading ones. Commencing at block 600, the instruction execution unit receives a word associated with an instruction. At block 602, the instruction execution unit, e.g., a control module within the instruction execution unit, determines whether a leading zeros count or leading ones count of the word is required by the associated instruction. If a leading zeros count or a leading ones count is not required, the method ends at state 604. On the other hand, if a leading zeros count or a leading ones count is required, the method proceeds to decision step 606. [0052] At decision step 606, the control module determines whether the word is thirty-two bits long or sixty-four bits long. If the word is thirty-two bits long, the method proceeds to block 608 and a sign extender sign extends the thirty-two bit data word to create a temporary sixty-four bit data word. Thereafter, the method moves to decision step 610. Returning to decision step 606, if the word is sixty-four bits long, the method proceeds directly to decision step 610. [0053] At decision step 610, the control module determines whether a leading zeros count or a leading ones count is required for the sixty-four bit data word or the temporary sixty-four bit data word. If a leading ones count is required, the method proceeds to block 612 and an inverter inverts the sixty-four bit data word or the temporary sixty-four bit data word to create an inverted sixty-four bit data word or an inverted temporary sixty-four bit data word. Moving to block 614, the inverter passes the inverted sixty-four bit data word or the inverted temporary sixty-four bit data word to the counting module.
At block 616, the counting module counts the leading zeros of the inverted sixty-four bit data word or the inverted temporary sixty-four bit data word to obtain an interim result. [0054] Returning to decision step 610, if a leading zeros count is required, the method proceeds to block 618 and the control module passes the sixty-four bit data word or the temporary sixty-four bit data word to the counting module. Thereafter, the method moves to block 616 and the counting module counts the leading zeros of the sixty-four bit data word or the temporary sixty-four bit data word to obtain an interim result. From block 616, the method continues to decision step 620 and the control module determines whether the sixty-four bit data word that is the subject of the count was previously sign extended. If not, the method proceeds to decision step 622 and the control module determines whether the count is a leading zeros count or a leading ones count. If the count is a leading zeros count, the method proceeds to block 624 and the control module writes a leading zeros count to a register. The method then ends at state 604. Conversely, at decision step 622, if the count is a leading ones count, the method proceeds to block 626 and the control module writes a leading ones count to a register. The method then ends at state 604. [0055] Returning to decision step 620, if the sixty-four bit data word that is the subject of the count was previously sign extended, the method continues to decision step 628. At decision step 628, the control module determines whether the result of the count is zero. If so, the method moves to decision step 622 and continues as described herein. On the other hand, if the result is not zero, the method proceeds to block 630 and a fixed value of thirty-two is subtracted from the interim result to yield a final result. Thereafter, the method continues to decision step 622 and continues as described herein.
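The bit manipulation of FIG. 4 (blocks 408 and 410) can be modeled as follows for N equal to five and M equal to six, i.e., thirty-two bit words counted by a sixty-four bit counter. This Python sketch is illustrative only; the helper `_clz64` merely stands in for the counting module 206:

```python
def _clz64(word64: int) -> int:
    """Models the counting module: leading zeros of a sixty-four bit word."""
    count = 0
    for bit in range(63, -1, -1):
        if word64 & (1 << bit):
            break
        count += 1
    return count

def clz32_bitfield_correction(word32: int) -> int:
    """FIG. 4 style count: correct the interim result without a subtractor."""
    word32 &= 0xFFFFFFFF
    # Sign extend the thirty-two bit word to sixty-four bits (block 402).
    temp = word32 | 0xFFFFFFFF00000000 if word32 & 0x80000000 else word32
    interim = _clz64(temp)                 # M+1 = 7 bit interim result (block 406)
    bit_m = (interim >> 6) & 1             # bit M of the interim result
    interim = (interim & ~(1 << 5)) | (bit_m << 5)  # copy bit M to bit N (block 408)
    return interim & 0b0111111             # zero bits M through N+1 (block 410)
```

For example, an interim count of sixty-four (binary 1000000) becomes thirty-two (binary 0100000), matching the subtraction of thirty-two performed by the method of FIG. 3.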
[0056] Referring to FIG. 7, a detailed method of interleaved multithreading for a digital signal processor is shown. FIG. 7 shows that the method includes a branch routine 700, a load routine 702, a store routine 704, and an s-pipe routine 706. Each routine 700, 702, 704, 706 includes a plurality of steps that are performed during six clock cycles for each instruction fetched from an instruction queue by a sequencer. In a particular embodiment, the clock cycles include a decode clock cycle 708, a register file access clock cycle 710, a first execution clock cycle 712, a second execution clock cycle 714, a third execution clock cycle 716, and a writeback clock cycle 718. Further, each clock cycle includes a first portion and a second portion. [0057] FIG. 7 shows that during the branch routine 700, at block 720, a quick decode for the instruction is performed within a sequencer during a first portion of the decode clock cycle. At block 722, during the second portion of the decode clock cycle 708, the sequencer accesses a register file, e.g., starts a register file access for a first operand. The register access of block 722 finishes within the register file access clock cycle 710 and the first operand is retrieved from the register file. In a particular embodiment, the sequencer accesses the register file via a first data read port. As shown, the register file access of block 722 occurs during the second portion of the decode clock cycle 708 and the first portion of the register file access clock cycle 710. As such, the register file access overlaps the decode clock cycle 708 and the register file access clock cycle 710. [0058] At block 724, also during the decode clock cycle 708, the sequencer begins a full decode for the instruction. The full decode performed by the sequencer occurs within the second portion of the decode clock cycle 708 and the first portion of the register file access clock cycle 710.
[0059] During the register file access clock cycle 710, at block 726, the sequencer generates an instruction virtual address (IVA). Thereafter, at block 728, the sequencer performs a page check in order to determine the physical address page associated with a virtual address page number. Moving to the first execution clock cycle 712, at block 730, the sequencer performs an instruction queue lookup. At block 732, the sequencer accesses an instruction cache a first time and retrieves a first double-word for the instruction. In a particular embodiment, each instruction includes three double-words, e.g., a first double-word, a second double-word, and a third double-word. At block 734, during the first execution clock cycle 712, the sequencer aligns the double-word coming from the instruction cache. [0060] Continuing to the second execution clock cycle 714, the sequencer accesses the instruction cache a second time in order to retrieve the second double-word for the instruction at block 736. Next, at block 738, the sequencer aligns the double-word retrieved from the instruction cache. [0061] Proceeding to the third execution clock cycle 716, the sequencer accesses the instruction cache a third time in order to retrieve a third double-word at block 742. After the sequencer accesses the instruction cache the third time, the sequencer aligns the third double-word, at block 744. [0062] As illustrated in FIG. 7, during the load routine 702, at block 750, the sequencer performs a quick decode for the instruction during the first portion of the decode clock cycle 708. At block 752, during the second portion of the decode clock cycle 708, the sequencer begins a register file access. As shown, the second register access by the sequencer spans two clock cycles, i.e., the second portion of the decode clock cycle 708 and the first portion of the register file access clock cycle 710.
As such, the register file access ends within the register file access clock cycle 710 and a second operand can be retrieved. Next, during the first execution cycle 712, at block 754, an address generation unit within a first instruction execution unit generates a first virtual address for the instruction based on the previously read register file content. [0063] At block 756, during the second execution clock cycle 714, a data translation look-aside buffer (DTLB) performs an address translation for the first virtual address in order to generate a first physical address. Still within the second execution clock cycle 714, at block 758, the sequencer performs a tag check. [0064] Moving to the third execution cycle 716, the sequencer accesses a data cache static random access memory (SRAM) in order to read data out of the SRAM, at block 760. Also, within the third execution cycle, at block 762, the sequencer updates the register file associated with the instruction a first time via a first data write port. In a particular embodiment, the sequencer updates the register file with the results of a post increment address. Next, during the writeback clock cycle 718, at block 764 a load aligner shifts data to align the data within the double-word. At block 766, also within the writeback clock cycle 718, the sequencer updates the register file for the instruction a second time via the first data write port with data loaded from the cache. [0065] FIG. 7 shows that during the store routine 704, at block 768, the sequencer performs a quick decode for the instruction during the decode clock cycle 708. Further, during the decode clock cycle 708, at block 770, the sequencer accesses a register file associated with the instruction a third time via a third data read port. The register access of block 770 occurs within the last portion of the decode clock cycle 708 and the first portion of the register file access clock cycle 710.
As such, the register file access begins within the decode clock cycle 708 and ends within the register file access clock cycle 710. In a particular embodiment, a third operand is retrieved from the register file during the register file access clock cycle 710. [0066] As depicted in FIG. 7, during the second portion of the register file access clock cycle 710, the sequencer accesses the register file for the instruction a fourth time via the third data read port at block 772. The fourth register file access commences within the register file access clock cycle 710 and ends within the first execution clock cycle 712 wherein a fourth operand is retrieved from the register file. In a particular embodiment, the third data read port is used to access the register file in order to retrieve the third operand and the fourth operand. At block 774, a portion of the data from the sequencer is multiplexed at a multiplexer. Also, during the first execution clock cycle 712, at block 776, a second address generation unit within a second instruction execution unit generates a virtual address for the instruction based on the previously read data from the register file. [0067] Proceeding to the second execution clock cycle 714, during the store routine, at block 778, the data translation look-aside buffer (DTLB) translates the previously generated virtual address for the instruction into a physical address. At block 780, within the second execution clock cycle 714, the sequencer performs a tag check. Also, during the second execution clock cycle 714, at block 782, a store aligner aligns store data to the appropriate byte, half-word, or word boundary within a double-word before writing the data to the data cache. Moving to the third execution clock cycle 716, at block 784, the sequencer updates the data cache static random access memory.
Then, at block 786, the sequencer updates the register file for the instruction a third time via a second data write port with the results of executing the instruction during the third execution clock cycle 716. [0068] As illustrated in FIG. 7, the s-pipe routine 706 begins during the decode clock cycle 708, at block 788, where a quick decode is performed for the instruction. At block 790, the sequencer accesses the register file for the instruction a fifth time via a fourth data read port. The fifth register file access also spans two clock cycles and begins within the second portion of the decode clock cycle 708 and ends within the first portion of the register file access clock cycle 710 wherein a fifth operand is retrieved. Still during the register file access clock cycle 710, a portion of the data from the register file for the instruction is multiplexed at a multiplexer. Also, during the register file access clock cycle 710, the sequencer accesses the register file for the instruction a sixth time via the fourth data read port at block 794. The sixth access to the register file begins within the second portion of the register file access clock cycle 710 and ends within the first portion of the first execution clock cycle 712. A sixth operand is retrieved during the first execution clock cycle 712. [0069] Proceeding to the first execution clock cycle 712, at block 796, data retrieved during the fifth register file access and the sixth register file access is sent to a 64-bit shifter, a vector unit, and a sign/zero extender. Also, during the first execution clock cycle 712, at block 798, the data from the shifter, the vector unit, and the sign/zero extender is multiplexed. [0070] Moving to the second execution clock cycle 714, the multiplexed data from the shifter, the vector unit, and the sign/zero extender is sent to an arithmetic logic unit, a count leading zeros unit, or a comparator at block 800.
At block 802, the data from the arithmetic logic unit, the count leading zeros unit, and the comparator is multiplexed at a single multiplexer. After the data is multiplexed, the shifter shifts the multiplexed data in order to multiply the data by 2, 4, 8, etc. at block 804 during the third execution clock cycle 716. Then, at block 806, the output of the shifter is saturated. During the writeback clock cycle 718, at block 808, the register file for the instruction is updated a fourth time via a third write data port. [0071] In a particular embodiment, as illustrated in FIG. 7, the method of interleaved multithreading for the digital signal processor utilizes four read ports for each register and three write ports for each register. Due to recycling of read ports and write ports, six operands can be retrieved via the four read data ports. Further, four results can be updated to the register file via three write data ports. [0072] FIG. 8 illustrates an exemplary, non-limiting embodiment of a portable communication device that is generally designated 820. As illustrated in FIG. 8, the portable communication device includes an on-chip system 822 that includes a digital signal processor 824. In a particular embodiment, the digital signal processor 824 is the digital signal processor shown in FIG. 1 and described herein. FIG. 8 also shows a display controller 826 that is coupled to the digital signal processor 824 and a display 828. Moreover, an input device 830 is coupled to the digital signal processor 824. As shown, a memory 832 is coupled to the digital signal processor 824. Additionally, a coder/decoder (CODEC) 834 can be coupled to the digital signal processor 824. A speaker 836 and a microphone 838 can be coupled to the CODEC 834. [0073] FIG. 8 also indicates that a wireless controller 840 can be coupled to the digital signal processor 824 and a wireless antenna 842. In a particular embodiment, a power supply 844 is coupled to the on-chip system 822. 
Moreover, in a particular embodiment, as illustrated in FIG. 8, the display 828, the input device 830, the speaker 836, the microphone 838, the wireless antenna 842, and the power supply 844 are external to the on-chip system 822. However, each is coupled to a component of the on-chip system 822. [0074] In a particular embodiment, the digital signal processor 824 utilizes interleaved multithreading to process instructions associated with program threads necessary to perform the functionality and operations needed by the various components of the portable communication device 820. For example, when a wireless communication session is established via the wireless antenna 842, a user can speak into the microphone 838. Electronic signals representing the user's voice can be sent to the CODEC 834 to be encoded. The digital signal processor 824 can perform data processing for the CODEC 834 to encode the electronic signals from the microphone. Further, incoming signals received via the wireless antenna 842 can be sent to the CODEC 834 by the wireless controller 840 to be decoded and sent to the speaker 836. The digital signal processor 824 can also perform the data processing for the CODEC 834 when decoding the signal received via the wireless antenna 842. [0075] Further, before, during, or after the wireless communication session, the digital signal processor 824 can process inputs that are received from the input device 830. For example, during the wireless communication session, a user may be using the input device 830 and the display 828 to surf the Internet via a web browser that is embedded within the memory 832 of the portable communication device 820.
The digital signal processor 824 can interleave various program threads that are used by the input device 830, the display controller 826, the display 828, the CODEC 834 and the wireless controller 840, as described herein, to efficiently control the operation of the portable communication device 820 and the various components therein. Many of the instructions associated with the various program threads are executed concurrently during one or more clock cycles. As such, the power and energy consumption due to wasted clock cycles is substantially decreased. [0076] Referring to FIG. 9, an exemplary, non-limiting embodiment of a cellular telephone is shown and is generally designated 920. As shown, the cellular telephone 920 includes an on-chip system 922 that includes a digital baseband processor 924 and an analog baseband processor 926 that are coupled together. In a particular embodiment, the digital baseband processor 924 is a digital signal processor, e.g., the digital signal processor shown in FIG. 1 and described herein. Further, in a particular embodiment, the analog baseband processor 926 can also be a digital signal processor, e.g., the digital signal processor shown in FIG. 1. As illustrated in FIG. 9, a display controller 928 and a touchscreen controller 930 are coupled to the digital baseband processor 924. In turn, a touchscreen display 932 external to the on-chip system 922 is coupled to the display controller 928 and the touchscreen controller 930. [0077] FIG. 9 further indicates that a video encoder 934, e.g., a phase alternating line (PAL) encoder, a sequential couleur a memoire (SECAM) encoder, or a national television system(s) committee (NTSC) encoder, is coupled to the digital baseband processor 924. Further, a video amplifier 936 is coupled to the video encoder 934 and the touchscreen display 932. Also, a video port 938 is coupled to the video amplifier 936. As depicted in FIG.
9, a universal serial bus (USB) controller 940 is coupled to the digital baseband processor 924. Also, a USB port 942 is coupled to the USB controller 940. A memory 944 and a subscriber identity module (SIM) card 946 can also be coupled to the digital baseband processor 924. Further, as shown in FIG. 9, a digital camera 948 can be coupled to the digital baseband processor 924. In an exemplary embodiment, the digital camera 948 is a charge-coupled device (CCD) camera or a complementary metal-oxide semiconductor (CMOS) camera.

[0078] As further illustrated in FIG. 9, a stereo audio CODEC 950 can be coupled to the analog baseband processor 926. Moreover, an audio amplifier 952 can be coupled to the stereo audio CODEC 950. In an exemplary embodiment, a first stereo speaker 954 and a second stereo speaker 956 are coupled to the audio amplifier 952. FIG. 9 shows that a microphone amplifier 958 can also be coupled to the stereo audio CODEC 950. Additionally, a microphone 960 can be coupled to the microphone amplifier 958. In a particular embodiment, a frequency modulation (FM) radio tuner 962 can be coupled to the stereo audio CODEC 950. Also, an FM antenna 964 is coupled to the FM radio tuner 962. Further, stereo headphones 966 can be coupled to the stereo audio CODEC 950.

[0079] FIG. 9 further indicates that a radio frequency (RF) transceiver 968 can be coupled to the analog baseband processor 926. An RF switch 970 can be coupled to the RF transceiver 968 and an RF antenna 972. As shown in FIG. 9, a keypad 974 can be coupled to the analog baseband processor 926. Also, a mono headset with a microphone 976 can be coupled to the analog baseband processor 926. Further, a vibrator device 978 can be coupled to the analog baseband processor 926. FIG. 9 also shows that a power supply 980 can be coupled to the on-chip system 922.
In a particular embodiment, the power supply 980 is a direct current (DC) power supply that provides power to the various components of the cellular telephone 920 that require power. Further, in a particular embodiment, the power supply is a rechargeable DC battery or a DC power supply that is derived from an alternating current (AC) to DC transformer that is connected to an AC power source.

[0080] In a particular embodiment, as depicted in FIG. 9, the touchscreen display 932, the video port 938, the USB port 942, the camera 948, the first stereo speaker 954, the second stereo speaker 956, the microphone 960, the FM antenna 964, the stereo headphones 966, the RF switch 970, the RF antenna 972, the keypad 974, the mono headset 976, the vibrator 978, and the power supply 980 are external to the on-chip system 922. Moreover, in a particular embodiment, the digital baseband processor 924 and the analog baseband processor 926 can use interleaved multithreading, described herein, in order to process the various program threads associated with one or more of the different components associated with the cellular telephone 920.

[0081] Referring to FIG. 10, an exemplary, non-limiting embodiment of a wireless Internet protocol (IP) telephone is shown and is generally designated 1000. As shown, the wireless IP telephone 1000 includes an on-chip system 1002 that includes a digital signal processor (DSP) 1004. In a particular embodiment, the DSP 1004 is the digital signal processor shown in FIG. 1 and described herein. As illustrated in FIG. 10, a display controller 1006 is coupled to the DSP 1004 and a display 1008 is coupled to the display controller 1006. In an exemplary embodiment, the display 1008 is a liquid crystal display (LCD). FIG. 10 further shows that a keypad 1010 can be coupled to the DSP 1004.

[0082] As further depicted in FIG. 10, a flash memory 1012 can be coupled to the DSP 1004.
A synchronous dynamic random access memory (SDRAM) 1014, a static random access memory (SRAM) 1016, and an electrically erasable programmable read only memory (EEPROM) 1018 can also be coupled to the DSP 1004. FIG. 10 also shows that a light emitting diode (LED) 1020 can be coupled to the DSP 1004. Additionally, in a particular embodiment, a voice CODEC 1022 can be coupled to the DSP 1004. An amplifier 1024 can be coupled to the voice CODEC 1022 and a mono speaker 1026 can be coupled to the amplifier 1024. FIG. 10 further indicates that a mono headset 1028 can also be coupled to the voice CODEC 1022. In a particular embodiment, the mono headset 1028 includes a microphone.

[0083] FIG. 10 also illustrates that a wireless local area network (WLAN) baseband processor 1030 can be coupled to the DSP 1004. An RF transceiver 1032 can be coupled to the WLAN baseband processor 1030 and an RF antenna 1034 can be coupled to the RF transceiver 1032. In a particular embodiment, a Bluetooth controller 1036 can also be coupled to the DSP 1004 and a Bluetooth antenna 1038 can be coupled to the controller 1036. FIG. 10 also shows that a USB port 1040 can also be coupled to the DSP 1004. Moreover, a power supply 1042 is coupled to the on-chip system 1002 and provides power to the various components of the wireless IP telephone 1000 via the on-chip system 1002.

[0084] In a particular embodiment, as indicated in FIG. 10, the display 1008, the keypad 1010, the LED 1020, the mono speaker 1026, the mono headset 1028, the RF antenna 1034, the Bluetooth antenna 1038, the USB port 1040, and the power supply 1042 are external to the on-chip system 1002. However, each of these components is coupled to one or more components of the on-chip system.
Further, in a particular embodiment, the digital signal processor 1004 can use interleaved multithreading, as described herein, in order to process the various program threads associated with one or more of the different components associated with the IP telephone 1000.

[0085] FIG. 11 illustrates an exemplary, non-limiting embodiment of a portable digital assistant (PDA) that is generally designated 1100. As shown, the PDA 1100 includes an on-chip system 1102 that includes a digital signal processor (DSP) 1104. In a particular embodiment, the DSP 1104 is the digital signal processor shown in FIG. 1 and described herein. As depicted in FIG. 11, a touchscreen controller 1106 and a display controller 1108 are coupled to the DSP 1104. Further, a touchscreen display 1110 is coupled to the touchscreen controller 1106 and to the display controller 1108. FIG. 11 also indicates that a keypad 1112 can be coupled to the DSP 1104.

[0086] As further depicted in FIG. 11, a flash memory 1114 can be coupled to the DSP 1104. Also, a read only memory (ROM) 1116, a dynamic random access memory (DRAM) 1118, and an electrically erasable programmable read only memory (EEPROM) 1120 can be coupled to the DSP 1104. FIG. 11 also shows that an infrared data association (IrDA) port 1122 can be coupled to the DSP 1104. Additionally, in a particular embodiment, a digital camera 1124 can be coupled to the DSP 1104.

[0087] As shown in FIG. 11, in a particular embodiment, a stereo audio CODEC 1126 can be coupled to the DSP 1104. A first stereo amplifier 1128 can be coupled to the stereo audio CODEC 1126 and a first stereo speaker 1130 can be coupled to the first stereo amplifier 1128. Additionally, a microphone amplifier 1132 can be coupled to the stereo audio CODEC 1126 and a microphone 1134 can be coupled to the microphone amplifier 1132. FIG.
11 further shows that a second stereo amplifier 1136 can be coupled to the stereo audio CODEC 1126 and a second stereo speaker 1138 can be coupled to the second stereo amplifier 1136. In a particular embodiment, stereo headphones 1140 can also be coupled to the stereo audio CODEC 1126.

[0088] FIG. 11 also illustrates that an 802.11 controller 1142 can be coupled to the DSP 1104 and an 802.11 antenna 1144 can be coupled to the 802.11 controller 1142. Moreover, a Bluetooth controller 1146 can be coupled to the DSP 1104 and a Bluetooth antenna 1148 can be coupled to the Bluetooth controller 1146. As depicted in FIG. 11, a USB controller 1150 can be coupled to the DSP 1104 and a USB port 1152 can be coupled to the USB controller 1150. Additionally, a smart card 1154, e.g., a multimedia card (MMC) or a secure digital (SD) card, can be coupled to the DSP 1104. Further, as shown in FIG. 11, a power supply 1156 can be coupled to the on-chip system 1102 and can provide power to the various components of the PDA 1100 via the on-chip system 1102.

[0089] In a particular embodiment, as indicated in FIG. 11, the display 1110, the keypad 1112, the IrDA port 1122, the digital camera 1124, the first stereo speaker 1130, the microphone 1134, the second stereo speaker 1138, the stereo headphones 1140, the 802.11 antenna 1144, the Bluetooth antenna 1148, the USB port 1152, and the power supply 1156 are external to the on-chip system 1102. However, each of these components is coupled to one or more components on the on-chip system. Additionally, in a particular embodiment, the digital signal processor 1104 can use interleaved multithreading, described herein, in order to process the various program threads associated with one or more of the different components associated with the portable digital assistant 1100.

[0090] Referring to FIG.
12, an exemplary, non-limiting embodiment of an audio file player, such as a moving pictures experts group audio layer-3 (MP3) player, is shown and is generally designated 1200. As shown, the audio file player 1200 includes an on-chip system 1202 that includes a digital signal processor (DSP) 1204. In a particular embodiment, the DSP 1204 is the digital signal processor shown in FIG. 1 and described herein. As illustrated in FIG. 12, a display controller 1206 is coupled to the DSP 1204 and a display 1208 is coupled to the display controller 1206. In an exemplary embodiment, the display 1208 is a liquid crystal display (LCD). FIG. 12 further shows that a keypad 1210 can be coupled to the DSP 1204.

[0091] As further depicted in FIG. 12, a flash memory 1212 and a read only memory (ROM) 1214 can be coupled to the DSP 1204. Additionally, in a particular embodiment, an audio CODEC 1216 can be coupled to the DSP 1204. An amplifier 1218 can be coupled to the audio CODEC 1216 and a mono speaker 1220 can be coupled to the amplifier 1218. FIG. 12 further indicates that a microphone input 1222 and a stereo input 1224 can also be coupled to the audio CODEC 1216. In a particular embodiment, stereo headphones 1226 can also be coupled to the audio CODEC 1216.

[0092] FIG. 12 also indicates that a USB port 1228 and a smart card 1230 can be coupled to the DSP 1204. Additionally, a power supply 1232 can be coupled to the on-chip system 1202 and can provide power to the various components of the audio file player 1200 via the on-chip system 1202.

[0093] In a particular embodiment, as indicated in FIG. 12, the display 1208, the keypad 1210, the mono speaker 1220, the microphone input 1222, the stereo input 1224, the stereo headphones 1226, the USB port 1228, and the power supply 1232 are external to the on-chip system 1202. However, each of these components is coupled to one or more components on the on-chip system.
Also, in a particular embodiment, the digital signal processor 1204 can use interleaved multithreading, described herein, in order to process the various program threads associated with one or more of the different components associated with the audio file player 1200.

[0094] With the configuration of structure disclosed herein, the system and method described herein provide a way to count leading zeros and to count leading ones within sixty-four-bit data words and thirty-two-bit data words using the same hardware within a digital signal processor. As such, the need for different sets of hardware to count leading zeros and leading ones within different sized data words is obviated.

[0095] Those of skill would further appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0096] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
A software module may reside in RAM memory, flash memory, ROM memory, PROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal.

[0097] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features as defined by the following claims.
A method according to one embodiment may include discovering, at least in part, by an integrated circuit of at least one communication protocol via which at least one device external to the integrated circuit is capable of communicating. In this embodiment, the integrated circuit may be capable of communicating in accordance with a plurality of different communication protocols. The method according to this embodiment may also include selecting, at least in part, by the integrated circuit of the at least one communication protocol to use to communicate with the at least one device. Of course, many alternatives, variations, and modifications are possible without departing from this embodiment.
CLAIMS

What is claimed is:

1. A method comprising: discovering, at least in part, by an integrated circuit of at least one communication protocol via which at least one device external to the integrated circuit is capable of communicating, the integrated circuit being capable of communicating in accordance with a plurality of different communication protocols; and selecting, at least in part, by the integrated circuit of the at least one communication protocol to use to communicate with the at least one device.

2. The method of claim 1, wherein: the discovering is based at least in part upon a predetermined signal sequence detected by the integrated circuit, the predetermined signal sequence being indicative of a protocol domain that comprises the at least one device.

3. The method of claim 2, wherein: the discovering also is based at least in part upon a failure to receive at the integrated circuit, during a communication link initialization, of a predetermined out-of-band signal sequence from the at least one device.

4. The method of claim 2, wherein: the predetermined signal sequence comprises a predetermined comma character.

5. The method of claim 1, wherein: the integrated circuit comprises processor circuitry and protocol engine circuitry; and the selecting comprises issuing from the processor circuitry to the protocol engine circuitry one or more signals that enable, at least in part, the protocol engine circuitry to communicate using the at least one communication protocol.

6.
The method of claim 5, wherein: the integrated circuit also comprises physical interface circuitry; and the selecting also comprises issuing to the physical interface circuitry from the processor circuitry one or more other signals that select, at least in part, one or more physical signaling levels at which the physical interface circuitry is capable of issuing one or more signals, the one or more physical signaling levels being in accordance with the at least one communication protocol.

7. The method of claim 6, wherein: the plurality of different communication protocols comprise a Serial Advanced Technology Attachment protocol and Serial Attached Small Computer System Interface protocol.

8. An apparatus comprising: an integrated circuit that is capable of discovering, at least in part, at least one communication protocol via which at least one device external to the integrated circuit is capable of communicating, the integrated circuit also being capable of communicating in accordance with a plurality of different communication protocols, the integrated circuit further being capable of selecting, at least in part, the at least one communication protocol to use to communicate with the at least one device.

9. The apparatus of claim 8, wherein: the integrated circuit is capable of detecting a predetermined signal sequence indicative of a protocol domain that comprises the at least one device; and the integrated circuit also is capable of discovering, at least in part, the at least one communication protocol, based at least in part upon detection of the predetermined signal sequence.

10. The apparatus of claim 9, wherein: the integrated circuit is also capable of discovering the at least one communication protocol, based at least in part upon a failure to receive at the integrated circuit, during a communication link initialization, of a predetermined out-of-band signal sequence from the at least one device.

11.
The apparatus of claim 9, wherein: the predetermined signal sequence comprises a predetermined comma character.

12. The apparatus of claim 8, wherein: the integrated circuit comprises processor circuitry and protocol engine circuitry; and the processor circuitry is capable of issuing to the protocol engine circuitry one or more signals that enable, at least in part, the protocol engine circuitry to communicate using the at least one communication protocol.

13. The apparatus of claim 12, wherein: the integrated circuit also comprises physical interface circuitry; and the processor circuitry is also capable of issuing to the physical interface circuitry one or more other signals that select, at least in part, one or more physical signaling levels at which the physical interface circuitry is capable of issuing one or more signals, the one or more physical signaling levels being in accordance with the at least one communication protocol.

14. The apparatus of claim 13, wherein: the plurality of different communication protocols comprise a Serial Advanced Technology Attachment protocol and Serial Attached Small Computer System Interface protocol.

15. An article comprising: a storage medium having stored thereon instructions that when executed by a machine result in the following: discovering, at least in part, by an integrated circuit of at least one communication protocol via which at least one device external to the integrated circuit is capable of communicating, the integrated circuit being capable of communicating in accordance with a plurality of different communication protocols; and selecting, at least in part, by the integrated circuit of the at least one communication protocol to use to communicate with the at least one device.

16.
The article of claim 15, wherein: the discovering is based at least in part upon a predetermined signal sequence detected by the integrated circuit, the predetermined signal sequence being indicative of a protocol domain that comprises the at least one device.

17. The article of claim 16, wherein: the discovering also is based at least in part upon a failure to receive at the integrated circuit, during a communication link initialization, of a predetermined out-of-band signal sequence from the at least one device.

18. The article of claim 16, wherein: the predetermined signal sequence comprises a predetermined comma character.

19. The article of claim 15, wherein: the integrated circuit comprises processor circuitry and protocol engine circuitry; and the selecting comprises issuing from the processor circuitry to the protocol engine circuitry one or more signals that enable, at least in part, the protocol engine circuitry to communicate using the at least one communication protocol.

20. The article of claim 19, wherein: the integrated circuit also comprises physical interface circuitry; and the selecting also comprises issuing to the physical interface circuitry from the processor circuitry one or more other signals that select, at least in part, one or more physical signaling levels at which the physical interface circuitry is capable of issuing one or more signals, the one or more physical signaling levels being in accordance with the at least one communication protocol.

21. The article of claim 20, wherein: the plurality of different communication protocols comprise a Serial Advanced Technology Attachment protocol and Serial Attached Small Computer System Interface protocol.

22.
A system comprising: a circuit card including an integrated circuit, the circuit card being capable of being coupled to a bus, the integrated circuit being capable of discovering, at least in part, at least one communication protocol via which at least one device external to the integrated circuit is capable of communicating, the integrated circuit also being capable of communicating in accordance with a plurality of different communication protocols, the integrated circuit further being capable of selecting, at least in part, the at least one communication protocol to use to communicate with the at least one device.

23. The system of claim 22, further comprising: a circuit board comprising the bus and a bus interface slot, the circuit card being capable of being coupled to the bus interface slot.

24. The system of claim 22, wherein: the at least one device comprises at least one of one or more mass storage devices and one or more peripheral devices.

25. The system of claim 24, wherein: the one or more mass storage devices comprises a redundant array of independent disks (RAID).

26. The system of claim 22, wherein: the integrated circuit is capable of discovering the at least one communication protocol based at least in part upon one or more of the following: detection by the integrated circuit of a predetermined signal sequence from the at least one device; and failure to detect at the integrated circuit, during initialization of a communication link between the integrated circuit and the at least one device, of a COMSAS signal sequence from the at least one device.

27. The system of claim 26, wherein: the predetermined signal sequence comprises a K28.5 character.

28. The system of claim 22, wherein: the integrated circuit is directly connected, via a communication link, to the at least one device.

29.
The system of claim 28, wherein: the at least one communication protocol is one of a Serial Advanced Technology Attachment protocol and Serial Attached Small Computer System Interface protocol.
INTEGRATED CIRCUIT CAPABLE OF COMMUNICATING USING DIFFERENT COMMUNICATION PROTOCOLS

CROSS-REFERENCE TO RELATED APPLICATIONS

The subject application is related to co-pending U.S. Patent Application Serial No. 10/301,028 (Attorney Docket No. 42390.P14962), entitled "Integrated Circuit Having Multiple Modes Of Operation," filed on November 20, 2002. The subject application is also related to co-pending U.S. Patent Application Serial No. 10/301,027 (Attorney Docket No. 42390.P14963), entitled "Integrated Circuit Having Multiple Modes Of Operation," filed on November 20, 2002.

FIELD

This disclosure relates to an integrated circuit that is capable of communicating using different communication protocols.

BACKGROUND

In one conventional data storage arrangement, a computer node includes a host bus adapter (HBA). The HBA communicates with a data storage system via one or more communication links using a communication protocol associated with the one or more links. Typically, the HBA includes a plurality of integrated circuit chips to carry out communications between the HBA and the data storage system, and is capable of using only a single predetermined communication protocol to communicate with the data storage system. Thus, for example, in this conventional arrangement, if the data storage system is incapable of communicating with the HBA using this predetermined protocol, one or more external communication protocol converters, translators, and/or expanders may be coupled between the HBA and the data storage system to permit communication between the HBA and the data storage system.

BRIEF DESCRIPTION OF THE DRAWINGS

Features and advantages of embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals depict like parts, and in which:

Figure 1 is a diagram illustrating a system embodiment.
Figure 2 is a diagram illustrating in greater detail an integrated circuit in the system embodiment of Figure 1.

Figure 3 is a diagram illustrating in greater detail interface circuitry in the integrated circuit of Figure 2.

Figure 4 is a flowchart illustrating operations that may be performed according to an embodiment.

Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art. Accordingly, it is intended that the claimed subject matter be viewed broadly, and be defined only as set forth in the accompanying claims.

DETAILED DESCRIPTION

Figure 1 illustrates a system embodiment 100 of the claimed subject matter. System 100 may include a host processor 12 coupled to a chipset 14. Host processor 12 may comprise, for example, an Intel® Pentium® IV microprocessor that is commercially available from the Assignee of the subject application. Of course, alternatively, host processor 12 may comprise another type of microprocessor, such as, for example, a microprocessor that is manufactured and/or commercially available from a source other than the Assignee of the subject application, without departing from this embodiment.

Chipset 14 may comprise a host bridge/hub system that may couple host processor 12, a system memory 21, and a user interface system 16 to each other and to a bus system 22. Chipset 14 may also include an input/output (I/O) bridge/hub system (not shown) that may couple the host bridge/bus system to bus 22. Chipset 14 may comprise integrated circuit chips, such as those selected from integrated circuit chipsets commercially available from the Assignee of the subject application (e.g., graphics memory and I/O controller hub chipsets), although other integrated circuit chips may also, or alternatively, be used, without departing from this embodiment. User interface system 16 may comprise, e.g.
, a keyboard, pointing device, and display system that may permit a human user to input commands to, and monitor the operation of, system 100.

Bus 22 may comprise a bus that complies with the Peripheral Component Interconnect (PCI) Express Base Specification Revision 1.0, published July 22, 2002, available from the PCI Special Interest Group, Portland, Oregon, U.S.A. (hereinafter referred to as a "PCI Express bus"). Alternatively, bus 22 instead may comprise a bus that complies with the PCI-X Specification Rev. 1.0a, July 24, 2000, available from the aforesaid PCI Special Interest Group, Portland, Oregon, U.S.A. (hereinafter referred to as a "PCI-X bus"). Also alternatively, bus 22 may comprise other types and configurations of bus systems, without departing from this embodiment.

Controller card 20 may be coupled to and control the operation of mass storage 28. In this embodiment, mass storage 28 may comprise, e.g., one or more redundant arrays of independent disks (RAID) 29. The RAID level that may be implemented by RAID 29 may be 0, 1, or greater than 1. RAID 29 may comprise, for example, one or more disk mass storage devices and/or one or more peripheral devices (collectively or singly shown in Figure 1 by the block referred to by numeral 52) comprised in a protocol domain 50. As used herein, a "protocol domain" means one or more apparatus that may communicate in accordance with a communication protocol.

Processor 12, system memory 21, chipset 14, bus 22, and circuit card slot 30 may be comprised in a single circuit board, such as, for example, a system motherboard 32. Mass storage 28 may be comprised in one or more respective enclosures that may be separate from the enclosure in which the motherboard 32 and the components comprised in the motherboard 32 are enclosed. Card 20 may be coupled to mass storage 28 via one or more network communication links 44.
As is discussed below, card 20 may exchange data and/or commands with mass storage 28, via links 44, using, e.g., Serial Advanced Technology Attachment (S-ATA) protocol and/or Serial Attached Small Computer Systems Interface (SAS) protocol. Of course, alternatively, I/O controller card 20 may exchange data and/or commands with mass storage 28 using other and/or additional communication protocols, without departing from this embodiment. In accordance with this embodiment, if an S-ATA protocol is used by controller card 20 to exchange data and/or commands with mass storage 28, it may comply or be compatible with the protocol described in "Serial ATA: High Speed Serialized AT Attachment," Revision 1.0, published on August 29, 2001 by the Serial ATA Working Group. Further alternatively, if an SAS protocol is used by controller card 20 to exchange data and/or commands with mass storage 28, it may comply or be compatible with the protocol described in "Information Technology - Serial Attached SCSI (SAS)," Working Draft American National Standard of International Committee For Information Technology Standards (INCITS) T10 Technical Committee, Project T10/1562-D, Revision 2b, published 19 October 2002, by American National Standards Institute (hereinafter termed the "SAS Standard") and/or later-published versions of the SAS Standard.

Depending upon, for example, whether bus 22 comprises a PCI Express bus or a PCI-X bus, circuit card slot 30 may comprise, for example, a PCI Express or PCI-X bus compatible or compliant expansion slot or interface 36. Interface 36 may comprise a bus connector 37 that may be electrically and mechanically mated with a mating bus connector 34 that may be comprised in a bus expansion slot or interface 35 in circuit card 20. Circuit card 20 may comprise an integrated circuit 40, operating mode selector circuitry 42, computer-readable boot code memory 39, and computer-readable memory 38.
Alternatively, although not shown in the Figures, integrated circuit 40 may comprise memory 38 and/or memory 39. As used herein, an "integrated circuit" means a semiconductor device and/or microelectronic device, such as, for example, a semiconductor integrated circuit chip. Memories 38 and/or 39 each may comprise one or more of the following types of memories: semiconductor firmware memory, programmable memory, non-volatile memory, read only memory, electrically programmable memory, random access memory, flash memory, magnetic disk memory, and/or optical disk memory. Either additionally or alternatively, memories 38 and/or 39 each may comprise other and/or later-developed types of computer-readable memory. Machine-readable firmware program instructions may be stored in memory 38. As described below, these instructions may be accessed and executed by integrated circuit 40. When executed by integrated circuit 40, these instructions may result in integrated circuit 40 performing the operations described herein as being performed by integrated circuit 40. Slot 30 and card 20 are constructed to permit card 20 to be inserted into slot 30. When card 20 is properly inserted into slot 30, connectors 34 and 36 become electrically and mechanically coupled to each other. When connectors 34 and 36 are so coupled to each other, card 20 becomes electrically coupled to bus 22 and may exchange data and/or commands with system memory 21, host processor 12, and/or user interface system 16 via bus 22 and chipset 14. Alternatively, without departing from this embodiment, the operative circuitry of card 20 may not be comprised in card 20, but instead, may be comprised in other structures, systems, and/or devices.
These other structures, systems, and/or devices may be, for example, comprised in motherboard 32, coupled to bus 22, and exchange data and/or commands with other components (such as, for example, system memory 21, host processor 12, and/or user interface system 16) in system 100. Figure 2 is a diagram of integrated circuit 40. In this embodiment, integrated circuit 40 may comprise processor circuitry 202, I/O interface circuitry 204, memory control circuitry 232, memory control circuitry 230, processor bus 206, and bus bridge circuitry 208. Processor circuitry 202, I/O interface circuitry 204, memory control circuitry 232, memory control circuitry 230, and bus bridge circuitry 208 may be coupled to, and exchange data and/or commands via, bus 206. Bus bridge circuitry 208 may couple processor bus 206 to I/O bus 254, and may permit devices that may be coupled to bus 206 to exchange data and/or commands with devices that may be coupled to bus 254, while permitting the respective address spaces of buses 206 and 254 to be isolated from each other. Memory control circuitry 230, host bus interface circuitry 210, boot code memory interface 242, and peripheral interface circuitry 244 also may be coupled to bus 254, and may exchange data and/or commands among each other via bus 254. Memory control circuitry 230 may be coupled to memory 38. Boot code memory interface 242 may be coupled to memory 39. Memory control circuitry 232 may be coupled to computer-readable memory 228. Memory 228 may comprise, for example, multi-port static random access memory (SRAM), although memory 228 may comprise other types of computer-readable memory without departing from this embodiment. Host bus interface circuitry 210 may be coupled to host bus interface 35. Mode selector circuitry 42 may be coupled to general purpose I/O interface circuitry 248 that may be comprised in interface circuitry 246.
Interface circuitry 246 may comprise other and/or additional types of interface circuitry (not shown) without departing from this embodiment. The interface circuitry comprised in interface 246 may be coupled together via, for example, a peripheral bus (not shown). Interface 246 may be coupled to bus 254 via peripheral interface circuitry 244 that may permit the interface circuitry in circuitry 246 that may be coupled to the peripheral bus in circuitry 246 to exchange data and/or commands with devices that may be coupled to bus 254. Boot code memory interface circuitry 242 may permit program instructions stored in memory 39 to be retrieved therefrom and executed by processor circuitry 202, after, for example, a reset of integrated circuit 40. More specifically, processor circuitry 202 may provide one or more commands to memory 39 and/or interface circuitry 242, via bus 206, bridge circuitry 208, bus 254, and interface circuitry 242, that may result in such program instructions being retrieved from memory 39 and provided to circuitry 202, via interface 242, bus 254, bridge circuitry 208, and bus 206. Integrated circuit 40 also may comprise performance monitoring (PMON) circuitry 226. PMON circuitry 226 may monitor, e.g., the exchange of data and/or commands carried out via bus 206 and/or bus 254, and/or other and/or additional operations carried out by other circuitry in integrated circuit 40, and may determine, based at least in part upon such monitoring, whether integrated circuit 40 is operating properly. PMON circuitry 226 may indicate the results of its monitoring activities to, e.g., processor circuitry 202 and/or external devices, such as, for example, host processor 12 via circuitry 210. Processor circuitry 202 may include processor core circuitry that may comprise a plurality of processor cores 216 and 218. As used herein, a "processor core" may comprise hardwired circuitry, programmable circuitry, and/or state machine circuitry.
Also, as used herein, "circuitry" may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. In this embodiment, each processor core 216 and 218 may comprise respective circuitry that may be compatible and/or in compliance with the Intel® XScale™ Core micro-architecture described in the "Intel® XScale™ Core Developers Manual," published December 2000 by the Assignee of the subject application. Of course, as stated above, circuitry 202 may comprise other types of processor core circuitry without departing from this embodiment. In this embodiment, processor cores 216 and 218 may comprise, for example, computer-readable program instruction memory 220 and 224, respectively, that may contain respective sets of micro-code program instructions that processor cores 216 and 218, respectively, may execute. The execution of these respective sets of program instructions by processor cores 216 and 218, respectively, may result in, for example, the carrying out by circuitry 202, core 216, and/or core 218 of operations described herein as being carried out by circuitry 202, core 216, and/or core 218, respectively. At least a portion of these respective sets of program instructions may be retrieved from, e.g., boot code memory 39 after, for example, a reset of integrated circuit 40. Processor core 216 also may comprise a level-2 cache memory 222 that may be used by processor core 216 in carrying out the operations described herein as being carried out by processor core 216. Interface circuitry 204 may comprise protocol engine circuitry 250A, 250B, ... 250N and physical layer interface circuitry 252A, 252B, ... 252N. As described below, each respective protocol engine circuitry 250A, 250B, ...
250N may be associated with, and exchange data and/or commands with, respective physical layer interface circuitry 252A, 252B, ... 252N. Thus, for example, protocol engine circuitry 250A may be associated with, and exchange data and/or commands with, physical layer interface circuitry 252A, protocol engine circuitry 250B may be associated with, and exchange data and/or commands with, physical layer interface circuitry 252B, and protocol engine circuitry 250N may be associated with, and exchange data and/or commands with, physical layer interface circuitry 252N, respectively. In this embodiment, the respective construction and operation of each of the protocol engine circuitry 250A, 250B, ... 250N may be respectively identical. Additionally, in this embodiment, the respective construction and operation of each of the interfaces 252A, 252B, ... 252N may be respectively identical. Without departing from this embodiment, the respective numbers of protocol engines 250A, 250B, ... 250N, physical layer interfaces 252A, 252B, ... 252N, and links 44 may vary. However, in this embodiment, the number of protocol engines 250A, 250B, ... 250N may be equal to the number of physical layer interfaces 252A, 252B, ... 252N. Also in this embodiment, each of the physical layer interfaces 252A, 252B, ... 252N may be coupled to a respective one of the links 44; therefore, in this embodiment, the number of physical layer interfaces 252A, 252B, ... 252N may be equal to the number of links 44. Host bus interface circuitry 210 may comprise respective interface circuitry that may be used to permit integrated circuit 40 to exchange, in accordance with one of a plurality of different host bus protocols with which bus 22 may comply or be compatible, data and/or commands with other devices that may be coupled to bus 22. For example, in this embodiment, circuitry 210 may comprise PCI-X bus interface circuitry 212 and PCI Express bus interface circuitry 214.
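The one-to-one association between protocol engines, physical layer interfaces, and links described above can be pictured with a brief sketch (Python is used purely for illustration; the identifiers are shorthand for the reference numerals, the link labels are hypothetical, and the embodiment describes hardware circuitry, not software):

```python
# Illustrative only: each protocol engine is paired with exactly one physical
# layer interface, and each interface with exactly one link, so all three
# counts must be equal, as the description requires.
protocol_engines = ["250A", "250B", "250N"]
phy_interfaces = ["252A", "252B", "252N"]
links = ["44-0", "44-1", "44-2"]  # hypothetical labels for links 44

assert len(protocol_engines) == len(phy_interfaces) == len(links)

# The pairing is positional: engine 250A with interface 252A and one of the
# links 44, engine 250B with interface 252B, and so on.
pairing = dict(zip(protocol_engines, zip(phy_interfaces, links)))
```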
That is, as discussed below, depending, at least in part, upon the bus protocol with which bus 22 may comply or be compatible, a particular operating mode of integrated circuit 40 may be selected in which only a single appropriate one of the respective interface circuitry in circuitry 210 may be enabled to exchange data and/or commands with devices that may be coupled to bus 22, while the other respective interface circuitry in circuitry 210 may be disabled. Although not shown in the Figures, in this embodiment, memory control circuitry 232 and/or DMA circuitry 234 may be coupled to bus 254. In this embodiment, memory control circuitry 232 may comprise direct memory access (DMA) circuitry 234. Memory control circuitry 232 may control storage of data in, and retrieval of data from, memory 228. For example, in this embodiment, memory control circuitry 232 may exchange commands and/or data with, for example, processor circuitry 202, interface circuitry 204, interface circuitry 210, and/or memory control circuitry 230. Based, at least in part, upon these commands, memory control circuitry 232 may exchange data and/or commands with memory 228. This may result in memory 228 storing and/or retrieving data in accordance with the commands and/or data supplied to memory control circuitry 232. Additionally, depending upon the selected mode of operation of integrated circuit 40, DMA circuitry 234 may control, based upon commands and/or data received by circuitry 234 from other circuitry in integrated circuit 40, the exchange among I/O interface 204 and the other circuitry in integrated circuit 40 of data and/or commands received or intended to be transmitted by I/O interface circuitry 204 via one or more links 44. Without departing from this embodiment, DMA circuitry 234 may not be comprised in circuitry 232, but instead may comprise circuitry that is distinct from circuitry 232 and is coupled to circuitry 232 and bus 254.
In this embodiment, memory control circuitry 230 may comprise RAID operation-related circuitry 240. Circuitry 240 may comprise, for example, DMA circuitry 238 and RAID calculation circuitry 236. Memory control circuitry 230 may control storage of data in, and retrieval of data from, external memory 38. For example, in this embodiment, memory control circuitry 230 may exchange commands and/or data with, for example, processor circuitry 202, interface circuitry 210, and/or memory control circuitry 232. Based, at least in part, upon these commands, memory control circuitry 230 may exchange data and/or commands with memory 38. This may result in memory 38 storing and/or retrieving data in accordance with the commands and/or data supplied to memory control circuitry 230. Additionally, depending upon the selected mode of operation of integrated circuit 40, DMA circuitry 238 may control, based upon commands and/or data received by circuitry 238 from other circuitry in integrated circuit 40, the exchange of RAID-related data among such other circuitry in integrated circuit 40. As used herein, "RAID-related data" means data involved in, generated as a result of, used as input or operands in, and/or used in carrying out and/or to facilitate operations involved in implementing and/or maintaining a RAID, such as, for example, RAID 29. RAID calculation circuitry 236 may comprise arithmetic accelerator circuitry (not shown) that may be capable of performing one or more arithmetic and/or logical operations using and/or involving RAID-related data, such as, for example, logical exclusive-or operations that may generate RAID parity data from initial user data and/or regenerate the initial user data from such RAID parity data.
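The exclusive-or parity operation attributed to RAID calculation circuitry 236 can be illustrated with a short software sketch (Python for illustration only; the embodiment describes a hardware arithmetic accelerator, and the block contents below are arbitrary):

```python
from functools import reduce

def xor_parity(blocks):
    """XOR corresponding bytes of equal-length data blocks to produce parity,
    as described for RAID calculation circuitry 236."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Generating RAID parity data from initial user data...
data = [b"\x0f\xf0", b"\xaa\x55", b"\x12\x34"]
parity = xor_parity(data)

# ...and regenerating lost initial user data: XOR-ing the surviving blocks
# with the parity restores the missing block.
restored = xor_parity([data[1], data[2], parity])
assert restored == data[0]
```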
Without departing from this embodiment, DMA circuitry 238 and/or RAID calculation circuitry 236 may not be comprised in circuitry 230, but instead may comprise circuitry that is distinct from circuitry 230 and is coupled to circuitry 230 and bus 254. Also without departing from this embodiment, integrated circuit 40 may not comprise RAID calculation circuitry 236, but alternatively, the arithmetic and/or logical operations performed by circuitry 236 instead may be performed by processor core 216. As stated previously, the respective construction of each of the protocol engines 250A, 250B, ... 250N may be identical. Figure 3 is a diagram that illustrates protocol engine 250A. Protocol engine 250A may comprise interface circuitry 302, data transport layer circuitry 304, port layer circuitry 306, data link layer circuitry 308, and SAS link layer circuitry 310. Although not shown in the Figures, circuitry 302 may couple circuitry 304, 306, 308, and 310 to bus 206 so as to permit circuitry 304, 306, 308, and/or 310 to exchange data and/or commands with processor core 218. SAS link layer circuitry 310 may be coupled to, and exchange data and/or commands with, physical interface circuitry 252A. Transport layer circuitry 304 may be coupled to, and exchange data and/or commands with, port layer circuitry 306. Port layer circuitry 306 also may be coupled to, and exchange data and/or commands with, data link layer circuitry 308. SAS link layer circuitry 310 may be coupled to, and exchange data and/or commands with, data link layer circuitry 308 and port layer circuitry 306. In this embodiment, transport layer circuitry 304 may comprise Serial Management Protocol (SMP) transport layer circuitry 312, Serial Advanced Technology Attachment (ATA) Tunneled Protocol (STP) transport layer circuitry 314, and Serial Small Computer System Interface (SCSI) Protocol (SSP) transport layer circuitry 316.
Also in this embodiment, port layer circuitry 306 may comprise connection management circuitry 318. Additionally in this embodiment, data link layer circuitry 308 may comprise SMP link layer circuitry 320, STP link layer circuitry 322, and SSP link layer circuitry 324. In this embodiment, SAS link layer circuitry 310 may comprise out-of-band (OOB) signal management circuitry 326 and S-ATA link speed negotiation control circuitry 328. Unless stated to the contrary herein, it should be understood that circuitry 304, 306, 308, and 310 may implement conventional SAS communication processes, procedures, and techniques. For example, unless stated to the contrary herein, it should be understood that circuitry 312, 314, and 316 may implement conventional SMP transport layer, STP transport layer, and SSP transport layer protocols, procedures, processes, and techniques, respectively, and also may generate respective sets of signals that may result in the carrying out of such protocols, procedures, processes, and techniques. Also, for example, circuitry 306 may implement conventional SAS port control protocols, procedures, processes, and techniques, and also may generate respective signals that may result in the carrying out of such protocols, procedures, processes, and techniques. Furthermore, for example, circuitry 320, 322, and 324 may implement conventional SMP link layer, STP link layer, and SSP link layer protocols, procedures, processes, and techniques, respectively, and also may generate respective sets of signals that may result in the carrying out of such protocols, procedures, processes, and techniques. Additionally, for example, circuitry 310 may implement conventional SAS data link protocols, procedures, processes, and techniques to control, e.g., physical interface 252A, and also may generate respective sets of signals that may result in the carrying out of such protocols, procedures, processes, and techniques.
Of course, depending upon the particular protocols via which integrated circuit 40 may be capable of communicating, many variations, modifications, and alternatives are possible without departing from this embodiment. In this embodiment, each physical layer interface circuitry 252A, 252B, ... 252N may comprise respective analog front end (AFE) circuitry 253A, 253B, ... 253N that may receive and/or transmit data and/or control signals to and/or from mass storage 28 via respective links 44. In this embodiment, physical layer interface circuitry 252A may comprise AFE circuitry 253A that may receive and/or transmit data and/or control signals to and/or from one or more external mass storage devices comprised in one or more devices 52 via one of the links 44. As stated previously, one or more devices 52 may be comprised in a protocol domain 50. In this embodiment, protocol domain 50 may be either an SAS domain or an S-ATA domain. If protocol domain 50 is an SAS domain, then one or more devices 52 may be capable of communicating using an SAS protocol via one of the links 44. Conversely, if protocol domain 50 is an S-ATA domain, then one or more devices 52 may be capable of communicating using an S-ATA protocol via one of the links 44. As is discussed below, in this embodiment, depending at least in part upon the selected mode of operation of integrated circuit 40, integrated circuit 40 may be capable of discovering, at least in part, whether one or more devices 52 are capable of communicating via an SAS communication protocol or via an S-ATA communication protocol. Based upon this discovery, at least in part, by integrated circuit 40, integrated circuit 40 may select, at least in part, whether to communicate with one or more devices 52 using either an SAS or an S-ATA communication protocol, in order to enable integrated circuit 40 to communicate with one or more devices 52.
For example, in accordance with SAS and S-ATA protocols, during communication link initialization between integrated circuit 40 and mass storage 28, following, e.g., a reset of system 100, OOB signal sequences may be exchanged between AFE circuitry 253A and one or more devices 52 via one of the links 44. In accordance with S-ATA protocol, if one or more devices 52 are capable of communicating using S-ATA protocol and are directly coupled to AFE circuitry 253A via one of the links 44 (i.e., if one or more devices 52 are not coupled to AFE circuitry 253A via an SAS expander), one or more devices 52 may be expected to transmit to AFE circuitry 253A, during an S-ATA OOB signal sequence, a predetermined, special primitive signal sequence (referred to in Figure 1 by the block referenced by numeral 54) that may comprise, e.g., a predetermined comma character, such as a K28.5 character. As used herein, a "signal sequence" comprises one or more signals. Conversely, in accordance with SAS protocol, if one or more devices 52 are capable of communicating using SAS protocol, one or more devices 52 may be expected not to transmit to AFE circuitry 253A this predetermined, special signal sequence 54 during an SAS OOB signal sequence, but instead may be expected to transmit to AFE circuitry 253A during this signal sequence a predetermined COMSAS signal sequence 56. Thus, if, during such an OOB signal sequence, AFE circuitry 253A receives from one or more devices 52 signal sequence 54, but does not receive COMSAS signal sequence 56, this may indicate that protocol domain 50 is an S-ATA domain, one or more devices 52 are directly coupled to AFE circuitry 253A via one of the links 44, and one or more devices 52 are capable of communicating with integrated circuit 40 via an S-ATA protocol.
Conversely, if, during such an OOB signal sequence, AFE circuitry 253A receives from one or more devices 52 COMSAS signal sequence 56, but does not receive signal sequence 54, this may indicate that protocol domain 50 is an SAS domain and one or more devices 52 are capable of communicating with integrated circuit 40 via an SAS protocol. In accordance with this embodiment, during communication link initialization, physical interface circuitry 252A may provide to OOB signal management circuitry 326 signals indicative of OOB signals received by AFE circuitry 253A from one or more devices 52. OOB signal management circuitry 326 may examine the signals provided to it from interface circuitry 252A to detect whether AFE circuitry 253A has received, during an OOB signal sequence, from one or more devices 52, signal sequence 54 or COMSAS signal sequence 56. After OOB signal management circuitry 326 detects that AFE circuitry 253A has received, during an OOB signal sequence, signal sequence 54 or COMSAS signal sequence 56, OOB signal management circuitry 326 may provide one or more signals to processor core 218 that may indicate whether AFE circuitry 253A has received signal sequence 54 or COMSAS signal sequence 56. After completion of this OOB signal sequence, processor core 218 may determine, based at least in part upon whether OOB signal management circuitry 326 detected that AFE circuitry 253A received, or failed to receive, during the OOB signal sequence, signal sequence 54 and/or COMSAS signal sequence 56, whether one or more devices 52 are directly coupled to integrated circuit 40 via one of the links 44 and are capable of communicating with integrated circuit 40 via an S-ATA protocol, or one or more devices 52 are capable of communicating with integrated circuit 40 via an SAS protocol.
For example, if circuitry 326 detected that AFE circuitry 253A received, during this OOB signal sequence, from one or more devices 52 signal sequence 54, but did not receive COMSAS signal sequence 56, processor core 218 may determine that one or more devices 52 are directly coupled to AFE circuitry 253A via one of the links 44 and are capable of communicating with integrated circuit 40 via an S-ATA protocol. Conversely, if circuitry 326 detected that AFE circuitry 253A received, during this OOB signal sequence, from one or more devices 52 COMSAS signal sequence 56, but did not receive signal sequence 54, processor core 218 may determine that one or more devices 52 are capable of communicating with integrated circuit 40 via an SAS protocol. Of course, depending upon the particular communication protocols via which integrated circuit 40 may be capable of communicating, character 54 and/or signal 56 may vary without departing from this embodiment. Additionally, depending upon the particular communication protocols via which integrated circuit 40 and/or one or more devices 52 may be capable of communicating, the manner in which integrated circuit 40 may determine the communication protocol or protocols via which one or more devices 52 may be capable of communicating may vary without departing from this embodiment. If processor core 218 determines that one or more devices 52 are directly coupled to AFE circuitry 253A via one of the links 44 and are capable of communicating with integrated circuit 40 via an S-ATA protocol, processor core 218 may issue one or more respective signals to circuitry 304, 306, 308, 310, and 252A. This may result in circuitry 250A and 252A being enabled to permit integrated circuit 40 to communicate directly with one or more devices 52, using S-ATA protocol, via one of the links 44.
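The protocol discovery decision described in the preceding paragraphs reduces to a simple truth table, sketched below (Python for illustration only; in the embodiment this determination is made by processor core 218 from signals supplied by the OOB circuitry, not by software like this, and the embodiment does not define the remaining cases):

```python
def discover_protocol(got_sequence_54, got_comsas_56):
    """Decide the link protocol from what AFE circuitry 253A received during
    the OOB signal sequence, per the description above."""
    if got_sequence_54 and not got_comsas_56:
        # Directly attached S-ATA device in an S-ATA protocol domain.
        return "S-ATA"
    if got_comsas_56 and not got_sequence_54:
        # SAS protocol domain.
        return "SAS"
    # Other combinations are not defined by the description.
    return "indeterminate"

assert discover_protocol(True, False) == "S-ATA"
assert discover_protocol(False, True) == "SAS"
```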
More specifically, this may result in, for example, the disabling of circuitry 312, 316, 318, 320, and 324 from being involved in communications between integrated circuit 40 and one or more devices 52, and may also result in the enabling of circuitry 314, 322, and 328 to be actively involved in carrying out communications between integrated circuit 40 and one or more devices 52. Alternatively, in response, at least in part, to the signaling of circuitry 310 by processor core 218, circuitry 310 may signal circuitry 306 and/or circuitry 318; this may result in the disabling of circuitry 318 from being involved in communications between integrated circuit 40 and one or more devices 52. The signaling of circuitry 252A by processor core 218 may result, at least in part, in the transmission and/or reception signaling levels of AFE circuitry 253A being set so as to be in compliance or compatible with S-ATA signal transmission and/or reception signaling levels. That is, this may result in AFE circuitry 253A adjusting the voltage and/or current levels of signals transmitted to one or more devices 52 by AFE circuitry 253A to be in compliance or compatible with S-ATA transmission signal voltage and/or current levels, and/or may also result in AFE circuitry 253A detecting signals received by AFE circuitry 253A whose voltage and/or current levels are in compliance or compatible with S-ATA received signal voltage and/or current levels. The signaling of circuitry 310 by processor core 218 may result in the enabling of circuitry 328 to implement conventional S-ATA communication link speed negotiation protocols, procedures, processes, and techniques to negotiate with one or more devices 52 the appropriate speed of communication to be carried out, via one of the links 44, between one or more devices 52 and integrated circuit 40.
Circuitry 310 may generate and transmit to interface 252A one or more signals that may result in the carrying out of such protocols, procedures, processes, and techniques. In operation of system 100, when circuitry 318 is enabled to be actively involved in carrying out communications between integrated circuit 40 and one or more devices 52, circuitry 318 may implement, at least in part, connection management functions that may prevent, at least in part, timing-out of the communications between integrated circuit 40 and one or more devices 52. Conversely, in operation of system 100, when circuitry 318 is disabled from being actively involved in carrying out such communications, processor core 218 may provide one or more signals to circuitry 250A that may result in circuitry 250A emulating S-ATA host functionality that may result in the maintaining, without timing-out, of such communications. Conversely, if processor core 218 determines that one or more devices 52 are capable of communicating with integrated circuit 40 via an SAS protocol, processor core 218 may issue one or more respective signals to circuitry 304, 306, 308, 310, and 252A. This may result in circuitry 250A and 252A being enabled to permit integrated circuit 40 to communicate with one or more devices 52, using an SAS protocol, via one of the links 44. More specifically, the signaling of circuitry 304, circuitry 306, and circuitry 308 may result in the disabling of circuitry 314 from being actively involved in communications between integrated circuit 40 and one or more devices 52, the enabling of circuitry 318 to be actively involved in such communications, and the disabling of circuitry 322 from being involved in such communications, respectively.
Additionally, depending upon whether communications are carried out between one or more devices 52 and integrated circuit 40 via an SMP or SSP SAS protocol, the signaling of circuitry 304 by processor core 218 may result in the enabling of circuitry 312 or 316, respectively, to be actively involved in such communications, and the signaling of circuitry 308 by processor core 218 may result in the enabling of circuitry 320 or 324, respectively, to be involved in such communications. Additionally, the signaling of circuitry 252A by processor core 218 may result, at least in part, in the transmission and/or reception signaling levels of AFE circuitry 253A being set so as to be in compliance or compatible with SAS signal transmission and/or reception signaling levels. That is, this may result in AFE circuitry 253A adjusting the voltage and/or current levels of signals transmitted to one or more devices 52 by AFE circuitry 253A to be in compliance or compatible with SAS transmission signal voltage and/or current levels, and/or may also result in AFE circuitry 253A detecting signals received by AFE circuitry 253A whose voltage and/or current levels are in compliance or compatible with SAS received signal voltage and/or current levels. Furthermore, the signaling of circuitry 310 by processor core 218 may result in the disabling of circuitry 328 from implementing the conventional S-ATA communication link speed negotiation protocols, procedures, processes, and techniques described previously. In this embodiment, a mode of operation of integrated circuit 40 may be selected, based upon and/or as a result, at least in part, of one or more signals provided to GPIO interface circuitry 248 from selector circuitry 42, one or more signals provided to host bus interface circuitry 210 by host processor 12, and/or execution by processor circuitry 202 of one or more program instructions stored in memory 39.
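The enable/disable choices described above for the S-ATA and SAS cases can be summarized in a short sketch (Python for illustration only; the names are shorthand for the numbered sub-blocks of protocol engine 250A, and in the embodiment the selection is performed by hardware signaling from processor core 218, not by software):

```python
def enabled_blocks(protocol, sas_variant=None):
    """Return the sub-blocks of protocol engine 250A enabled for a link,
    per the enable/disable lists in the description."""
    if protocol == "S-ATA":
        # STP transport 314, STP link 322, and speed negotiation 328 active;
        # circuitry 312, 316, 318, 320, and 324 disabled.
        return {"stp_transport_314", "stp_link_322", "speed_negotiation_328"}
    if protocol == "SAS":
        # Connection management 318 enabled; 314, 322, and 328 disabled.
        enabled = {"connection_mgmt_318"}
        if sas_variant == "SMP":
            enabled |= {"smp_transport_312", "smp_link_320"}
        elif sas_variant == "SSP":
            enabled |= {"ssp_transport_316", "ssp_link_324"}
        return enabled
    raise ValueError(protocol)

assert "speed_negotiation_328" in enabled_blocks("S-ATA")
```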
Depending, at least in part, upon the selected mode of operation of integrated circuit 40, integrated circuit 40 may operate in accordance with one or more operational characteristics that may correspond to the selected mode of operation. For example, depending, at least in part, upon the selected mode of operation of integrated circuit 40, these operational characteristics may include which of bus interfaces 212 and 214 is enabled to or disabled from communicating with bus 22, and/or which protocol engines 250A, 250B, ... 250N are enabled to or disabled from communicating with mass storage 28. Additionally or alternatively, such operational characteristics may comprise, for example, whether one or more of the communication protocols that are implemented by one or more of the protocol engines 250A, 250B, ... 250N are selected based at least in part upon the discovery of one or more communication protocols via which one or more devices (such as, for example, one or more devices 52) in mass storage 28 may communicate, or whether communication between integrated circuit 40 and such devices is to be carried out via one or more predetermined protocols. Also additionally or alternatively, such operational characteristics may comprise whether DMA circuitry 234 is enabled to control or disabled from controlling the exchange among I/O interface 204 and the other circuitry in integrated circuit 40 of data and/or commands received or intended to be transmitted by I/O interface circuitry 204 via one or more links 44. Such operational characteristics may also include, for example, whether processor core 216 and/or RAID operation-related circuitry 240 are enabled to perform or disabled from performing one or more operations involved in implementing and/or maintaining a RAID, such as, for example, RAID 29. Examples of such operations that may be involved in implementing and/or maintaining a RAID are disclosed in, e.g., co-pending U.S.
Patent Application Serial No. 10/301,028 (Attorney Docket No. 42390.P14962), entitled "Integrated Circuit Having Multiple Modes Of Operation," filed on November 20, 2002. Of course, many modifications, variations, and alternatives are possible without departing from this embodiment. In this embodiment, selector circuitry 42 may comprise one or more jumpers and/or one or more dual in-line package (DIP) switches 43 that may be set (e.g., by a human operator, not shown) in a plurality of different configurations to select, at least in part, the selected operating mode of integrated circuit 40. That is, the plurality of different configurations of the jumpers and/or switches 43 may correspond to one or more different operating characteristics of one or more different operating modes of integrated circuit 40. When the one or more jumpers and/or one or more DIP switches 43 are set in a particular configuration, the selector circuitry 42 may generate one or more control signals that may correspond to one or more different operating characteristics of integrated circuit 40 selected by that particular configuration. After, for example, a reset of integrated circuit 40, these one or more control signals may be supplied to processor cores 216 and 218. In response, processor core 216 may be enabled or disabled in accordance with the selected mode of operation; additionally, processor core 218 may operate in accordance with and/or generate and supply appropriate control signals to interface circuitry 204, 210, 232, and/or 236 that may result in such circuitry operating in accordance with the selected mode of operation. Alternatively or additionally, the one or more control signals from selector circuitry 42 also may be supplied to circuitry 210, circuitry 234, and/or circuitry 240.
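The jumper/DIP-switch selection path described above can be sketched in software terms as a lookup from a switch configuration to a set of control signals. This is an illustrative sketch only, not the patent's implementation: the switch positions, mode names, and operating characteristics in the table below are invented for illustration.

```python
# Hypothetical mapping from DIP-switch settings to operating-mode control
# signals, as selector circuitry 42 might produce. All table entries are
# invented placeholders; the patent does not specify concrete modes.
MODE_TABLE = {
    (0, 0): {"mode": "sas_only",      "bus_if": "PCI-X", "raid": False},
    (0, 1): {"mode": "sata_only",     "bus_if": "PCI-X", "raid": False},
    (1, 0): {"mode": "auto_discover", "bus_if": "PCI",   "raid": True},
    (1, 1): {"mode": "auto_discover", "bus_if": "PCI-X", "raid": True},
}

def control_signals(switches):
    """Return the control-signal settings selected by a switch configuration."""
    characteristics = MODE_TABLE[tuple(switches)]
    return {
        "enable_raid_core": characteristics["raid"],   # enables/disables core 216
        "bus_interface": characteristics["bus_if"],    # which of 212/214 talks to bus 22
        "protocol_selection": characteristics["mode"], # fixed protocol vs. discovery
    }

print(control_signals((1, 0)))
```

Each configuration thus selects, at reset, which cores and interface circuitry are enabled, mirroring how the control signals are supplied to processor cores 216 and 218.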
This may result in enabling or disabling of bus interface circuitry 212, bus interface circuitry 214, circuitry 240, and/or circuitry 234 in accordance with the mode of operation of integrated circuit 40 that corresponds to and/or is indicated by the one or more control signals. Alternatively or additionally, in this embodiment, the selected mode of operation of integrated circuit 40 may be selected based upon and/or as a result, at least in part, of one or more signals indicative of the selected mode of operation that may be provided to host bus interface circuitry 210 by host processor 12. In response to these one or more signals, processor core 216 may be enabled or disabled in accordance with the selected mode of operation; additionally, processor core 218 may operate in accordance with and/or generate and supply appropriate control signals to interface circuitry 204, 210, 232, and/or 236 that may result in such circuitry operating in accordance with the selected mode of operation. Also alternatively or additionally, in this embodiment, the selected mode of operation of integrated circuit 40 may be selected based upon and/or as a result, at least in part, of execution by processor circuitry 202 of one or more program instructions stored in memory 39, memory 220, and/or memory 224. That is, according to this embodiment, different respective operating modes of integrated circuit 40 may be associated with different respective firmware program instruction set images that, when executed, at least in part, by processor core 216 and processor core 218, may result in the respective operating modes associated with these respective images being selected, and also may result in integrated circuit 40 operating in the respective operating modes. In this embodiment, only a single such firmware program instruction set image may be stored in memory 39, memory 220, and/or memory 224.
This single firmware program instruction set image may comprise one or more firmware program instructions that may be executed by processor cores 216 and 218 after, for example, a reset of integrated circuit 40. This may result in processor core 216 being enabled or disabled in accordance with the selected mode of operation. This may also result in processor core 218 operating in accordance with and/or generating and supplying appropriate control signals to interface circuitry 204, 210, 232, and/or 236 that may result in such circuitry operating in accordance with the selected mode of operation. Memory 39, memory 220, and/or memory 224 may comprise program instructions that, when executed by integrated circuit 40, may result in, among other things, integrated circuit 40 performing operations in accordance with one embodiment. Figure 4 is a flowchart that illustrates these and other operations 400 that may be carried out in system 100, in accordance with one embodiment. In this embodiment, operations 400 may be carried out in system 100 after an operating mode of integrated circuit 40 has been selected in which one or more of the communication protocols that are implemented by one or more of the protocol engines 250A, 250B,... 250N (e.g., protocol engine 250A) are selected based at least in part upon the discovery of one or more communication protocols via which one or more devices (such as, for example, one or more devices 52) in mass storage 28 may communicate. Operations 400 may commence with the discovery, at least in part, by integrated circuit 40, of at least one communication protocol via which at least one device external to integrated circuit 40 (e.g., one or more devices 52) may be capable of communicating, as illustrated by operation 402 in Figure 4.
In this embodiment, the discovery, at least in part, by integrated circuit 40 of the at least one communication protocol via which at least one device external to integrated circuit 40 may communicate, as a result of operation 402, may be based, at least in part, upon a determination by processor core 218, in the manner described previously, of whether OOB management circuitry 320 detected that AFE circuitry 253A received, or failed to receive, during the OOB signal sequence, signal sequence 54 and/or COMSAS signal sequence 56. For example, as stated previously, if circuitry 320 detected that AFE circuitry 253A received, during this OOB signal sequence, signal sequence 54 from one or more devices 52, but did not receive COMSAS signal sequence 56, processor core 218 may determine that one or more devices 52 are directly coupled to AFE circuitry 253A via one of the links 44 and are capable of communicating with integrated circuit 40 via an S-ATA protocol; as a result, at least in part, of this determination by processor core 218, integrated circuit 40 may discover, at least in part, as a result of operation 402, that one or more devices 52 are capable of communicating via an S-ATA protocol. Conversely, if circuitry 320 detected that AFE circuitry 253A received, during this OOB signal sequence, COMSAS signal sequence 56 from one or more devices 52, but did not receive signal sequence 54, processor core 218 may determine that one or more devices 52 are capable of communicating with integrated circuit 40 via an SAS protocol; as a result, at least in part, of this determination by processor core 218, integrated circuit 40 may discover, at least in part, as a result of operation 402, that one or more devices 52 are capable of communicating via an SAS protocol. Thereafter, integrated circuit 40 may select, at least in part, the at least one communication protocol to use to communicate with the at least one device, as illustrated by operation 404 in Figure 4.
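The discovery decision described above reduces to a small piece of logic: receipt of signal sequence 54 without COMSAS signal sequence 56 indicates an S-ATA device, while receipt of COMSAS indicates an SAS device. A minimal sketch, assuming only the two boolean observations from the OOB management circuitry (the patent's implementation is in hardware and firmware, not Python):

```python
def discover_protocol(received_seq_54: bool, received_comsas: bool) -> str:
    """Infer an attached device's link protocol from OOB signaling observations.

    received_seq_54: whether signal sequence 54 was detected during OOB.
    received_comsas: whether COMSAS signal sequence 56 was detected.
    """
    if received_seq_54 and not received_comsas:
        return "S-ATA"   # device speaks S-ATA; enable S-ATA path (operation 404)
    if received_comsas and not received_seq_54:
        return "SAS"     # device speaks SAS; enable SAS path (operation 404)
    return "unknown"     # ambiguous or no device detected

print(discover_protocol(True, False))   # -> S-ATA
print(discover_protocol(False, True))   # -> SAS
```

The selected value then determines which protocol engine and AFE settings are enabled for subsequent communication, corresponding to operation 404 in Figure 4.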
For example, in this embodiment, after discovering, as a result of operation 402, the at least one protocol via which one or more devices 52 may communicate, processor core 218 may issue one or more respective signals to circuitry 304, 306, 308, 310, and 252A. If, as a result of operation 402, integrated circuit 40 discovered that one or more devices 52 may be capable of communicating via an S-ATA protocol, this may result in circuitry 250A and 252A being enabled to permit integrated circuit 40 to communicate directly with one or more devices 52 using an S-ATA protocol, via one of the links 44. Conversely, if, as a result of operation 402, integrated circuit 40 discovered that one or more devices 52 may be capable of communicating via an SAS protocol, this may result in circuitry 250A and 252A being enabled to permit integrated circuit 40 to communicate with one or more devices 52 using an SAS protocol. Thus, in summary, one system embodiment may comprise a circuit card including an integrated circuit. The circuit card may be capable of being coupled to a bus. The integrated circuit may be capable of discovering, at least in part, at least one communication protocol via which at least one device external to the integrated circuit is capable of communicating. The integrated circuit also may be capable of communicating in accordance with a plurality of different communication protocols. The integrated circuit further may be capable of selecting, at least in part, the at least one communication protocol to use to communicate with the at least one device. One apparatus embodiment may include an integrated circuit that is capable of discovering, at least in part, at least one communication protocol via which at least one device external to the integrated circuit is capable of communicating. The integrated circuit also may be capable of communicating in accordance with a plurality of different communication protocols.
The integrated circuit further may be capable of selecting, at least in part, the at least one communication protocol to use to communicate with the at least one device. Advantageously, the integrated circuit of these embodiments may offer enhanced communication capabilities, and may communicate using a plurality of communication protocols. Also advantageously, the communication protocol or protocols used by this integrated circuit may be selected, at least in part, by the integrated circuit based, at least in part, upon the discovery by the integrated circuit, at least in part, of the one or more communication protocols via which one or more external devices are capable of communicating. Further advantageously, this may permit a single integrated circuit according to these embodiments to communicate with a data storage system directly using a plurality of different communication protocols. Thus, for example, it may be possible to use the integrated circuit of these embodiments to communicate directly via one or more communication links with one or more devices in SAS and/or S-ATA protocol domains in the data storage system, without having to employ one or more external communication protocol converters, translators, and/or expanders (such as, for example, one or more SAS expanders) coupled between the integrated circuit and the data storage system, although such protocol converters, translators, and/or expanders may be used without departing from these embodiments. Advantageously, these features may permit the integrated circuit of these embodiments to exhibit enhanced versatility and utility compared to the prior art, and may reduce the design costs of employing this integrated circuit compared to the prior art.
Also advantageously, for purposes of considering at least some of the functionality of one or more embodiments, circuitry 302 and the circuitry in integrated circuit 40 that is external to circuitry 250A may together be viewed, at least in part, in a conceptual, behavioral, and/or functional sense, as comprising, at least in part, a single control element to control which communication protocol may be used by the integrated circuit 40 to communicate with the at least one device. Thus, advantageously, in at least these one or more embodiments, this control element may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Indeed, without departing from this embodiment, system 100 may include more or fewer elements than those shown in the Figures and described previously herein as being comprised in system 100. Also alternatively, circuitry 204 may comprise protocol engine circuitry that may permit integrated circuit 40 to communicate with mass storage 28 using a Fibre Channel protocol that complies or is compatible with the interface/protocol described in the ANSI Standard Fibre Channel (FC) Physical and Signaling Interface-3 X3.303:1998 Specification. Other modifications, variations, and alternatives are also possible. Accordingly, the claims are intended to cover all such equivalents.
Embodiments include methods, executed by a processor of a mobile device, of assisting a user in locating the mobile device. Various embodiments may include a processor of a mobile device obtaining information useful for locating the mobile device from a sensor of the mobile device configured to obtain information regarding surroundings of the mobile device, anonymizing the obtained information to remove private information, and uploading the anonymized information to a remote server in response to determining that the mobile device may be misplaced. Anonymizing the obtained information may include removing speech from an audio input and compiling samples of ambient noise for inclusion in the anonymized information. Anonymizing the obtained information to remove private information may include editing an image captured by the mobile device to make images of detected individuals unrecognizable.
CLAIMS
What is claimed is:
1. A method of assisting a user in locating a mobile device executed by a processor of the mobile device, comprising: obtaining information useful for locating the mobile device from a sensor of the mobile device configured to obtain information regarding surroundings of the mobile device; anonymizing the obtained information to remove private information; and uploading the anonymized information to a remote server.
2. The method of claim 1, wherein uploading the anonymized information to a remote server comprises uploading the anonymized information to a remote server in response to determining that the mobile device may be misplaced.
3. The method of claim 1, wherein anonymizing the obtained information to remove private information includes removing speech from an audio input of the mobile device to compile one or more samples of ambient noise from the surroundings of the mobile device for inclusion in the anonymized information.
4. The method of claim 1, wherein anonymizing the obtained information to remove private information includes determining which of one or more predetermined categories of ambient noise are included within an audio input of the mobile device, wherein the anonymized information indicates the determined one or more predetermined categories of ambient noise.
5. The method of claim 4, wherein the anonymized information indicates a text description of ambient noise that includes the determined one or more predetermined categories of ambient noise.
6. The method of claim 1, wherein anonymizing the obtained information to remove private information includes converting speech to text and generating a generalized description of the converted speech, wherein the anonymized information includes the generalized description of the speech converted to text.
7.
The method of claim 1, wherein anonymizing the obtained information to remove private information includes editing an image captured by a video input of the mobile device to make images of individuals detected within the captured image unrecognizable, wherein the anonymized information includes the edited image.
8. The method of claim 1, wherein anonymizing the obtained information to remove private information includes editing an image captured by a video input of the mobile device to make unrecognizable identifying information associated with an individual detected within the captured image, wherein the anonymized information includes the edited image.
9. The method of claim 1, wherein anonymizing the obtained information to remove private information includes determining which of one or more predetermined categories of visual elements are included within an image captured by a camera of the mobile device, wherein the anonymized information indicates the determined one or more predetermined categories of visual elements.
10. The method of claim 1, wherein anonymizing the obtained information to remove private information includes compiling a text description of one or more visual elements within an image captured by a camera of the mobile device, wherein the anonymized information indicates the compiled text description.
11. A mobile device, comprising: a sensor configured to obtain information regarding surroundings of the mobile device; and a processor coupled to the sensor and configured to: obtain information for locating the mobile device from the sensor; anonymize the obtained information to remove private information; and upload the anonymized information to a remote server.
12. The mobile device of claim 11, wherein the processor is configured to upload the anonymized information to a remote server in response to determining that the mobile device may be misplaced.
13.
The mobile device of claim 11, wherein the processor is configured to anonymize the obtained information to remove private information by removing speech from an audio input of the mobile device to compile one or more samples of ambient noise from the surroundings of the mobile device for inclusion in the anonymized information.
14. The mobile device of claim 11, wherein the processor is configured to anonymize the obtained information to remove private information by determining which of one or more predetermined categories of ambient noise are included within an audio input of the mobile device, wherein the anonymized information indicates the determined one or more predetermined categories of ambient noise.
15. The mobile device of claim 14, wherein the processor is configured to anonymize the obtained information to remove private information by generating information that indicates a text description of ambient noise that includes the determined one or more predetermined categories of ambient noise.
16. The mobile device of claim 11, wherein the processor is configured to anonymize the obtained information to remove private information by converting speech to text and generating a generalized description of the converted speech, wherein the anonymized information includes the generalized description of the speech converted to text.
17. The mobile device of claim 11, wherein the processor is configured to anonymize the obtained information to remove private information by editing an image captured by a video input of the mobile device to make images of individuals detected within the captured image unrecognizable, wherein the anonymized information includes the edited image.
18.
The mobile device of claim 11, wherein the processor is configured to anonymize the obtained information to remove private information by editing an image captured by a video input of the mobile device to make unrecognizable identifying information associated with an individual detected within the captured image, wherein the anonymized information includes the edited image.
19. The mobile device of claim 11, wherein the processor is configured to anonymize the obtained information to remove private information by determining which of one or more predetermined categories of visual elements are included within an image captured by a camera of the mobile device, wherein the anonymized information indicates the determined one or more predetermined categories of visual elements.
20. The mobile device of claim 11, wherein the processor is configured to anonymize the obtained information to remove private information by compiling a text description of one or more visual elements within an image captured by a camera of the mobile device, wherein the anonymized information indicates the compiled text description.
21. A mobile device, comprising: means for obtaining information for locating the mobile device from a sensor configured to obtain information regarding surroundings of the mobile device; means for anonymizing the obtained information to remove private information; and means for uploading the anonymized information to a remote server in response to determining that the mobile device may be misplaced.
22. The mobile device of claim 21, wherein means for uploading the anonymized information to a remote server comprises means for uploading the anonymized information to a remote server in response to determining that the mobile device may be misplaced.
23.
The mobile device of claim 21, wherein means for anonymizing the obtained information to remove private information comprises means for removing speech from an audio input of the mobile device to compile one or more samples of ambient noise from the surroundings of the mobile device for inclusion in the anonymized information.
24. The mobile device of claim 21, wherein means for anonymizing the obtained information to remove private information comprises means for determining which of one or more predetermined categories of ambient noise are included within an audio input of the mobile device, wherein the anonymized information indicates the determined one or more predetermined categories of ambient noise.
25. The mobile device of claim 21, wherein means for anonymizing the obtained information to remove private information comprises means for converting speech to text and generating a generalized description of the converted speech, wherein the anonymized information includes the generalized description of the speech converted to text.
26. The mobile device of claim 21, wherein means for anonymizing the obtained information to remove private information comprises means for editing an image captured by a video input of the mobile device to make images of individuals detected within the captured image unrecognizable, wherein the anonymized information includes the edited image.
27. The mobile device of claim 21, wherein means for anonymizing the obtained information to remove private information comprises means for editing an image captured by a video input of the mobile device to make unrecognizable identifying information associated with an individual detected within the captured image, wherein the anonymized information includes the edited image.
28.
The mobile device of claim 21, wherein means for anonymizing the obtained information to remove private information comprises means for determining which of one or more predetermined categories of visual elements are included within an image captured by a camera of the mobile device, wherein the anonymized information indicates the determined one or more predetermined categories of visual elements.
29. The mobile device of claim 21, wherein means for anonymizing the obtained information to remove private information comprises means for compiling a text description of one or more visual elements within an image captured by a camera of the mobile device, wherein the anonymized information indicates the compiled text description.
30. A non-transitory processor-readable medium having stored thereon processor-executable instructions configured to cause a processor of a mobile device to perform operations comprising: obtaining information useful for locating the mobile device from a sensor of the mobile device configured to obtain information regarding surroundings of the mobile device; anonymizing the obtained information to remove private information; and uploading the anonymized information to a remote server in response to determining that the mobile device may be misplaced.
TITLE
Locating Mobile Device Using Anonymized Information

RELATED APPLICATION
[0001] This application claims the benefit of priority from U.S. Patent Application No. 17/474,679, filed September 14, 2021; the entire contents of which are herein incorporated by reference.

BACKGROUND
[0002] Modern mobile devices, including cell phones, laptops, tablets, smart watches, and similar devices, come equipped with “find my phone” or similarly named locating features that use global navigation satellite system (GNSS) functionality, such as a Global Positioning System (GPS) receiver, to determine a last detected location of the mobile device in order to help a user locate the device when missing. However, the somewhat inaccurate nature of GPS can sometimes suggest that a mobile device was lost at a location where the mobile device is not actually located.

SUMMARY
[0003] Various aspects include methods, and mobile devices implementing the methods, of assisting a user in locating a mobile device executed by a processor of the mobile device. Various aspects may include obtaining information useful for locating the mobile device from a sensor of the mobile device configured to obtain information regarding surroundings of the mobile device, anonymizing the obtained information to remove private information, and uploading the anonymized information to a remote server. In some aspects, uploading the anonymized information to the remote server may include uploading the anonymized information to the remote server in response to determining that the mobile device may be misplaced.
In some aspects, anonymizing the obtained information to remove private information may include removing speech from an audio input of the mobile device to compile one or more samples of ambient noise from the surroundings of the mobile device for inclusion in the anonymized information.[0004] In some aspects, anonymizing the obtained information to remove private information includes determining which of one or more predetermined categories of ambient noise are included within an audio input of the mobile device, wherein the anonymized information indicates the determined one or more predetermined categories of ambient noise. In some aspects, the anonymized information may indicate a text description of ambient noise that includes the determined one or more predetermined categories of ambient noise. In some aspects, anonymizing the obtained information to remove private information may include converting speech to text and generating a generalized description of the converted speech, wherein the anonymized information includes the generalized description of the speech converted to text. In some aspects, anonymizing the obtained information to remove private information includes editing an image captured by a video input of the mobile device to make images of individuals detected within the captured image unrecognizable, wherein the anonymized information includes the edited image.[0005] In some aspects, anonymizing the obtained information to remove private information may include editing an image captured by a video input of the mobile device to make unrecognizable identifying information associated with an individual detected within the captured image, wherein the anonymized information includes the edited image. 
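The audio-anonymization aspects above (removing speech, keeping ambient-noise samples, and reporting predetermined noise categories) can be sketched as follows. This is a hypothetical illustration: the segment format, labels, and category set are invented, and a real implementation would rely on an on-device voice-activity detector and audio classifier rather than pre-labeled input.

```python
# Invented category set; the patent refers only to "predetermined categories".
AMBIENT_CATEGORIES = {"traffic", "television", "appliance", "birdsong"}

def anonymize_audio(segments):
    """Drop speech segments and summarize the remaining ambient noise.

    segments: list of (label, clip_id) pairs, assumed to come from an
    on-device audio classifier (a stand-in for the real pipeline).
    """
    # Keep only non-speech clips for inclusion in the anonymized information.
    ambient_samples = [clip for label, clip in segments if label != "speech"]
    # Compile a text description of the detected predetermined categories.
    categories = sorted({label for label, _ in segments
                         if label in AMBIENT_CATEGORIES})
    description = "Ambient noise detected: " + ", ".join(categories)
    return {"samples": ambient_samples, "description": description}

result = anonymize_audio([("speech", "clip1"), ("traffic", "clip2"),
                          ("television", "clip3")])
print(result["description"])  # -> Ambient noise detected: television, traffic
```

The returned description, rather than the raw recording, is what would be uploaded, so no recorded conversation leaves the device.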
In some aspects, anonymizing the obtained information to remove private information includes determining which of one or more predetermined categories of visual elements are included within an image captured by a camera of the mobile device, wherein the anonymized information indicates the determined one or more predetermined categories of visual elements. In some aspects, anonymizing the obtained information to remove private information may include compiling a text description of one or more visual elements within an image captured by a camera of the mobile device, wherein the anonymized information indicates the compiled text description.
[0006] Further aspects include a mobile device including a processor configured with processor-executable instructions to perform operations of any of the methods summarized above. Further aspects include a non-transitory processor-readable storage medium having stored thereon processor-executable software instructions configured to cause a processor to perform operations of any of the methods summarized above. Further aspects include a processing device for use in a mobile device and configured to perform operations of any of the methods summarized above.

BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments, and together with the general description given above and the detailed description given below, serve to explain the features of the various embodiments.
[0008] FIG. 1 is a schematic diagram illustrating example systems configured for assisting a user in locating a mobile device executed by a processor of the mobile device.
[0009] FIG. 2 is a schematic diagram illustrating components of an example system in a package for use in a mobile device in accordance with various embodiments.
[0010] FIG.
3 is a process flow diagram of an example method of assisting a user in locating a mobile device that may be executed by a processor of the mobile device according to various embodiments.
[0011] FIG. 4 is a component block diagram of a network server computing device suitable for use with various embodiments.
[0012] FIG. 5 is a component block diagram of a mobile device suitable for use with various embodiments.

DETAILED DESCRIPTION
[0013] Various aspects will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and embodiments are for illustrative purposes and are not intended to limit the scope of the various aspects or the claims.
[0014] Location information obtained by GNSS receivers (e.g., GPS) of a last known location of a mobile device can be useful for locating the device within a general location, such as at home or at a place of work. However, the accuracy (or “granularity”) of GNSS (e.g., GPS) location information means that a user must search within a relatively large area, which can be difficult in a location with many hiding spots, such as a home. Since the granularity of GNSS (e.g., GPS) location information may be too large to assist a user to find a lost mobile device in some locations, various embodiments include methods to make available to a user information about the environment in which the mobile device is located. In various embodiments, a mobile device may capture ambient audio and/or images of its surroundings, as well as obtain other environmental or contextual information (e.g., orientation, temperature, etc.), which are wirelessly transmitted to a remote server or similar repository, which retains the information in a format that can later be provided to a user in response to a query to help the user locate the mobile device.
Thus, GNSS (e.g., GPS) location information can lead a user to the general area in which the mobile device is present, while recorded images, sounds, and other contextual information can help the user pinpoint the location of the mobile device. However, many jurisdictions make it illegal to regularly record audio and/or images of people without their explicit consent due to privacy issues. For example, in many countries it is illegal to record conversations without the permission of the speakers. As another example, in some countries it is illegal to use pictures of individuals for commercial purposes without their permission. To address such legal restrictions, various embodiments include methods performed by a processor of the mobile device that analyze audio and/or images recorded by the mobile device, anonymize the obtained information to remove private information, and then upload the anonymized information to a remote server, which can later provide the anonymized information to the user to help locate the device.
[0015] As used herein, the term “mobile device” refers to a portable computing device with at least a processor, communication systems, and memory, particularly with wireless communication capabilities. For example, mobile devices may include any one or all of cellular telephones, smartphones, portable mobile devices, personal or mobile multi-media players, laptop computers, tablet computers, 2-in-1 laptop/tablet computers, smartbooks, ultrabooks, palmtop computers, wireless electronic mail receivers, multimedia Internet-enabled cellular telephones, wearable devices including smart watches, entertainment devices (e.g., wireless gaming controllers, music and video players, satellite radios, etc.), and similar electronic devices that include a memory, wireless communication components, and a programmable processor. In various embodiments, mobile devices may be configured with memory and/or storage.
Additionally, mobile devices referred to in various example embodiments may include or be coupled to wired or wireless communication capabilities implementing various embodiments, such as network transceiver(s) and antenna(s) configured to communicate with wireless communication networks.[0016] The term “system on chip” (SOC) is used herein to refer to a single integrated circuit (IC) chip that contains multiple resources and/or processors integrated on a single substrate. A single SOC may contain circuitry for digital, analog, mixed-signal, and radio-frequency functions. A single SOC may also include any number of general purpose and/or specialized processors (digital signal processors, modem processors, video processors, etc.), memory blocks (e.g., ROM, RAM, Flash, etc.), and resources (e.g., timers, voltage regulators, oscillators, etc.). SOCs may also include software for controlling the integrated resources and processors, as well as for controlling peripheral devices. [0017] The term “system in a package” (SIP) may be used herein to refer to a single module or package that contains multiple resources, computational units, cores and/or processors on two or more IC chips, substrates, or SOCs. For example, a SIP may include a single substrate on which multiple IC chips or semiconductor dies are stacked in a vertical configuration. Similarly, the SIP may include one or more multichip modules (MCMs) on which multiple ICs or semiconductor dies are packaged into a unifying substrate. A SIP may also include multiple independent SOCs coupled together via high-speed communication circuitry and packaged in close proximity, such as on a single motherboard or in a single wireless device.
The proximity of the SOCs facilitates high speed communications and the sharing of memory and resources.[0018] As used herein, the terms “component,” “system,” “unit,” “module,” and the like include a computer-related entity, such as, but not limited to, hardware, firmware, a combination of hardware and software, software, or software in execution, which are configured to perform particular operations or functions. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a communication device and the communication device may be referred to as a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one processor or core and/or distributed between two or more processors or cores. In addition, these components may execute from various non-transitory computer readable media having various instructions and/or data structures stored thereon. Components may communicate by way of local and/or remote processes, function or procedure calls, electronic signals, data packets, memory read/writes, and other known computer, processor, and/or process related communication methodologies.[0019] FIG. 1 illustrates an environment 100 with a mobile device 110 configured to assist a user in locating the mobile device 110 when it is lost, stolen, or otherwise needs to be found, in accordance with various embodiments. In particular, the mobile device 110 may be configured to obtain information useful for locating the mobile device 110 from a sensor of the mobile device 110. The sensor may be one or more sensors configured to collect data regarding surroundings of the mobile device 110, including sounds, imagery, and other sensor inputs from the things and conditions around the mobile device 110.
In various embodiments, the mobile device 110 may be configured to anonymize the obtained information to remove private information and comply with privacy regulations. The mobile device 110 may upload the anonymized information to one or more remote computing device(s) 190 (e.g., a server).[0020] As used herein, the term “anonymize” refers to the act of removing identifying particulars or details from recorded information, especially recorded sounds and images. For example, anonymizing recorded audio may include determining whether spoken words are included in the recorded sounds, and distorting such sounds when detected to render the words or voice of the speaker unrecognizable. As another example, when speech is detected, the anonymized information may be simply an indication that speech can be heard in the vicinity of the mobile device. In still images and recorded video, anonymizing may involve analyzing images to detect the presence of people, and then altering portions of images (e.g., masking over or blurring faces or other body parts).[0021] The remote computing device(s) 190 may be part of a cloud-based computing network configured to help the mobile device 110, and others like it, in assisting users in locating mobile devices. The remote computing device 190 may be configured to store the anonymized information for later access by the user (e.g., to find the mobile device that has gone missing). In this way, using a separate computing device (not illustrated), the user may later access the anonymized information from the remote computing device 190 and use that information in combination with GNSS/GPS coordinate information to locate the mobile device 110. [0022] In FIG. 1, the mobile device 110 may be a mobile device configured to include device locating functions (e.g., ‘Find My Phone’) for when the mobile device 110 is lost, stolen, and/or otherwise needs to be found.
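The image-anonymizing step described above (e.g., masking over or blurring faces) can be sketched as follows. This is a minimal illustration only, not part of the disclosed embodiments: it assumes a face-detection step has already produced bounding boxes, and it obscures each region of a grayscale image by replacing it with its average intensity.

```python
# Hypothetical sketch of image anonymization: each detected region
# (e.g., a face) is replaced by its mean intensity, making it
# unrecognizable. Face detection itself is assumed to have run already.

def mask_regions(image, boxes):
    """image: list of rows of grayscale pixel values (0-255).
    boxes: iterable of (top, left, bottom, right) regions to obscure.
    Returns a new image with each region flattened to its mean value."""
    out = [row[:] for row in image]  # copy so the original is untouched
    for top, left, bottom, right in boxes:
        pixels = [image[r][c]
                  for r in range(top, bottom)
                  for c in range(left, right)]
        mean = sum(pixels) // len(pixels)
        for r in range(top, bottom):
            for c in range(left, right):
                out[r][c] = mean  # region is now a uniform block
    return out
```

A production implementation would more likely blur or pixelate the region with an image-processing library, but the privacy effect, removing recognizable detail while preserving the rest of the scene, is the same.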
For example, at regular intervals or based on other triggering events (e.g., low battery threshold detected), the mobile device 110 may transmit its GPS information to the remote computing device 190 via a communication network 180. In addition, the mobile device 110 may use sensors to image surroundings, record sounds, and collect contextual information from the environment around the mobile device 110 that can be uploaded to a remote server from which the information may be obtained by a user via a system query to assist the user in locating the mobile device 110 at a later time.[0023] As a general term used herein, “contextual information” may be any form of information that would be useful to a user to help in locating the mobile device 110, and in particular may include ambient audio inputs captured by one or more microphone(s) 112 and/or imagery (e.g., photos and/or video) captured by one or more camera(s) 114. Additionally, the mobile device 110 may collect contextual information from other sensors 116 (e.g., decibel meter, photometer, accelerometer, gyroscope, lidar, and/or radar) to detect aspects of where the mobile device 110 is and whether or how it is moving.[0024] The microphone(s) 112 may be configured to receive audio inputs (i.e., sounds), which may include user utterances (i.e., speech) and/or background noise. The microphone(s) 112 may convert the received audio inputs to an electrical signal that may be provided to a processor 118 of the mobile device 110. Communicatively coupled between the microphone(s) 112 and the processor 118, or as part of the processor 118, the mobile device 110 may include audio hardware that converts the received electrical signals from the microphone(s) 112 using, for example, pulse code modulation (PCM).[0025] The camera(s) 114 may be configured to receive video inputs, which may include photographs or video of the things, people, and/or creatures in the surroundings.
The camera(s) 114 may convert the received video inputs to electrical signals that a mobile device processor 118 can analyze for content requiring anonymizing. The processor 118 of the mobile device may anonymize any detected private information (e.g., recorded audio data including speech and images including recognizable features of a person), and convert the anonymized information into digitized data packets for transmission.[0026] The mobile device 110 may be configured by machine-readable instructions, which may include one or more instruction modules. The instruction modules may include computer program modules. In particular, the instruction modules may include one or more of the location information acquisition module 130, the sensor input analysis module 140, the anonymizing information module 150, the anonymized information uploading module 160, and/or other instruction modules.[0027] The location information acquisition module 130 may be configured to obtain information from one or more sensors of the mobile device 110. For example, the location information acquisition module 130 may obtain the electrical signals from the microphone(s) 112 and/or audio hardware of the mobile device 110. Alternatively, the location information acquisition module 130 may obtain digital image data from the camera(s) 114 and/or the other sensors 116. In addition, the location information acquisition module 130 may transmit or make available the obtained information to the sensor input analysis module 140.[0028] The sensor input analysis module 140 may be configured to analyze any one or more of the converted sensor inputs from any sensor to detect contextual information in an environment from which the received sensor input was recorded by the mobile device 110.
The sensor input analysis module 140 may include more than one module, each dedicated to one or more functions (e.g., audio analysis, video analysis, other sensor analysis, etc.).[0029] The sensor input analysis module 140 may analyze the electrical signals from the microphone(s) 112 and/or audio hardware to distinguish and/or separate detected speech from ambient noise. Alternatively or additionally, the sensor input analysis module 140 may analyze the electrical signals from the microphone(s) 112 and/or audio hardware to recognize speech, such as performing voice recognition.[0030] In some embodiments, speech recognition techniques may be used to transcribe the sounds of the speaker’s voice into words and/or phrases that can be processed and stored by the mobile device 110. For example, the microphone(s) 112 of the mobile device 110 may record sounds of a conversation taking place near the mobile device. The processor 118 may then transcribe the recorded conversation sounds using speech recognition methods. Alternatively, speech recognition techniques may be used to detect that speech can be heard in the background, and include an indication of detected speech or a category of detected speech as the contextual information, avoiding transcribing the conversation as part of anonymizing the recorded audio. In this way, a quantified set of values and/or mathematical descriptions may be developed and configured to be used, under a specified set of circumstances, for computer-based predictive analysis of an audio signal for automatic speech recognition, which includes translation of spoken language into words, text, and/or phrases. Various embodiments use models for speech recognition that account for background noise, location, and other considerations.[0031] The sensor input analysis module 140 may extract, from the electrical signals provided by the microphone(s) 112 and/or audio hardware, the portion that represents background noise.
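The separation of speech from ambient noise described above can be sketched with a simple energy-based voice-activity heuristic: frames whose short-term energy exceeds a threshold are treated as likely speech and silenced, leaving only the low-energy ambient noise. A deployed system would use a trained voice-activity detector; the frame length and threshold below are illustrative assumptions only.

```python
# Hedged sketch of speech suppression: loud frames are assumed to be
# speech and zeroed out, so only ambient noise survives for upload.

def keep_ambient_noise(samples, frame_len=4, energy_threshold=1000.0):
    """Return a copy of `samples` with likely-speech frames silenced."""
    out = list(samples)
    for start in range(0, len(samples), frame_len):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame) / len(frame)
        if energy > energy_threshold:          # loud frame: assume speech
            for i in range(start, start + len(frame)):
                out[i] = 0                     # strip it from the recording
    return out
```

Note the design choice: rather than trying to recognize what was said, the sketch simply discards anything that looks like speech, which matches the privacy goal of retaining only the ambient sound of the surroundings.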
The extracted background noise may reflect ambient noise in the environment of the mobile device 110 without any accompanying speech that might contain private information, particularly information that could be subject to privacy laws and regulations. The sensor input analysis module 140 may then compile one or more samples of ambient noise from the surroundings of the mobile device 110 for inclusion in the anonymized information.[0032] The sensor input analysis module 140 may analyze the electrical signals from the camera(s) 114 to detect faces or other recognizable parts of individuals that may be present in the received video input. Alternatively or additionally, the sensor input analysis module 140 may analyze the electrical signals from the camera(s) 114 to detect text or symbols (names or logos) that may provide identifying information regarding individuals in the captured images. As a further alternative or addition, the sensor input analysis module 140 may analyze the electrical signals from the camera(s) 114 to classify the images or identifiable objects or things therein. As yet a further alternative or addition, the sensor input analysis module 140 may generate a text description of the images or identifiable objects or things therein and/or a determined classification thereof.[0033] Image processing in various embodiments may use neural networks, knowledge-based, appearance-based, template matching, and/or other techniques for detecting faces, logos, and/or text containing private information visible in an image or video. Knowledge-based systems may use a set of rules based on human knowledge about imaging in order to identify faces, text, logos, or almost any object. Feature-based systems may extract structural features from an image and use classification/differentiation to identify faces, text, logos, or almost any object.
Template matching uses pre-defined or parameterized facial templates to locate or detect faces, text, logos, or other objects by the correlation between the templates and input images. Appearance-based systems use a set of representative training facial images to select an appropriate facial model. Similarly, other systems and techniques may be used or may be included as part of the image processing software in order to detect and identify faces, text, or logos. In addition, using lidar, computer vision, and/or any other range imaging techniques (e.g., an RGB-D camera), along with object recognition software, a processor may recognize objects or a category of objects. Objects may be recognized or categorized by the processor from distance measurements alone, as well as with a combination of distance measurements (e.g., lidar) with more conventional object recognition sensors (e.g., a computer vision system or an RGB-D camera).[0034] Similar to the analysis of the audio and/or video inputs described above, the sensor input analysis module 140 may analyze the electrical signals from the other sensors 116 to identify characteristics of the surroundings of the mobile device 110. For example, the sensor input analysis module 140 may use electrical signals from a decibel meter to measure the noise level of the surroundings, a photometer to measure light levels of the surroundings, an accelerometer to measure how fast or whether the mobile device 110 is moving, a gyroscope to measure movement and/or orientation characteristics of the mobile device 110, and/or lidar and/or radar to detect the presence or characteristics of nearby objects.
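The template-matching technique mentioned above can be illustrated with a minimal search that slides a small template over an image and scores each placement by sum of squared differences (lower is a better match). Real face or logo detectors are far more sophisticated; this sketch is an illustrative assumption, not the disclosed implementation.

```python
# Minimal template-matching sketch: exhaustive sum-of-squared-differences
# search over all placements of a small template in a grayscale image.

def best_match(image, template):
    """Return the (row, col) of the best-matching placement of `template`."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

The returned position could then feed a masking step, i.e., the detected region is what gets blurred or blocked during anonymization.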
Any identified characteristics (i.e., contextual information) of the surroundings of the mobile device 110 may be included in the anonymized information compiled by the anonymizing information module 150.[0035] Based on the detected contextual information, the sensor input analysis module 140 may determine a category or type of environment in which the received sensor inputs were generated. For example, the type of environment may include quiet, music, chatter (i.e., one or more other voices), machinery, vehicle cabin (e.g., car, plane, train), office, home, etc. The category or type of environment in which the received sensor inputs were generated may then be included in the anonymized information compiled by the anonymizing information module 150.[0036] The anonymizing information module 150 may be configured to anonymize the information obtained by the location information acquisition module 130 and analyzed by the sensor input analysis module 140 to remove private information. For example, the anonymizing information module 150 may remove speech from an audio input of the mobile device to compile one or more samples of ambient noise from the surroundings for inclusion in the anonymized information. The anonymizing information module 150 may remove the speech, which was distinguished and/or separated by the sensor input analysis module 140. As a further example, the anonymizing information module 150 may classify the speech and/or ambient noise by comparing the speech and/or ambient noise to samples to determine the closest match(es) that share qualities or characteristics thereto. The classifications may be predetermined and generalized descriptions of the ambient noise, which will ensure no private information is retained. 
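The classification into predetermined, generalized descriptions described above might be sketched as a mapping from raw detected labels to a fixed allow-list of privacy-safe categories: anything not in the table (for example, a recognized name or brand) is simply dropped. The category table below is a hypothetical example, not taken from the disclosure.

```python
# Hypothetical allow-list mapping raw detections to generalized,
# privacy-safe environment categories. Unknown labels are discarded,
# so no private detail can leak into the anonymized information.

GENERAL_CATEGORIES = {
    "talking": "chatter", "voices": "chatter",
    "engine": "vehicle cabin", "road noise": "vehicle cabin",
    "keyboard": "office", "television": "home",
}

def describe_environment(detected_labels):
    """Reduce raw detections to generalized category names only."""
    categories = {GENERAL_CATEGORIES[label] for label in detected_labels
                  if label in GENERAL_CATEGORIES}
    return sorted(categories) or ["unknown"]
```

Because the output vocabulary is fixed in advance, the anonymized description can never contain more information than the allow-list itself permits.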
As a further example, the anonymizing information module 150 may generate a text description of the speech, ambient noise, and/or the determined classification thereof. In generating the text description, rules may be used that ensure no private information is included within the generated text description of the speech or ambient noise.[0037] The anonymizing information module 150 may edit captured images from the camera(s) 114 to make unrecognizable (e.g., blurring, blocking, or otherwise obscuring) one or more faces detected by the sensor input analysis module 140. Making faces unrecognizable is one way of removing private information (i.e., the identity of the individual(s)). Alternatively or additionally, the anonymizing information module 150 may edit the captured images from the camera(s) 114 to make detected text or symbols unrecognizable (e.g., blurring, blocking, or otherwise obscuring). Making text or symbols unrecognizable may ensure people’s names, employer names, and/or favorite brands are not included in the anonymized information. As another example, the anonymizing information module 150 may generate a text description of the images captured from the camera(s) 114 using the object recognition information determined by the sensor input analysis module 140. In generating the text description, rules may be used that ensure no private information is included within the generated text description of images. Alternatively, or additionally, the anonymizing information module 150 may generate a text description that includes a determined category of the images captured from the camera(s) 114.[0038] Whether audio, video, or other sensor data is analyzed by the sensor input analysis module 140 and/or anonymized by the anonymizing information module 150, the anonymizing information uploading module 160 may transmit the anonymized information to the remote computing device 190.
In particular, the anonymizing information uploading module 160 may transmit the anonymized information to a wireless transceiver (e.g., 170 in FIG. 2) of the mobile device 110, which a processor may use to communicate via one or more wired and/or wireless communication links 125 with the remote computing device 190. [0039] The transmitted anonymized information may also include additional information, such as what environment type was detected. The anonymized information may be transmitted on a schedule (every minute, hour, day, or some other interval). In addition, the anonymized information may be transmitted in response to certain conditions, such as when the mobile device battery is below a predetermined threshold (i.e., “low battery”) or when wireless connectivity has resumed after an extended period. As a further alternative, anonymized information may be transmitted after a predetermined number of failures in such transmission (e.g., 10 failures).[0040] The mobile device 110 may be communicatively coupled to peripheral device(s) (not shown) and configured to communicate with the remote computing device(s) 190 and/or other external resources (not shown) using the wireless transceiver and a communication network 180, such as a cellular communication network. The mobile device 110 may access the communication network 180 via one or more base stations, which in turn may be communicatively coupled to the remote computing device(s) 190 through wired and/or wireless connections. Similarly, the remote computing device(s) 190 may be configured to communicate with the mobile device 110 and/or the external resources using the wireless transceiver and the communication network 180.[0041] As described in more detail with reference to FIGS. 2 and 5, the mobile device 110 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to the mobile device 110.
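The transmission conditions of paragraph [0039] above (a schedule interval, a low-battery threshold, restored connectivity, or repeated transmission failures) can be combined into a single trigger check. All threshold values below are illustrative assumptions, not values from the specification.

```python
# Hedged sketch of an upload-trigger policy. Any one satisfied
# condition is enough to schedule a transmission attempt.

def should_upload(now, last_upload, battery_pct, connectivity_restored,
                  failed_attempts, interval_s=3600,
                  low_battery_pct=15, max_failures=10):
    """Return True when the anonymized information should be uploaded."""
    if now - last_upload >= interval_s:
        return True        # scheduled interval has elapsed
    if battery_pct <= low_battery_pct:
        return True        # capture location info before power is lost
    if connectivity_restored:
        return True        # flush the backlog after an extended outage
    if failed_attempts >= max_failures:
        return True        # retry after repeated transmission failures
    return False
```

Keeping the policy in one predicate makes it easy for an implementation to evaluate it both on a timer and in response to battery or connectivity events.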
For example, the mobile device 110 may include one or more processors configured to execute computer program modules similar to those in the machine-readable instructions of the remote computing device(s) 190 described above.[0042] As described in more detail with reference to FIG. 4, the remote computing device 190 may include one or more processors configured to execute computer program modules similar to those in the machine-readable instructions of the mobile device 110. By way of non-limiting examples, remote computing devices may include one or more of a server, desktop computer, a laptop computer, a hand held computer, a tablet computing platform, a NetBook, a smartphone, a gaming console, and/or other computing platforms. The remote computing device(s) 190 may also include electronic storage (e.g., 402 in FIG. 4), one or more processors (e.g., 408 in FIG. 4), and/or other components. The remote computing device(s) 190 may include communication lines or ports to enable the exchange of information with a network, other computing platforms, and many user mobile devices, such as the mobile device 110. Illustration of the remote computing device(s) 190 is not intended to be limiting. The remote computing device(s) 190 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to the remote computing device(s) 190.[0043] Electronic storage (e.g., 220, 258 in FIG. 2) may include non-transitory storage media that electronically stores information. The electronic storage media of electronic storage may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with the mobile device 110 or remote computing device(s) 190, respectively, and/or removable storage that is removably connectable thereto, for example, via a port (e.g., a Universal Serial Bus (USB) port, a FireWire port, etc.) or a drive (e.g., a disk drive, etc.).
The electronic storage may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storage may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Also, the electronic storage may store software algorithms, information determined by processor(s), and information received from the mobile device 110 or remote computing device(s) 190, respectively, that enables the mobile device 110 or remote computing device(s) 190, respectively, to function as described herein. [0044] Processor(s) (e.g., 118, 210, 212, 214, 218, 252, 260 in FIG. 2) may be configured to provide information processing capabilities in the mobile device 110 or remote computing device(s) 190, respectively. As such, the processor(s) may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor(s) are shown in FIG. 2 as a single entity, this is for illustrative purposes only. In some implementations, processor(s) may include a plurality of processing units.
These processing units may be physically located within the same device, or processor(s) may represent processing functionality of a plurality of devices, remote and/or local to one another, operating in coordination.[0045] The processor(s) may be configured to execute the location information acquisition module 130, the sensor input analysis module 140, the anonymizing information module 150, the anonymized information uploading module 160, and/or other instruction modules. Processor(s) (e.g., 118, 210, 212, 214, 218, 252, 260 in FIG. 2) may be configured to execute the location information acquisition module 130, the sensor input analysis module 140, the anonymizing information module 150, the anonymized information uploading module 160, and/or other instruction modules by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s). As used herein, the term “module” may refer to any component or set of components that perform the functionality attributed to the module. This may include one or more physical processors during execution of processor readable instructions, the processor readable instructions, circuitry, hardware, storage media, or any other components.[0046] The descriptions of the functionality provided by the location information acquisition module 130, the sensor input analysis module 140, the anonymizing information module 150, and the anonymized information uploading module 160 described above and below are for illustrative purposes and are not intended to be limiting, as those modules may provide more or less functionality than is described.
For example, functionality described as being performed by one or more of the location information acquisition module 130, the sensor input analysis module 140, the anonymizing information module 150, the anonymized information uploading module 160, and/or other instruction modules may be eliminated, and some or all of its functionality may be provided by other modules. As another example, the processor(s) may be configured to execute one or more additional modules that may perform some or all of the functionality attributed to the location information acquisition module 130, the sensor input analysis module 140, the anonymizing information module 150, the anonymized information uploading module 160, and/or other instruction modules.[0047] With reference to FIGS. 1 and 2, the illustrated example SIP 200 includes two SOCs 202, 204, a clock 205, a voltage regulator 206, a microphone 112, a camera 114, and a wireless transceiver 170. In some embodiments, the first SOC 202 operates as a central processing unit (CPU) of the wireless device that carries out the instructions of software application programs by performing the arithmetic, logical, control and input/output (I/O) operations specified by the instructions. In some embodiments, the second SOC 204 may operate as a specialized processing unit. For example, the second SOC 204 may operate as a specialized 5G processing unit responsible for managing high volume, high speed (e.g., 5 Gbps, etc.), and/or very high frequency short wavelength (e.g., 28 GHz mmWave spectrum, etc.)
communications.[0048] The first SOC 202 may include a digital signal processor (DSP) 210, a modem processor 212, a graphics processor 214, an application processor 216, one or more coprocessors 218 (e.g., vector co-processor) connected to one or more of the processors, memory 220, custom circuitry 222, system components and resources 224, an interconnection/bus module 226, one or more temperature sensors 230, a thermal management unit 232, and a thermal power envelope (TPE) component 234. The second SOC 204 may include a 5G modem processor 252, a power management unit 254, an interconnection/bus module 264, a plurality of mmWave transceivers 256, memory 258, and various additional processors 260, such as an applications processor, packet processor, etc.[0049] Each processor 118, 210, 212, 214, 218, 252, 260 may include one or more cores, and each processor/core may perform operations independent of the other processors/cores. For example, the first SOC 202 may include a processor that executes a first type of operating system (e.g., FreeBSD, LINUX, OS X, etc.) and a processor that executes a second type of operating system (e.g., MICROSOFT® WINDOWS 10®). In addition, any or all of the processors 118, 210, 212, 214, 218, 252, 260 may be included as part of a processor cluster architecture (e.g., a synchronous processor cluster architecture, an asynchronous or heterogeneous processor cluster architecture, etc.).[0050] The first SOC 202 and the second SOC 204 may include various system components, resources and custom circuitry for managing sensor data, analog-to-digital conversions, wireless data transmissions, and for performing other specialized operations, such as decoding data packets and processing encoded audio and video signals for rendering in a web browser.
For example, the system components and resources 224 of the first SOC 202 may include power amplifiers, voltage regulators, oscillators, phase-locked loops, peripheral bridges, data controllers, memory controllers, system controllers, access ports, timers, and other similar components used to support the processors and software clients running on a wireless device. The system components and resources 224 and/or custom circuitry 222 may also include circuitry to interface with peripheral devices, such as cameras, electronic displays, wireless communication devices, external memory chips, etc.[0051] The first SOC 202 and the second SOC 204 may communicate via interconnection/bus module 250. The various processors 118, 210, 212, 214, 218 may be interconnected to one or more memory elements 220, system components and resources 224, custom circuitry 222, and a thermal management unit 232 via an interconnection/bus module 226. Similarly, the processor 252 may be interconnected to the power management unit 254, the mmWave transceivers 256, memory 258, and various additional processors 260 via the interconnection/bus module 264. The interconnection/bus module 226, 250, 264 may include an array of reconfigurable logic gates and/or implement a bus architecture (e.g., CoreConnect, AMBA, etc.). Communications may be provided by advanced interconnects, such as high-performance networks-on-chip (NoCs).[0052] The first SOC 202 and/or second SOC 204 may further include an input/output module (not illustrated) for communicating with resources external to the SOC, such as a clock 205 and a voltage regulator 206.
Resources external to the SOC (e.g., clock 205, voltage regulator 206) may be shared by two or more of the internal SOC processors/cores.[0053] In addition to the example SIP 200 discussed above, various embodiments may be implemented in a wide variety of computing systems, which may include a single processor, multiple processors, multicore processors, or any combination thereof.[0054] Various embodiments may be implemented using a number of single processor and multiprocessor computer systems, including a system-on-chip (SOC) or system in a package (SIP). FIG. 2 illustrates an example computing system or SIP 200 architecture that may be used in a mobile device (e.g., 110), remote computing devices (e.g., 190), or other systems for implementing the various embodiments.[0055] FIG. 3 illustrates operations of a method 300 of assisting a user in locating a mobile device executed by a processor of the mobile device in accordance with various embodiments. With reference to FIGS. 1-3, the operations of the method 300 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of the method 300 are illustrated in FIG. 3 and described below is not intended to be limiting. [0056] In some embodiments, the method 300 may be implemented in one or more processors (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information) in response to instructions stored electronically on an electronic storage medium of a mobile device. The one or more processors may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of the method 300. For example, with reference to FIGS.
1-3, the operations of the method 300 may be performed by a processor (e.g., 118, 210, 212, 214, 218, 252, 260) of a computing device (e.g., 110, 190).[0057] FIG. 3 illustrates a method 300 in accordance with one or more implementations.[0058] In block 310, the processor of a mobile device (e.g., 110) may perform operations including obtaining information useful for locating the mobile device from a sensor (e.g., 112, 114, 116) of the mobile device configured to obtain information regarding surroundings of the mobile device. For example, a processor may use audio processing techniques that identify and separate speech from ambient noise within sounds detected by the microphone(s) of the mobile device. By distinguishing the speech from ambient noise, various embodiments may use information about either part of the audio input to generate anonymized information. In block 310, the processor of the mobile device may use the location information acquisition module (e.g., 130) to obtain information useful for locating the mobile device from the microphone(s) (e.g., 112), the camera(s) (e.g., 114), and/or the one or more other sensor(s) (e.g., 116). In various embodiments, means for performing the operations of block 310 may include a processor (e.g., 118, 210, 212, 214, 218, 252, 260) coupled to the microphone (e.g., 112), the camera (e.g., 114), other sensor(s) (e.g., 116) and electronic storage (e.g., 220, 258). [0059] In some embodiments, in block 310 the processor may use one or more sensor readings, such as readings from an ambient light sensor (e.g., ambient light present/absent, or a value/magnitude of ambient light) or an accelerometer (e.g., whether the mobile device periodically moves, such as when it is in someone's pocket or in a sofa while someone is sitting on the sofa).
In some embodiments, mathematical models may be used to determine/recognize what mobile device movements correspond to, such as being in the pocket of a walking person or a person in a car, or lying in a sofa seat while a person sits on the couch and breathes, shifts, gets up, etc. A gyroscope may provide readings of the device orientation, such as lying flat, standing upright, or some angle therebetween. In further embodiments, the sensor readings may be anonymized as well, or not.[0060] In block 312, the processor of a mobile device may perform operations including anonymizing the obtained information to remove private information. In some embodiments, a processor may further process the speech and/or ambient noise, separated using audio processing techniques, to strip away or eliminate private information contained in the obtained information. Conventional speech recognition systems strip away ambient noise to enhance speech recognition. In contrast, various embodiments may do the reverse by using the ambient noise after removing the speech. In this way, the detected speech is essentially subtracted from the audio input (i.e., the detected sounds) in order to strip away identifying voices and leave just ambient noise for inclusion in the anonymized information that gets uploaded to the server.[0061] In some embodiments, instead of using samples of the ambient noise as the anonymized information, a processor may apply a noise recognition model that determines a classification of the detected ambient noise, which may be saved as the anonymized information. In some embodiments, the classification of the detected ambient noise may be part of a text description of the ambient noise, which defines the anonymized information.
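As an illustrative sketch of such a noise-recognition step, the following Python function strips speech-flagged samples and maps the residual energy to a text category; the category labels and energy thresholds are assumptions for illustration only, not the actual noise recognition model of the disclosure:

```python
def classify_ambient_noise(samples, speech_mask):
    """Illustrative classifier: drop speech-flagged samples (subtracting the
    speech from the audio input), then map the residual ambient-noise energy
    to a coarse text description (assumed labels and thresholds)."""
    # Keep only samples NOT flagged as speech.
    ambient = [s for s, is_speech in zip(samples, speech_mask) if not is_speech]
    if not ambient:
        return "no ambient sound detected"
    # Mean energy of the residual ambient noise.
    energy = sum(s * s for s in ambient) / len(ambient)
    # Hypothetical thresholds standing in for a trained noise-recognition model.
    if energy < 0.01:
        return "no ambient sound detected"
    elif energy < 0.25:
        return "television is heard in the background"
    return "traffic noise is heard prominently"
```

The returned string, rather than the raw audio, would be the anonymized information that gets uploaded.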
In this way, the anonymized information may include descriptions like “television is heard in the background,” “traffic noise is heard prominently,” or “no ambient sound detected,” or, combining several signals, “humans, bright light, television present nearby.” In some embodiments, the same audio processing techniques may be used to identify the speech, but rather than saving an audio sample of the speech alone or a direct speech-to-text transcription, the mobile device may generate a basic description of what the audio sample contains, such as “speech is heard in the background.”[0062] In some embodiments, a processor may use an imaging/video scrubbing algorithm that identifies faces and body parts (such as for facial recognition, autofocusing of cameras, etc.) to identify the portions of an image containing person-recognizable features (e.g., the face, torso, etc.), and then erase, fuzz/defocus, or black out the pixels encompassing those portions of the image. Such processed images/video may be considered anonymized information that may be uploaded to the server.[0063] In some embodiments, the mobile device may have more than one camera, such as one on each side of the device. Various embodiments may consider/analyze what each camera captures (e.g., if the device is facing down, a front camera may show darkness, but the rear camera may show something else, and vice versa when the device is facing upward; both cameras may be dark when the mobile device is covered by one or more objects). In some embodiments, a processor may use a visual scrubbing algorithm that identifies text or brands (text recognition or image recognition), like name tags or logos, which the processor may obscure by erasing, fuzzing, defocusing, covering, etc.
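The pixel-blacking step described above might be sketched as follows; representing the detected features as rectangular (top, left, bottom, right) regions over a grayscale pixel grid is an assumption for illustration:

```python
def black_out_regions(image, regions):
    """Return a copy of `image` (a 2-D list of grayscale pixel values) with
    each (top, left, bottom, right) region blacked out -- an illustrative
    version of erasing person-recognizable features before upload."""
    scrubbed = [row[:] for row in image]  # copy so the original is untouched
    for top, left, bottom, right in regions:
        for y in range(top, bottom):
            for x in range(left, right):
                scrubbed[y][x] = 0  # black pixel
    return scrubbed
```

A real implementation would obtain the regions from a face/body or text detector and might fuzz or defocus rather than black out, as the text notes.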
In some embodiments, a processor may perform object recognition on objects detected in a visual image captured by a camera of the mobile device and generate a text description thereof and/or identify a category for any recognized objects, which text description and/or category may be included in the anonymized information.[0064] In block 312, the processor of the mobile device may anonymize the obtained information using the sensor input analysis module (e.g., 140) and the anonymizing information module (e.g., 150). In various embodiments, means for performing the operations of block 312 may include a processor (e.g., 118, 210, 212, 214, 218, 252, 260) coupled to electronic storage (e.g., 220, 258). [0065] In block 314, the processor of a mobile device may perform operations including uploading the anonymized information to a remote computing device (e.g., 190). In some embodiments, the processor may upload the anonymized information to the remote computing device periodically, such as every five minutes, once an hour, once a day, according to a predefined schedule, etc. In some embodiments, the processor may upload the anonymized information to the remote computing device in response to a trigger event, such as in response to a query, message, or ping seeking information on the location of the mobile device.[0066] In some embodiments, the processor may be configured to recognize conditions indicative that the mobile device may be misplaced, and upload the anonymized information to the remote computing device in response to determining that the mobile device may be misplaced. The processor may determine whether the mobile device is misplaced using any of the types of sensor data discussed above. Alternatively, or additionally, the determination as to whether the mobile device is misplaced may use additional resources of the mobile device.
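A minimal sketch of such a misplaced-device check, combining example heuristics; the one-hour idle limit and the 5% battery floor here are assumed illustrative values, not requirements of the disclosure:

```python
def maybe_misplaced(idle_seconds, battery_fraction, shutting_down,
                    idle_limit=3600, battery_floor=0.05):
    """Illustrative misplaced-device heuristic: the device is treated as
    possibly misplaced after prolonged non-use, when the battery is nearly
    depleted, or when it is powering off (all thresholds are assumptions)."""
    return (idle_seconds >= idle_limit
            or battery_fraction <= battery_floor
            or shutting_down)
```

When this check returns True, the processor would trigger the anonymize-and-upload operations of blocks 312 and 314.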
For example, after a predetermined period of non-use or immobility (e.g., changes or lack of changes in GPS coordinates), the mobile device may be considered misplaced. In addition, or alternatively, if a battery level of the mobile device falls below a predetermined threshold (e.g., 5%), the mobile device may be considered misplaced, since once the mobile device runs out of power it will no longer be able to upload information. In addition, or alternatively, if the mobile device is powering off or shutting down, the mobile device may be considered misplaced, since once the mobile device is turned off it will no longer be able to upload information. As yet a further addition or alternative, the mobile device may be considered misplaced in response to a user manually entering a command to upload anonymized information to the remote computing device.[0067] In block 314, the processor of the mobile device may output the results of the speech recognition analysis using a transceiver (e.g., 170) of the mobile device and/or the anonymized information uploading module (e.g., 160). In various embodiments, means for performing the operations of block 314 may include a processor (e.g., 118, 210, 212, 214, 218, 252, 260) coupled to electronic storage (e.g., 220, 258) and a transceiver (e.g., 170).[0068] In some embodiments, the processor may repeat any or all of the operations in blocks 310, 312, and 314 to repeatedly obtain audio, video, and other contextual information, anonymize the obtained information, and transmit the anonymized information to a remote computing device.[0069] Various embodiments (including, but not limited to, embodiments discussed above with reference to FIGS. 1-3) may be implemented on a variety of remote computing devices, an example of which is illustrated in FIG. 4 in the form of a server. With reference to FIGS.
1-4, the remote computing device 190 may include a processor 408 coupled to volatile memory 402 and a large capacity nonvolatile memory, such as a disk drive 403. The remote computing device 190 may also include a peripheral memory access device such as a floppy disc drive, compact disc (CD) or digital video disc (DVD) drive 406 coupled to the processor 408. The remote computing device 190 may also include network access ports 404 (or interfaces) coupled to the processor 408 for establishing data connections with a network, such as the Internet and/or a local area network coupled to other system computers and servers. The remote computing device 190 may include one or more antennas 407 for sending and receiving electromagnetic radiation that may be connected to a wireless communication link. The remote computing device 190 may include additional access ports, such as USB, Firewire, Thunderbolt, and the like for coupling to peripherals, external memory, or other devices.[0070] The various aspects (including, but not limited to, embodiments discussed above with reference to FIGS. 1-3) may be implemented on a variety of mobile devices, an example of which is illustrated in FIG. 5 in the form of a mobile device. With reference to FIGS. 1-5, the mobile device 110 may include a first SoC 202 (e.g., a SoC-CPU) coupled to a second SoC 204 (e.g., a 5G capable SoC) and a third SoC 506 (e.g., a C-V2X SoC configured for managing V2V, V2I, and V2P communications over D2D links, such as D2D links established in the dedicated Intelligent Transportation System (ITS) 5.9 GHz spectrum communications). The first, second, and/or third SoCs 202, 204, and 506 may be coupled to internal memory 516, a display 530, speakers 514, a microphone 112, and a wireless transceiver 170.
Additionally, the mobile device 110 may include one or more antennas 504 for sending and receiving electromagnetic radiation that may be connected to the wireless transceiver 170 (e.g., a wireless data link and/or cellular transceiver, etc.) coupled to one or more processors in the first, second, and/or third SoCs 202, 204, and 506. Mobile devices 110 may also include menu selection buttons or switches for receiving user inputs.[0071] Mobile devices 110 may additionally include a sound encoding/decoding (CODEC) circuit 510, which digitizes sound received from the microphone 112 into data packets suitable for wireless transmission and decodes received sound data packets to generate analog signals that are provided to the speaker to generate sound, and which may be used to analyze ambient noise or speech. Also, one or more of the processors in the first, second, and/or third SoCs 202, 204, and 506, the wireless transceiver 170, and the CODEC circuit 510 may include a digital signal processor (DSP) circuit (not shown separately).[0072] The processors implementing various embodiments may be any programmable microprocessor, microcomputer or multiple processor chip or chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions of the various aspects described in this application. In some communication devices, multiple processors may be provided, such as one processor dedicated to wireless communication functions and one processor dedicated to running other applications. Typically, software applications may be stored in the internal memory before they are accessed and loaded into the processor. The processor may include internal memory sufficient to store the application software instructions. [0073] Implementation examples are described in the following paragraphs.
While some of the following implementation examples are described in terms of example methods, further example implementations may include: the example methods discussed in the following paragraphs implemented by a mobile device including a processor configured to perform operations of the example methods; the example methods discussed in the following paragraphs implemented by a mobile device including a modem processor configured to perform operations of the example methods; the example methods discussed in the following paragraphs implemented by a mobile device including means for performing functions of the example methods; the example methods discussed in the following paragraphs implemented in a processor for use in a mobile device that is configured to perform the operations of the example methods; and the example methods discussed in the following paragraphs implemented as a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor or modem processor of a wireless device to perform the operations of the example methods.[0074] Example 1. A method of assisting a user in locating a mobile device executed by a processor of the mobile device, including: obtaining information useful for locating the mobile device from a sensor of the mobile device configured to obtain information regarding surroundings of the mobile device; anonymizing the obtained information to remove private information; and uploading the anonymized information to a remote server.[0075] Example 2. The method of example 1, in which uploading the anonymized information to a remote server includes uploading the anonymized information to a remote server in response to determining that the mobile device may be misplaced.[0076] Example 3.
The method of either of examples 1 or 2, in which anonymizing the obtained information to remove private information includes removing speech from an audio input of the mobile device to compile one or more samples of ambient noise from the surroundings of the mobile device for inclusion in the anonymized information.[0077] Example 4. The method of any of examples 1-3, in which anonymizing the obtained information to remove private information includes determining which of one or more predetermined categories of ambient noise are included within an audio input of the mobile device, in which the anonymized information indicates the determined one or more predetermined categories of ambient noise.[0078] Example 5. The method of any of examples 1-4, in which the anonymized information indicates a text description of ambient noise that includes the determined one or more predetermined categories of ambient noise.[0079] Example 6. The method of any of examples 1-5, in which anonymizing the obtained information to remove private information includes converting speech to text and generating a generalized description of the converted speech, in which the anonymized information includes the generalized description of the speech converted to text.[0080] Example 7. The method of any of examples 1-6, in which anonymizing the obtained information to remove private information includes editing an image captured by a video input of the mobile device to make images of individuals detected within the captured image unrecognizable, in which the anonymized information includes the edited image.[0081] Example 8. The method of any of examples 1-7, in which anonymizing the obtained information to remove private information includes editing an image captured by a video input of the mobile device to make unrecognizable identifying information associated with an individual detected within the captured image, in which the anonymized information includes the edited image.[0082] Example 9. 
The method of any of examples 1-8, in which anonymizing the obtained information to remove private information includes determining which of one or more predetermined categories of visual elements are included within an image captured by a camera of the mobile device, in which the anonymized information indicates the determined one or more predetermined categories of visual elements.[0083] Example 10. The method of any of examples 1-9, in which anonymizing the obtained information to remove private information includes compiling a text description of one or more visual elements within an image captured by a camera of the mobile device, in which the anonymized information indicates the compiled text description.[0084] A number of different cellular and mobile communication services and standards are available or contemplated in the future, all of which may implement and benefit from the various aspects. Such services and standards may include, e.g., third generation partnership project (3GPP), long term evolution (LTE) systems, third generation wireless mobile communication technology (3G), fourth generation wireless mobile communication technology (4G), fifth generation wireless mobile communication technology (5G), global system for mobile communications (GSM), universal mobile telecommunications system (UMTS), 3GSM, general packet radio service (GPRS), code division multiple access (CDMA) systems (e.g., cdmaOne, CDMA2000™), EDGE, advanced mobile phone system (AMPS), digital AMPS (IS-136/TDMA), evolution-data optimized (EV-DO), digital enhanced cordless telecommunications (DECT), Worldwide Interoperability for Microwave Access (WiMAX), wireless local area network (WLAN), Wi-Fi Protected Access I & II (WPA, WPA2), integrated digital enhanced network (iDEN), C-V2X, V2V, V2P, V2I, and V2N, etc. Each of these technologies involves, for example, the transmission and reception of voice, data, signaling, and/or content messages.
It should be understood that any references to terminology and/or technical details related to an individual telecommunication standard or technology are for illustrative purposes only, and are not intended to limit the scope of the claims to a particular communication system or technology unless specifically recited in the claim language.[0085] Various aspects illustrated and described are provided merely as examples to illustrate various features of the claims. However, features shown and described with respect to any given aspect are not necessarily limited to the associated aspect and may be used or combined with other aspects that are shown and described. Further, the claims are not intended to be limited by any one example aspect. For example, one or more of the operations of the methods may be substituted for or combined with one or more operations of the methods.[0086] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of various aspects must be performed in the order presented. As will be appreciated by one of skill in the art the order of operations in the foregoing aspects may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the operations; these words are used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an,” or “the” is not to be construed as limiting the element to the singular.[0087] Various illustrative logical blocks, modules, components, circuits, and algorithm operations described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. 
To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the claims.[0088] The hardware used to implement various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an ASIC, a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.[0089] In one or more aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable storage medium or non-transitory processor-readable storage medium.
The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module or processor-executable instructions, which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable storage media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable storage medium and/or computer-readable storage medium, which may be incorporated into a computer program product.[0090] The preceding description of the disclosed aspects is provided to enable any person skilled in the art to make or use the claims. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the claims.
Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
Systems and methods relate to a low-dropout (LDO) voltage regulator that receives a maximum supply voltage and provides a regulated voltage to a load, where the load may be a processing core of a multi-core processing system. A leakage current supply source includes a leakage current sensor to determine a leakage current demand of the load of the LDO voltage regulator and a leakage current supply circuit to supply the leakage current demand. In this manner, the leakage current supply source provides current assistance to the LDO voltage regulator, such that the LDO voltage regulator need supply only the dynamic current. Thus, the headroom voltage of the LDO voltage regulator, which is the difference between the maximum supply voltage and the regulated voltage, can be reduced. Reducing the headroom voltage allows a greater number of dynamic voltage and frequency scaling states for the load.
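The current split described in this abstract can be sketched as a simple budget: with the leakage-assist source enabled, the LDO pass device carries only the dynamic current. The milliamp units and function interface are illustrative assumptions:

```python
def ldo_current_budget(dynamic_ma, leakage_ma, assist_on):
    """Illustrative split of the load current between the leakage current
    supply source and the LDO pass device. With the assist enabled, the LDO
    supplies only the dynamic current; otherwise it carries both components.
    Returns (ldo_current_ma, assist_current_ma)."""
    assist_ma = leakage_ma if assist_on else 0.0
    ldo_ma = dynamic_ma + leakage_ma - assist_ma
    return ldo_ma, assist_ma
```

For example, a 10 mA dynamic / 2 mA leakage load leaves the LDO carrying 10 mA when the assist is on, versus 12 mA when it is off.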
1. A method of operating a low-dropout (LDO) voltage regulator, comprising: determining a leakage current demand of a load of the LDO voltage regulator; and supplying a leakage current from a leakage current supply source to meet the leakage current demand of the load of the LDO voltage regulator.
2. The method of claim 1, comprising receiving a maximum supply voltage as an input to the LDO voltage regulator and providing an output voltage to the load of the LDO voltage regulator, wherein supplying the leakage current from the leakage current supply source comprises reducing a headroom voltage of the LDO voltage regulator, wherein the headroom voltage of the LDO voltage regulator is the difference between the maximum supply voltage and the output voltage provided to the load of the LDO voltage regulator.
3. The method of claim 1, wherein determining the leakage current demand of the load of the LDO voltage regulator comprises sensing a leakage current of the load based on a temperature, a voltage, and a process corner associated with the load.
4. The method of claim 3, further comprising determining a frequency of a ring oscillator based on the sensed leakage current of the load.
5. The method of claim 1, further comprising converting the sensed leakage current to a digital code.
6. The method of claim 5, further comprising determining, from the digital code, a number of p-channel metal oxide semiconductor (PMOS) transistors to be turned on to supply the leakage current from the leakage current supply source.
7. The method of claim 6, comprising increasing the number of PMOS transistors to be turned on for higher values of the digital code and decreasing the number of PMOS transistors to be turned on for lower values of the digital code.
8. The method of claim 1, wherein the load of the LDO voltage regulator is a processing core of a multi-core processing system.
9. An apparatus comprising a leakage current supply source, which includes: a leakage current sensor configured to determine a leakage current demand of a load of a low-dropout (LDO) voltage regulator; and a leakage current supply circuit configured to supply a leakage current to meet the leakage current demand of the load of the LDO voltage regulator.
10. The apparatus of claim 9, wherein the LDO voltage regulator is configured to receive a maximum supply voltage and provide an output voltage to the load of the LDO voltage regulator.
11. The apparatus of claim 10, wherein the leakage current supply source is configured to reduce a headroom voltage of the LDO voltage regulator, wherein the headroom voltage of the LDO voltage regulator is the difference between the maximum supply voltage and the output voltage provided to the load of the LDO voltage regulator.
12. The apparatus of claim 9, wherein the leakage current sensor is configured to sense the leakage current demand of the load of the LDO voltage regulator based on a temperature, a voltage, and a process corner associated with the load.
13. The apparatus of claim 12, wherein the leakage current sensor comprises a ring oscillator, wherein a frequency of the ring oscillator is based on the sensed leakage current demand of the load.
14. The apparatus of claim 13, wherein the ring oscillator comprises an odd number of inverters connected in a ring.
15. The apparatus of claim 14, wherein the inverters are current-starved inverters based on head switches, foot switches, or a combination thereof configured to allow only leakage current to pass.
16. The apparatus of claim 14, wherein the inverters are differential inverters.
17. The apparatus of claim 12, wherein the leakage current sensor comprises an analog-to-digital converter (ADC) configured to convert the sensed leakage current demand into a digital code.
18. The apparatus of claim 12, further comprising a finite state machine (FSM) configured to determine, according to the digital code, a number of p-channel metal oxide semiconductor (PMOS) transistors to be turned on to supply the leakage current from the leakage current supply circuit.
19. The apparatus of claim 18, wherein the number of PMOS transistors to be turned on is higher for higher values of the digital code and lower for lower values of the digital code.
20. The apparatus of claim 9, wherein the load of the LDO voltage regulator is a processing core of a multi-core processing system.
21. The apparatus of claim 9, integrated into a device selected from the group consisting of a set-top box, a music player, a video player, an entertainment unit, a navigation device, a communication device, a personal digital assistant (PDA), a fixed location data unit, a mobile phone, and a computer.
22. A system comprising: means for determining a leakage current demand of a load of a means for regulating a voltage; and means for supplying a leakage current to meet the leakage current demand of the load.
23. The system of claim 22, wherein the means for determining the leakage current demand of the load comprises means for sensing the leakage current demand of the load based on a temperature, a voltage, and a process corner associated with the load.
24. The system of claim 23, further comprising means for converting the sensed leakage current demand into a digital code.
25. The system of claim 24, further comprising means for determining, based on the digital code, a number of p-channel metal oxide semiconductor (PMOS) transistors to be turned on to supply the leakage current demand of the load.
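The code-to-transistor-count mapping recited in claims 6-7 and 18-19 (higher digital code, more PMOS transistors on) can be sketched as follows; the 4-bit code width, the 16-transistor bank, and the proportional mapping are assumed example values, not taken from the claims:

```python
def pmos_count_from_code(code, bits=4, max_transistors=16):
    """Map the ADC's digital leakage code to a number of PMOS transistors to
    turn on in the leakage current supply circuit. The mapping is monotonic,
    matching the claimed rule: higher code -> more transistors on.
    The bit width and bank size are illustrative assumptions."""
    if not 0 <= code < (1 << bits):
        raise ValueError("code out of range")
    # Simple proportional (thermometer-style) mapping across the bank.
    return round(code * max_transistors / ((1 << bits) - 1))
```

An FSM in hardware would realize this as enable signals for the individual transistors rather than an arithmetic computation.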
Leakage current supply circuit for reducing low-dropout voltage regulator headroom

Technical field

The disclosed aspects relate to low-dropout (LDO) voltage regulators. More specifically, exemplary aspects involve reducing the headroom voltage of an LDO voltage regulator.

Background

Low-dropout (LDO) voltage regulators find use in integrated circuits where voltage regulation is required. For example, an LDO voltage regulator may be used to supply a voltage less than the maximum voltage to selected segments or components of the integrated circuit. An example environment in which an LDO voltage regulator may be deployed is a multi-processor or multi-core processing system including two or more processors or processing cores. Each core may be configured for an operating frequency or processing capability that is specific to the core, and therefore the power characteristics of the cores (e.g., power consumption at a desired operating frequency) may vary. For example, the maximum supply voltage may be provided to a core that is to be operated at its maximum performance or highest frequency, while the supply voltage may be reduced for cores operating at lower performance/frequency. An LDO voltage regulator can be used to supply a voltage less than the maximum voltage (also referred to herein as a regulated voltage) to some cores based on their individual power characteristics.

FIG. 1 illustrates a conventional multi-core processing system 100 including two or more cores depicted as cores 102a-m. The power head switches 106a-m may be closed or turned on to supply the maximum supply voltage (VDD 108) to the respective cores 102a-m when those cores operate at their maximum performance/frequencies. Where one or more cores may accept lower performance/frequencies, the corresponding power head switches 106a-m are opened or turned off, and the LDO voltage regulators 104a-m are used to provide lower regulated voltages to those cores.
Therefore, by controlling the power head switches 106a-m and the LDO voltage regulators 104a-m, a lower voltage can be supplied to a core. In this manner, the power consumption of the multi-core processing system 100 may be reduced.

The LDO voltage regulators 104a-m are designed to provide high bandwidth to achieve a fast response to rapid changes in current demand (or "di/dt", as known in the art), while avoiding voltage droops that are detrimental to the performance or speed of the corresponding cores 102a-m. To support the current demand, the LDO voltage regulators 104a-m are conventionally designed to have a large headroom voltage. However, in some cases a low headroom voltage is desired, which is difficult to achieve in conventional designs of LDO voltage regulators. The relevant features of a conventional LDO voltage regulator are explained with reference to FIG. 2.

FIG. 2 illustrates a detailed view of an example design of any of the LDO voltage regulators 104a-m. A reference voltage Vref 202 is received at one input of an operational amplifier 204, whose output is coupled to the gate of a p-channel or p-type metal oxide semiconductor (PMOS) transistor 206. The supply voltage VDD 108 (from FIG. 1) supplies the LDO voltage regulators 104a-m with an input voltage Vin 208, and the output voltage Vout 210 is the stable voltage supplied to the corresponding core. The output voltage Vout 210 is also fed back to the other input of the operational amplifier 204. The input voltage Vin 208 and the output voltage Vout 210 appear at the source and drain terminals of the PMOS transistor 206, respectively.
The corresponding cores 102a-m for the LDO voltage regulators 104a-m are also shown. The headroom of the LDO voltage regulators 104a-m is the difference between the input voltage Vin 208 (which, as would be expected, is the maximum voltage that supports the highest performance/speed of the corresponding core) and the desired output voltage Vout 210 (which corresponds to a lower voltage supporting the lower performance/speed of a corresponding core that is not operating at its maximum performance/operating frequency). It has been observed that making the headroom smaller provides more states of dynamic voltage and frequency scaling (DVFS), which results in energy optimization of the multi-core processing system 100. From the above, the headroom (Vin 208 minus Vout 210) represents the drain-to-source voltage (Vds) of the PMOS transistor 206.

Referring now to FIG. 3, a graph 300 is shown, which is a graphical representation of the variation of the load current 312 of any of the cores 102a-m of FIG. 1 as a function of the headroom, or Vds 310, of its corresponding LDO voltage regulator 104a-m. Referring to FIG. 2, it can be seen that the minimum voltage output from the operational amplifier 204 corresponds to the maximum gate-to-source voltage (Vgs) of the PMOS transistor 206. Curves 302, 304, 306, and 308 represent the variation of load current 312 with Vds 310 for various values of Vgs (in the illustrated example, for Vgs = 1V, 0.8V, 0.6V, and 0.4V, respectively). As mentioned above, a large number of DVFS states is desired, which may require reducing the headroom, or Vds 310. Considering PMOS transistor 206, for a particular width of PMOS transistor 206 and a particular value of Vgs (eg, any one of curves 302-308), PMOS transistor 206 can supply the load current 312 needed by the core only when Vds 310 is greater than a minimum value. As the width of PMOS transistor 206 increases, this minimum value of Vds 310 decreases.
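The headroom relationship described above (headroom = Vin 208 minus Vout 210 = Vds of the PMOS pass transistor) and its effect on the number of available DVFS states can be sketched in a short Python model. This is an illustrative sketch only; the supply voltage, step size, and minimum-output choice below are assumptions, not values from this disclosure.

```python
# Illustrative sketch: headroom of an LDO pass transistor and its effect on
# the number of available DVFS output levels. All numeric values below are
# assumed for illustration and do not come from this disclosure.

def headroom(vin: float, vout: float) -> float:
    """Headroom of the LDO, equal to the Vds of the PMOS pass transistor."""
    return vin - vout

def dvfs_states(vdd: float, min_headroom: float, step: float) -> int:
    """Count regulated output levels between an assumed minimum usable
    output (vdd / 2) and the highest reachable output (vdd - min_headroom),
    in increments of `step`."""
    v_min, v_max = vdd / 2, vdd - min_headroom
    if v_max < v_min:
        return 0
    return int((v_max - v_min) / step) + 1

# A smaller minimum headroom lets Vout approach Vin, yielding more states:
assert dvfs_states(1.0, min_headroom=0.05, step=0.05) > dvfs_states(1.0, min_headroom=0.20, step=0.05)
```

In this toy model, lowering `min_headroom` directly increases the count of usable regulated levels, which is the motivation for reducing Vds stated in the text.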
However, constraints such as the available area and bandwidth of the LDO voltage regulators 104a-m may impose a limit on increasing the width of the corresponding PMOS transistor 206. Considering the limited size and width of PMOS transistor 206 in the conventional LDO voltage regulators 104a-m, reducing Vds 310 places PMOS transistor 206 deeper into its active region (eg, corresponding to values of Vds 310 in FIG. 3 that fall between voltage 314 and voltage 316). For these lower values of Vds 310 between voltages 314 and 316, due to the steeper slope of load current 312 versus Vds 310 in these regions, the corresponding load current 312 through PMOS transistor 206 (indicated by currents 315 and 317, respectively) is extremely sensitive to power supply noise. In addition, such lower values of the load current 312 may not meet the current demand of the corresponding core. Thus, lowering Vds 310 to between voltages 314 and 316 may result in droops in the voltage supply, which are detrimental to the performance of the corresponding core. As such, it is not possible to reduce the headroom voltage, or Vds 310, to the desired level in the conventional LDO voltage regulators 104a-m.

Accordingly, the headroom voltage of the conventional LDO voltage regulators 104a-m, or the value of Vds 310, is often higher than required. In other words, referring back to FIG. 2, it may be difficult to raise Vout 210 closer to Vin 208 beyond a certain amount. This means that, in a conventional implementation, cores 102a-m that could operate at intermediate voltage values (where the intermediate voltage value falls between the maximum achievable Vout 210 and Vin 208) will end up operating at the maximum voltage Vin 208 (for example, by turning on the corresponding power head switches 106a-m and bypassing the LDO voltage regulators 104a-m).
Accordingly, due to limitations in the degree to which the headroom voltage Vds 310 of their corresponding LDO voltage regulators 104a-m may be reduced, one or more cores 102a-m may end up operating at a higher voltage (the maximum voltage Vin 208), even though it may be possible to operate the one or more cores 102a-m at a lower voltage (an intermediate voltage value). Correspondingly, the power and energy consumption of the one or more cores 102a-m increase.

As can be seen from the above discussion, a lower headroom voltage is needed but cannot be achieved in a conventional LDO voltage regulator.

Summary of the Invention

Exemplary aspects relate to systems and methods for reducing the headroom voltage of a low dropout (LDO) voltage regulator. The LDO voltage regulator receives the maximum supply voltage and provides a stable voltage to a load, where the load may be a processing core of a multi-core processing system. A leakage current supply includes a leakage current sensor for determining a leakage current demand of the load of the LDO voltage regulator and a leakage current supply circuit for supplying the leakage current demand. In this way, the leakage current supply provides current assistance to the LDO voltage regulator, so that the LDO voltage regulator can be designed to supply only dynamic current. Therefore, the headroom voltage of the LDO voltage regulator, which is the difference between the maximum supply voltage and the stable voltage, can be reduced.
Reducing the headroom voltage allows the load to have a greater number of dynamic frequency and voltage states.

For example, an exemplary aspect relates to a method of operating a low dropout (LDO) voltage regulator, the method comprising: determining a leakage current demand of a load of the LDO voltage regulator; and supplying a leakage current from a leakage current supply to meet the leakage current demand of the load of the LDO voltage regulator.

Another exemplary aspect relates to an apparatus that includes a leakage current supply comprising: a leakage current sensor configured to determine a leakage current demand of a load of a low dropout (LDO) voltage regulator; and a leakage current supply circuit configured to supply a leakage current to meet the leakage current demand of the load of the LDO voltage regulator.

Yet another exemplary aspect relates to a system that includes means for determining a leakage current demand of a load of a means for regulating a voltage, and means for supplying a leakage current to meet the leakage current demand of the load.

Description of the drawings

The accompanying figures are presented to assist in describing aspects of the invention and are provided merely for purposes of illustration and not as a limitation on those aspects.

FIG. 1 illustrates a conventional multi-core processing system including two or more cores and corresponding LDO voltage regulators.

FIG. 2 illustrates a conventional LDO voltage regulator.

FIG. 3 illustrates a plot of load current versus headroom voltage for a conventional LDO voltage regulator.

FIG. 4A illustrates a high-level block diagram of an exemplary leakage current supply providing current assistance to an LDO voltage regulator.

FIG. 4B illustrates a detailed view of the leakage current supply and LDO voltage regulator of FIG. 4A.

FIGS. 5A-E illustrate example embodiments of a leakage current sensor.

FIGS. 5F-G illustrate example embodiments of the differential inverters shown in FIGS.
5C-E.

FIG. 6 illustrates a graph of the frequency of a ring oscillator used to implement a leakage current sensor according to any of FIGS. 5A-E, versus the corresponding leakage current.

FIG. 7 illustrates a flow diagram of a process for operating an LDO voltage regulator using current assistance provided by an exemplary leakage current supply.

FIG. 8 illustrates an exemplary wireless device in which an aspect of the present invention may be advantageously used.

Detailed description

Aspects of the invention are disclosed in the following description and related drawings directed to specific aspects of the invention. Alternate aspects may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail, or will be omitted, so as not to obscure the relevant details of the invention.

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term "aspects of the invention" does not require that all aspects of the invention include the discussed feature, advantage, or mode of operation.

The terminology used herein is for the purpose of describing particular aspects only and is not intended to limit aspects of the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of the stated features, integers, steps, operations, elements, and/or components.
However, the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof is not precluded.

Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (eg, application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, the sequences of actions described herein can be considered to be embodied entirely within any form of computer-readable storage medium having stored therein a corresponding set of computer instructions that, upon execution, would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which are contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspect may be described herein as, for example, "logic configured to" perform the described action.

As previously discussed, an LDO voltage regulator is a circuit designed to receive an input voltage such as a maximum supply voltage and provide a lower, stable voltage to a load (eg, a processing core of a multi-core processing system). The LDO voltage regulator has a high bandwidth in order to achieve a fast response when supplying current to, for example, a processing core, where there may be high di/dt or rapid changes in current demand. Although the headroom voltage of the LDO voltage regulator may be increased in conventional implementations to accommodate high di/dt values beyond the baseline leakage current demand, the exemplary aspects are configured to avoid increasing the headroom voltage in this manner.
Instead, current assistance is introduced to provide the leakage current to the load (eg, core) of the LDO voltage regulator. This relieves the LDO voltage regulator of the burden of supplying the entire load current, and therefore the LDO voltage regulator can be designed to supply only the fast-changing (high di/dt) dynamic current, which is what the LDO voltage regulator is designed for. More specifically, a leakage current supply (which may be digitally controlled) is provided to supply the leakage current while the LDO voltage regulator (which may be an analog circuit) supplies the dynamic current. This allows the headroom voltage of the exemplary LDO voltage regulator to be lower.

It should be understood that the reference to a processing core of a multi-core processing system is merely by way of example and should not be construed as limiting. The technique of reducing the headroom voltage of an LDO voltage regulator can be applied to any system where an LDO voltage regulator is used to regulate the voltage to any load circuit or subsystem. For example, some integrated circuits may have different voltage islands, where a voltage island may contain subsystems that can function at a lower voltage than surrounding components. Such a subsystem may be the load of an LDO voltage regulator used for providing the desired voltage to the voltage island. While keeping such various scenarios in mind with regard to the exemplary aspects, the following description will focus on an LDO voltage regulator configured to provide a stable voltage to a generic load.

Referring to FIG. 4A, a high-level diagram of an exemplary aspect of a system 400 is illustrated. System 400 may be a multi-core processing system similar to multi-core processing system 100 of FIG. 1, with several processing cores (not shown in this view).
For each processing core in the system 400, a leakage current supply 402 is provided to supply a slowly varying current, or leakage current 403, and the LDO voltage regulator 404 is configured to provide a fast-varying current, or dynamic current 405, as before (eg, similar to FIG. 1). The total current demand, or load current (eg, of the processing core coupled to LDO voltage regulator 404), is the sum of the leakage current 403 and the dynamic current 405. The LDO voltage regulator 404 need only maintain a headroom sufficient to supply the dynamic current 405, which is less than the headroom that would be needed to supply the total current demand. Thus, the headroom of the exemplary LDO voltage regulator 404 can be reduced while still satisfying the total current demand of the processing core.

In some aspects, existing power head switches that would otherwise be unused (eg, the conventional power head switches 106a-m of FIG. 1, which are unused while the corresponding processing cores 102a-m are supplied with current by the corresponding LDO voltage regulators 104a-m) can be reused to implement the leakage current supply 402, which is configured to handle the slow current changes of the processing core or any other load.

In some aspects, the leakage current supply 402 is configured by taking into account the slow nature of the leakage current 403. Leakage current 403 is a slowly varying current because it depends only on the following parameters: temperature, voltage, and process variation. The nature of these parameters is briefly explained as follows. With respect to temperature, the temperature variation on a die cannot easily be controlled, because it is based on the ambient temperature and the amount of activity of the components integrated on the die. However, temperature changes are slow relative to the operating speed of the integrated circuit, allowing the integrated circuit enough time to adapt to temperature changes.
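The current split described above (total load current = leakage current 403 + dynamic current 405) can be sketched as follows. This is a minimal illustration; the current values used are assumptions, not taken from this disclosure.

```python
# Illustrative sketch: dividing the load current between the leakage current
# supply (slow component) and the LDO voltage regulator (fast component).
# The current values are assumptions for illustration only.

def split_load_current(i_load: float, i_leak: float) -> tuple:
    """Return (current from the leakage current supply, current from the LDO).
    The leakage supply covers the slowly varying leakage demand; the LDO
    covers only the remaining fast-changing dynamic current."""
    i_from_leak_supply = min(i_leak, i_load)
    i_from_ldo = i_load - i_from_leak_supply
    return i_from_leak_supply, i_from_ldo

# Example: 300 mA total demand, of which 120 mA is leakage.
leak, dyn = split_load_current(0.300, 0.120)
assert leak == 0.120 and abs(dyn - 0.180) < 1e-12
```

Because the LDO in this sketch carries only the dynamic remainder, its pass device can be sized (and its headroom set) for a smaller current than the total demand.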
Now considering voltage, voltage changes are more controllable, but the time taken to reach a new voltage set point is still slow relative to the operating speed of the integrated circuit. Finally, the process variation that can affect the leakage current is almost static within a die, because the process varies at most from die to die, and more likely between wafers or between lots. Therefore, the leakage current, which is based on parameters such as temperature, voltage, and process variation, changes relatively slowly. Exemplary embodiments of the leakage current supply 402 utilize this slowly varying nature of the leakage current, as will be explained in the following sections.

Referring now to FIG. 4B, a more detailed block diagram of the leakage current supply 402 and LDO voltage regulator 404 of the system 400 is illustrated. The leakage current supply 402 includes a leakage current sensor 406, an analog-to-digital converter (ADC) 408, a finite state machine (FSM) 410, and a leakage current supply circuit 412 comprising one or more PMOS transistors 412a-p. The leakage current sensor 406 is configured to sense a leakage current (an analog value) based on the above parameters of the semiconductor die on which system 400 is integrated, also referred to as the corresponding process, voltage, and temperature (PVT) inflection point of the die. The ADC 408 is configured to digitize the sensed leakage current. For example, ADC 408 may output a specific binary value, or digital code, that is a digital encoding of the sensed leakage current.

The FSM 410 may include any logic (eg, a state machine) that can be used to determine, corresponding to the digital code, the number of PMOS transistors that, in combination, will supply the sensed leakage current.
Thus, the FSM 410 may be used to selectively turn on a corresponding number of PMOS transistors 412a-p in the leakage current supply circuit 412 based on the digital code supplied by the ADC 408, corresponding to the leakage current sensed by the leakage current sensor 406. In an exemplary aspect, the leakage current supply 402 may be calibrated for particular parameter and maximum supply voltage values (eg, using a look-up table). Once calibrated, the leakage current sensor 406 can track changes in temperature and voltage and output the sensed leakage current based on these tracked changes. Correspondingly, the digital code output by the ADC 408 may change, and the FSM 410 may be used to turn on or turn off one or more additional PMOS transistors 412a-p of the leakage current supply circuit 412 based on whether the sensed leakage current has increased or decreased, respectively. For example, the number of PMOS transistors 412a-p to be turned on is generally higher for higher values of the digital code received from the ADC 408 and lower for lower values of the digital code. Thus, the FSM 410 may generally be configured to indicate that the number of conducting PMOS transistors 412a-p be increased or decreased accordingly based on higher or lower values of the digital code. In this way, the leakage current supplied by the leakage current supply 402 can be adjusted based on, for example, temperature changes.

The configuration of the LDO voltage regulator 404 is similar to that of the conventional LDO voltage regulators 104a-m discussed with reference to FIGS. 1 and 2. Without repeating the description of the similar features of LDO voltage regulators 104a-m: the LDO voltage regulator 404 receives an input voltage Vin 428 (which may be a positive or maximum supply voltage, such as VDD 108 of FIG. 1) and provides an output voltage Vout 430 to its load 432, where Vout 430 is a stable voltage lower than the input voltage Vin 428. In some aspects, load 432 may be a processing core (eg, any of cores 102a-m).
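The code-to-transistor-count behavior attributed to the FSM 410 above can be sketched as a simple monotonic mapping. The transistor count and code width below are assumptions for illustration; the disclosure does not specify them.

```python
# Illustrative sketch of logic in the spirit of FSM 410: the higher the
# digital code from the ADC (more sensed leakage), the more of the PMOS
# transistors 412a-p are turned on. N_PMOS and CODE_MAX are assumed values.

N_PMOS = 16      # assumed number of PMOS transistors 412a-p
CODE_MAX = 255   # assumed full-scale ADC code (8-bit)

def pmos_on_count(code: int) -> int:
    """Map a digital code to the number of PMOS transistors to turn on."""
    code = max(0, min(code, CODE_MAX))   # clamp to the ADC range
    return round(code * N_PMOS / CODE_MAX)

def pmos_states(code: int) -> list:
    """Per-transistor on/off states: the first pmos_on_count(code) are on."""
    n = pmos_on_count(code)
    return [i < n for i in range(N_PMOS)]

# Monotonic behavior: a higher code never turns on fewer transistors.
assert pmos_on_count(200) >= pmos_on_count(100)
assert pmos_on_count(0) == 0 and pmos_on_count(CODE_MAX) == N_PMOS
```

As the sensed leakage (and hence the code) rises or falls between updates, recomputing `pmos_states` turns additional transistors on or off, mirroring the incremental adjustment described in the text.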
The LDO voltage regulator 404 is also supplied with a reference voltage Vref 422, which is one input to an operational amplifier 424; the other input to the operational amplifier 424 is the output voltage Vout 430, provided via a feedback path. The output of operational amplifier 424 drives the gate of PMOS transistor 426, and the headroom voltage of LDO voltage regulator 404 (ie, Vin 428 minus Vout 430) corresponds to the drain-to-source voltage (Vds) of PMOS transistor 426. In contrast to the conventional LDO voltage regulators 104a-m, and as previously described, the current supplied to the load 432 is not entirely provided by the LDO voltage regulator 404. Rather, LDO voltage regulator 404 may provide only the fast-varying dynamic current to load 432, while the slowly changing leakage current may be supplied by leakage current supply 402 (wherein the amount or magnitude of the leakage current supplied, which is based on the leakage current demand of the corresponding load 432 or processing core, is proportional to the number of PMOS transistors 412a-p turned on in the leakage current supply circuit 412). Thus, the headroom voltage of the exemplary LDO voltage regulator 404 may be advantageously reduced compared to the conventional LDO voltage regulators 104a-m, allowing a greater number of DVFS states.

Referring now to FIGS. 5A-E, several example embodiments of the leakage current sensor 406 are illustrated. In general, the leakage current sensor 406 is designed to track changes across a wide range of temperatures and process inflection points, in order to accurately sense changes in leakage current as the temperature and process inflection point change. As previously described, voltage-based leakage current changes may be obtained from a lookup table (not specifically shown).
For example, once the leakage current has been adjusted based on temperature and process inflection point, the leakage current sensor 406 may be calibrated (eg, based on a lookup table) to adjust the sensed leakage current based on, for example, changes in Vin 428 and Vout 430. Therefore, in some aspects, the leakage current demand of the load 432 (eg, a processing core, such as cores 102a-m) may be accurately supplied by the leakage current supply 402 to meet a specific headroom voltage (ie, Vin 428 minus Vout 430) desired for the corresponding LDO voltage regulator 404. Accordingly, in some aspects, the leakage current sensed by the leakage current sensor 406 may be processed by the FSM 410 in order to determine the corresponding number of PMOS transistors 412a-p that will be conducting in the leakage current supply circuit 412.

Referring now to FIG. 5A, a first embodiment of the leakage current sensor 406 is shown. The leakage current sensor 406 of FIG. 5A includes a ring oscillator 500, which comprises an odd number (eg, three or more) of inverters 502a-m connected in a ring, wherein the output of inverter 502m is connected to the input of inverter 502a, for example, via a feedback path 512. Inverters 502a-m are current starved, in that the current through them is limited. For example, inverters 502a-m are current starved based on head switches, foot switches, or combinations thereof. As shown, the current through the inverters 502a-m is limited by a corresponding PMOS transistor or p-channel field effect transistor (PFET) 504a-m, which couples the positive supply voltage VDD 508 to the respective inverter 502a-m, and by a corresponding NMOS transistor or n-channel field effect transistor (NFET) 506a-m, which couples the respective inverter 502a-m to ground 510.
Each PFET 504a-m is configured as a head switch that is turned off (by tying its gate terminal to its source terminal), so that the only current passing through it is a leakage current. Similarly, each NFET 506a-m is configured as a foot switch that is turned off (by tying its gate terminal to its source terminal), so that the only current passing through it is a leakage current. Accordingly, only leakage currents are allowed to pass through the corresponding inverters 502a-m, causing the inverters to be current starved.

Consider a first current-starved inverter (eg, inverter 502a). When the input of inverter 502a switches to "1", the output of inverter 502a will discharge through the leakage current of the corresponding first NFET (ie, NFET 506a). On the other hand, when the input of the inverter 502a is "0", the output of the inverter 502a will be charged through the leakage current of the corresponding first PFET (ie, PFET 504a). Therefore, it can be seen that the slew rate of the output of the inverter 502a (ie, the rate at which the output of the inverter 502a rises or falls) is controlled by the leakage currents of PFET 504a and NFET 506a. Similarly, the slew rate of the output of each of the inverters 502a-m is controlled by the leakage currents of the corresponding PFETs 504a-m and NFETs 506a-m. Therefore, the frequency at which the ring oscillator 500 switches, or oscillates, depends on the leakage current through the inverters 502a-m. When the leakage current increases, for example, due to an increase in temperature, the frequency of the ring oscillator 500 increases. If, for example, a particular process inflection point of system 400 results in a smaller leakage current, then the frequency of ring oscillator 500 will be lower, and if the process inflection point causes a larger leakage current, the frequency of ring oscillator 500 will be higher.
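The dependence just described, in which the current-starved ring oscillator's frequency tracks the leakage current, can be sketched with a simple proportional model. The calibration constants below are hypothetical; in practice they would come from a calibration of the sensor.

```python
# Illustrative sketch: estimating leakage current from the measured
# ring-oscillator frequency, assuming the approximately proportional
# frequency-versus-leakage relationship described in the text.
# F_REF and I_REF are hypothetical calibration constants.

F_REF = 50e6    # assumed RO frequency at a calibration point, in Hz
I_REF = 10e-3   # assumed core leakage at the same point, in amperes

def sensed_leakage(f_measured: float) -> float:
    """Proportional model: I_leak ~= I_REF * (f_measured / F_REF)."""
    return I_REF * (f_measured / F_REF)

# A hotter die leaks more, so the current-starved oscillator runs faster,
# and a faster measured frequency implies a larger sensed leakage:
assert sensed_leakage(60e6) > sensed_leakage(50e6)
```

In such a scheme, only the frequency needs to be measured digitally; the leakage estimate then follows from the stored calibration point.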
Accordingly, the frequency of the ring oscillator 500 varies based on the leakage current. More specifically, the frequency of the ring oscillator 500 is proportional to the leakage current.

Referring to FIG. 6, a graph 600 is shown plotting the normalized ring oscillator (RO) frequency 602 of an example ring oscillator (eg, the ring oscillator 500 of FIG. 5A) versus a normalized core leakage current 604. The graph 600 contains sample points for various process inflection points (eg, slow-slow (ss), fast-fast (ff), typical-typical (tt)) plotted for different temperatures, eg, based on a simulation model. A processor core can be modeled as p-channel and n-channel devices (eg, p-channel field effect transistors or "PFETs" and n-channel field effect transistors or "NFETs") heavily connected to the maximum supply (eg, VDD 108) and ground voltages, and the core leakage current can be obtained from this model including the PFETs and NFETs.

The core leakage current can then be normalized relative to a typical-typical (tt) inflection point, specifically identified by reference numeral 606 (eg, where the temperature is 110°C) on the x-axis. The normalized value of the leakage current is plotted as the normalized core leakage current 604. On the y-axis, the frequency of the leakage current sensor 406, at various temperatures and for various process inflection points, is plotted as the normalized RO frequency 602, normalized at the same tt inflection point 606 at 110°C. From the graph 600, the changes in normalized RO frequency 602 and normalized core leakage current 604, across the different process inflection points sampled at varying temperatures, follow a substantially linear relationship.

Returning to FIG. 5A, the leakage current sensor 406 is configured to use a model such as the graph 600 of FIG.
6 (eg, the substantially linear relationship between RO frequency and leakage current) and to sense the leakage current by measuring the frequency of the ring oscillator 500. FIGS. 5B-E illustrate alternate embodiments of the leakage current sensor 406 that are similarly configured to sense the leakage current. These alternate embodiments will now be briefly explained in the following sections.

In FIG. 5B, a second embodiment of the leakage current sensor 406, including a ring oscillator 520, is shown. In ring oscillator 520, three or more inverters 502a-m are connected in a ring with feedback path 512, as in ring oscillator 500. However, in ring oscillator 520, the corresponding head switches formed by PFETs 504a-m and foot switches formed by NFETs 506a-m are not turned off. Specifically, the PFETs 504a-m and NFETs 506a-m of FIG. 5B, which are smaller in size than the corresponding PFETs 504a-m and NFETs 506a-m of FIG. 5A, supply leakage current to the inverters 502a-m based on additional circuitry (which may be used to compensate for the smaller sizes of PFETs 504a-m and NFETs 506a-m in FIG. 5B). The additional circuitry includes, for example, a first bias circuit, comprising PFET 528 (with its gate connected to its drain terminal) and NFET leakage device 522 (with its gate connected to its source terminal), connected to PFETs 504a-m in order to bias the gate voltage of PFETs 504a-m so that leakage current flows through PFETs 504a-m. Similarly, the additional circuitry also includes a second bias circuit, comprising PFET leakage device 524 (with its gate connected to its source terminal) and NFET leakage device 526 (with its gate connected to its drain terminal), connected to NFETs 506a-m in order to bias the gate voltage of NFETs 506a-m so that leakage current flows through NFETs 506a-m.
The first and second bias circuits may be used to additionally control the leakage current through inverters 502a-m.

In the embodiments of the leakage current sensor 406 shown in FIGS. 5C-E, the inverters are differential inverters that include positive and negative inputs and outputs. FIGS. 5F-G illustrate detailed implementations of example differential inverters that may be used in the embodiments of the leakage current sensor 406 shown in FIGS. 5C-E. In order for the ring oscillators in these embodiments to oscillate, each differential inverter is connected so that its output is inverted relative to its input. In these embodiments, the advantage of the differential design is that the sensed leakage current is not affected by power supply noise, which in turn means that the oscillation frequency depends only on the leakage current bias and not on the power supply. This simplifies the calibration of the leakage current sensor 406.

For example, FIG. 5C illustrates a third embodiment of the leakage current sensor 406 that includes a ring oscillator 530. The three or more differential inverters 532a-m of the ring oscillator 530 each have non-inverting and inverting inputs and outputs, wherein the non-inverting output of differential inverter 532m is connected via feedback path 538 to the inverting input of differential inverter 532a, and the inverting output of differential inverter 532m is connected via feedback path 537 to the non-inverting input of differential inverter 532a. Differential inverters 532a-m may be configured in at least two ways, eg, based on whether they are coupled to the positive supply voltage via a PFET (eg, PFET 534b) or to ground via an NFET (eg, NFET 536a). In the ring of the ring oscillator 530, the differential inverters 532a-m may alternate between the first and second configurations.
These two configurations of a differential inverter will now be described with reference to FIGS. 5F-G.

FIG. 5F shows a first configuration of an exemplary differential inverter, which may correspond to differential inverter 532a. Like differential inverter 532a, differential inverter 532c, also shown in FIG. 5C, may be configured in the first configuration; similarly, every alternate differential inverter may be configured in the first configuration. As shown in FIG. 5F, the differential inverter 532a may include a first inverter formed of NFET N1 and PFET P1 and a second inverter formed of NFET N2 and PFET P2. PFETs P1 and P2 may be diode-coupled, with their gate terminals connected to their drain terminals. The negative output 532a_no can be derived at the drain terminal of PFET P1, and the positive output 532a_po can be derived at the drain terminal of PFET P2. Positive input 532a_pi may be coupled to the gate terminal of NFET N1, and negative input 532a_ni may be coupled to the gate terminal of NFET N2. Thus, referring now to FIGS. 5C and 5F, it can be seen that feedback path 537 is coupled to positive input 532a_pi and feedback path 538 is coupled to negative input 532a_ni; whereas negative output 532a_no is coupled to wire 571 and positive output 532a_po is coupled to wire 573.

FIG. 5G shows a second configuration of another exemplary differential inverter, corresponding to differential inverter 532b. Although other inverters having similar configurations are not explicitly described, every alternate one of the differential inverters 532a-m may be similarly configured in the second configuration. As shown in FIG. 5G, differential inverter 532b may include a third inverter formed by NFET N3 and PFET P3 and a fourth inverter formed by NFET N4 and PFET P4. NFETs N3 and N4 may be diode-coupled, with their gate terminals connected to their drain terminals.
The negative output 532b_no may be derived at the drain terminal of NFET N3, and the positive output 532b_po may be derived at the drain terminal of NFET N4. Positive input 532b_pi may be coupled to the gate terminal of PFET P3, and negative input 532b_ni may be coupled to the gate terminal of PFET P4. Thus, with reference now to FIGS. 5C and 5G, it can be seen that wire 571 is coupled to positive input 532b_pi and wire 573 is coupled to negative input 532b_ni, whereas negative output 532b_no is coupled to wire 575 and positive output 532b_po is coupled to wire 577. With respect to the above configurations of the differential inverters 532a-m, it can be seen in FIG. 5C that, similar to FIG. 5B, a first bias circuit comprising PFET 528 and NFET leakage device 522 is coupled to the PFETs (eg, PFET 534b) that are coupled to the differential inverters having the second configuration, such as differential inverter 532b, while a second bias circuit comprising PFET leakage device 524 and NFET leakage device 522 is coupled to the NFETs (eg, NFETs 536a, 536c) that are coupled to the differential inverters having the first configuration, such as differential inverters 532a, 532c. In the ring oscillator 530, the leakage current of PFET 534b controls the rise and fall of the signal in the differential inverters having the second configuration, and the NFET leakage device 522 controls the rise and fall of the signal in the differential inverters having the first configuration. In alternative embodiments, the control of the first and second configurations may also be reversed.
The final differential inverter 532m of the ring oscillator 530 is shown connected to both PFET 534m and NFET 536m in order to demonstrate that NFET or PFET differential pairs may be used in alternative embodiments. FIG. 5D shows a fourth embodiment of a leakage current sensor 406 including a ring oscillator 540 comprising three or more differential inverters 532a-m of the type described with reference to ring oscillator 530 of FIG. 5C. In FIG. 5D, only NFETs 536a-m are shown for controlling the leakage current through the differential inverters 532a-m, which are configured in the first configuration shown in FIG. 5F. Alternating NFETs 536a-m may be biased with different bias voltages. For example, NFETs 536a, 536c, etc. are biased with a first bias voltage provided by one bias circuit, and NFET 536b and each other NFET not biased with the first bias voltage are biased with a second bias voltage provided by the other bias circuit. The currents of the first and second bias circuits are supplied by current mirrors including PFET 544 and NFET 542, as shown. FIG. 5E shows a fifth embodiment of a leakage current sensor 406 including a ring oscillator 550 that includes three or more differential inverters 532a-m of the type described with reference to the ring oscillators 530 and 540 of FIGS. 5C-D. Again, in FIG. 5E, only NFETs 536a-m are coupled to differential inverters 532a-m, which are all arranged in the first configuration of FIG. 5F, to control the leakage current through differential inverters 532a-m. A bias circuit including PFETs 564, 552, 554, and 556, PFET leakage device 524, and NFET leakage devices 522, 558, 562, and 526 provides the bias voltage for NFETs 536a-m. More specifically, the bias voltage for all of the NFETs 536a-m reflects the sum of the currents supplied by the NFET leakage device 522 and the PFET leakage device 524. PFETs 564 and 552 form a first current mirror to mirror NFET leakage device 522.
NFETs 562 and 558 and PFETs 556 and 554 form a second current mirror to mirror PFET leakage device 524. The sum of the currents mirrored by the first and second current mirrors is obtained by coupling the drains of PFETs 554 and 552 to the drain of NFET 526. This sum is used to provide the NFETs 536a-m with the aforementioned bias voltage. Referring back to FIG. 4B, the sensed leakage current (based on the ring oscillator frequency measurement), measured using any of the above five embodiments of the leakage current sensor 406 shown in FIGS. 5A-E or any other suitable alternative implementation, is supplied to ADC 408 as previously mentioned. The ADC 408 supplies a digital code corresponding to the sensed leakage current. Based on the value of the digital code, the aforementioned finite state machine FSM 410 may be used to determine the number of PMOS transistors 412a-p that are conducting in the leakage current supply circuit 412. Accordingly, it is to be understood that the exemplary aspects include various methods for performing the processes, functions, and/or algorithms disclosed herein. For example, FIG. 7 illustrates a method 700 of operating a low dropout (LDO) voltage regulator. Method 700 may include the following aspects. In block 702, the method 700 includes determining a leakage current demand of a load of the LDO voltage regulator. For example, using the leakage current sensor 406, the leakage current demand of the load 432 of the LDO voltage regulator 404 may be obtained (where, as will be appreciated, the LDO voltage regulator 404 may be configured to receive a maximum supply voltage and provide a stable voltage to the load 432). Leakage current sensor 406 may be configured in accordance with any of the example embodiments shown and described with reference to FIGS.
5A-E. In block 704, the method 700 includes supplying a leakage current from a leakage current supply to meet the leakage current demand of the load of the LDO voltage regulator. For example, the leakage current 403 of FIG. 4A may be supplied from the leakage current supply 402 to the load 432. More specifically, the FSM 410 may turn on the appropriate number of PMOS transistors 412a-p in the leakage current supply circuit 412 based on the digital code supplied by the ADC 408, which represents the amount of current sensed by the leakage current sensor 406. In this way, the LDO voltage regulator 404 may be designed to supply only dynamic or fast-varying currents to the load 432, and thus the headroom voltage of the LDO voltage regulator 404 (ie, the difference between the maximum supply voltage and the stable voltage) may advantageously be decreased, which in turn can increase the number of DVFS states available to the load 432. In addition, it will also be understood that one or more aspects of the present invention are directed to a system (eg, system 400, which may be a multi-core processing system or other integrated circuit) that includes means for regulating the voltage of a load, wherein the means for regulating the voltage of the load includes means for receiving a maximum supply voltage and means for providing a stable voltage to the load. For example, the means for regulating the voltage of the load may be the LDO voltage regulator 404, which receives a maximum supply voltage or input voltage Vin 428 (eg, which may be VDD 108 shown in FIG. 1) and provides an output voltage Vout 430 (which is a stable voltage lower than the input voltage Vin 428) to the load 432. The system may further include means for determining a leakage current demand of the load, which may include means for sensing the leakage current demand of the load of the LDO voltage regulator based on load-dependent temperature, voltage, and process corners.
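The FSM behavior in block 704 can be sketched as a simple mapping from the ADC digital code to the number of PMOS transistors 412a-p to enable. This is an illustrative model only: the code width, per-transistor current, full-scale current, and thermometer-style encoding are assumptions, not details from the disclosure.

```python
# Sketch of the block-704 control logic: the ADC digital code representing
# the sensed leakage demand selects how many of the PMOS transistors in
# the leakage current supply circuit are turned on, so the LDO only has
# to supply the dynamic current. All parameter values are assumed.

def pmos_enable_mask(adc_code, num_transistors=16, adc_bits=8,
                     i_per_transistor=1e-6, i_full_scale=16e-6):
    """Return (count, bitmask) of PMOS transistors to turn on for the
    leakage demand implied by adc_code."""
    # Convert the digital code back to the sensed leakage current.
    i_demand = (adc_code / ((1 << adc_bits) - 1)) * i_full_scale
    # Round to the nearest whole transistor, clamped to what exists.
    count = min(num_transistors, round(i_demand / i_per_transistor))
    # Thermometer-style mask: enable the lowest `count` transistors.
    return count, (1 << count) - 1

count, mask = pmos_enable_mask(adc_code=128)  # mid-scale leakage demand
```

A thermometer code is a natural choice here because the supply current must change monotonically with the sensed demand; a binary-weighted scheme would also work but can glitch at major code transitions.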
For example, the means for determining the leakage current demand of the load may include the leakage current sensor 406, which includes a ring oscillator for determining the leakage current based on load-dependent temperature, voltage, and process corners (eg, ring oscillators 500 to 550 shown in FIGS. 5A-E). In addition, means for converting the sensed leakage current demand into a digital code may be provided, such as the ADC 408. The system may further include means for supplying a leakage current to meet the leakage current demand of the load, which may include, for example, means for determining the number of p-channel metal oxide semiconductor (PMOS) transistors to be turned on to supply the leakage current demand of the load. For example, the means for supplying a leakage current may include the leakage current supply circuit 412 comprising one or more PMOS transistors 412a-p, and the FSM 410 may include means for determining, based on a digital code received from the ADC 408, the number of PMOS transistors 412a-p in the leakage current supply circuit 412 to be turned on to satisfy the leakage current demand. An example device in which the exemplary leakage current supply 402 may be deployed will now be discussed with respect to FIG. 8. FIG. 8 shows a block diagram of a wireless device 800 configured in accordance with an exemplary aspect. Wireless device 800 includes system 400, which may be a processing system including one or more processing cores. In FIG. 8, the leakage current supply 402 is shown supplying a leakage current 403 that provides current assistance to the LDO voltage regulator 404, which supplies a dynamic current 405 to a load that may be a processing core or other subsystem of the system 400. System 400 may be communicatively coupled to memory 810. FIG. 8 also shows a display controller 826 coupled to system 400 and display 828. A coder/decoder (codec) 834 (eg, an audio and/or speech codec) may be coupled to system 400.
Other components are also illustrated, such as wireless controller 840 (which may include a modem). Speaker 836 and microphone 838 may be coupled to codec 834. FIG. 8 also indicates that wireless controller 840 may be coupled to wireless antenna 842. In particular aspects, system 400, display controller 826, memory 810, codec 834, and wireless controller 840 are included in a system-in-package or system-on-chip device 822. In particular aspects, input device 830 and power supply 844 are coupled to the system-on-chip device 822. In addition, in certain aspects, as illustrated in FIG. 8, the display 828, the input device 830, the speaker 836, the microphone 838, the wireless antenna 842, and the power supply 844 are external to the system-on-chip device 822. However, each of display 828, input device 830, speaker 836, microphone 838, wireless antenna 842, and power supply 844 may be coupled to a component of the system-on-chip device 822, such as an interface or a controller. It should be noted that while FIG. 8 depicts a wireless communication device, system 400 and memory 810 may also be integrated into a set-top box, music player, video player, entertainment unit, navigation device, communication device, personal digital assistant (PDA), fixed location data unit, mobile phone, computer, or other similar device. Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Moreover, those skilled in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. The methods, sequences, and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in software modules executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. Accordingly, an aspect of the present invention may include a computer-readable medium embodying a method for reducing a headroom voltage of an LDO voltage regulator.
Thus, the present invention is not limited to the illustrated examples, and any means for performing the functionality described herein are included in aspects of the present invention. While the foregoing disclosure shows illustrative aspects of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps, and/or actions of the method claims in accordance with the aspects of the invention described herein need not be performed in any particular order. In addition, although elements of the invention may be described or claimed in the singular, the plural is also encompassed unless limitation to the singular is explicitly stated.
A key cache container provides secure storage of cryptographic keys and secure operation of cryptographic functionality for workload containers. A cryptographic call adapter in each workload container converts an application cryptographic operation request made by an application into a workload container cryptographic operation request sent to the key cache container. Secure provisioning of keys is implemented by a key brokerage service that acts as a proxy for a key management service. A secure enclave within the key cache container stores the keys in encrypted form, along with instructions for performing cryptographic operations. The key cache container provides a key handle associated with a cryptographic key to the requesting application, which uses the key handle in subsequent application cryptographic operation requests. The secure enclave is created and managed, using security-related instructions, in a security-enabled integrated circuit component that is part of the hardware platform of a computing system.
1. A method, comprising:
sending a workload container cryptographic operation request from the workload container to the key cache container, the workload container cryptographic operation request including a request for a cryptographic key; and
receiving, at the workload container from the key cache container, a key handle associated with the cryptographic key.
2. The method of claim 1, further comprising:
generating the workload container cryptographic operation request by an application operating within the workload container; and
providing the key handle to the application.
3. The method of claim 1, further comprising:
generating an application cryptographic operation request by an application operating within the workload container;
converting the application cryptographic operation request to the workload container cryptographic operation request; and
providing the key handle to the application.
4. The method of claim 1, wherein the workload container cryptographic operation request is a first workload container cryptographic operation request, the method further comprising:
sending a second workload container cryptographic operation request from the workload container to the key cache container, the second workload container cryptographic operation request including the key handle, information indicating a cryptographic operation to be performed by the key cache container, and input data; and
receiving, at the workload container from the key cache container, output data generated by the key cache container performing the cryptographic operation on the input data.
5. The method of claim 4, further comprising:
generating an application cryptographic operation request by an application operating in the workload container, the application cryptographic operation request including the key handle, the information indicating the cryptographic operation to be performed by the key cache container, and the input data; and
converting the application cryptographic operation request
to the workload container cryptographic operation request.
6. The method of claim 4, wherein the cryptographic key is a first cryptographic key, the workload container is a first workload container, the key handle is a first key handle, and the workload container cryptographic operation request is a first workload container cryptographic operation request, the method further comprising:
sending a second workload container cryptographic operation request from a second workload container to the key cache container, the second workload container cryptographic operation request including a request for a second cryptographic key; and
receiving, at the second workload container from the key cache container, a second key handle associated with the second cryptographic key.
7. The method of claim 6, wherein a first computing system hosts the first workload container and the key cache container, and a second computing system hosts the second workload container.
8. The method of claim 6, wherein the first workload container, the second workload container, and the key cache container are hosted on a computing system.
9. The method of claim 1, wherein the request for the cryptographic key comprises a request to generate the cryptographic key.
10. The method of claim 1, wherein the request for the cryptographic key comprises a request to load the cryptographic key.
11. The method of claim 1, wherein a container runtime engine provides an interface between the workload container and an operating system or a hypervisor.
12. A method, comprising:
receiving, at the key cache container, a workload container cryptographic operation request from a workload container, the workload container cryptographic operation request including a request for a cryptographic key; and
sending, from the key cache container to the workload container, a key handle associated with the cryptographic key.
13. The method of claim 12, further comprising:
requesting the cryptographic key from a key brokerage
service;
receiving the cryptographic key from the key brokerage service; and
storing the cryptographic key in encrypted form in a secure enclave.
14. The method of claim 13, wherein storing the cryptographic key in the encrypted form in the secure enclave utilizes security-related instructions of a processing unit of a computing system hosting the key cache container.
15. The method of claim 13, further comprising:
generating an enclave public-private key pair within the secure enclave, the enclave public-private key pair comprising an enclave public key and an enclave private key; and
providing a remotely verifiable signed claim to the key brokerage service for verification, the remotely verifiable signed claim comprising a hash of the enclave public key.
16. The method of claim 15, wherein the remotely verifiable signed claim further includes one or more secure enclave attributes.
17. The method of claim 13, further comprising:
generating an enclave public-private key pair within the secure enclave, the enclave public-private key pair comprising an enclave public key and an enclave private key;
generating a hash of the enclave public key; and
providing, to the key brokerage service for verification, a remotely verifiable signed claim and the hash of the enclave public key, the remotely verifiable signed claim including neither the enclave public key nor the hash of the enclave public key.
18. The method of claim 12, further comprising:
receiving, at the key cache container, a workload container cryptographic operation request from the workload container, the workload container cryptographic operation request including the key handle, information indicating a cryptographic operation to be performed by the key cache container, and input data;
performing the cryptographic operation on the input data within a secure enclave to generate output data, the cryptographic operation utilizing the cryptographic key associated with the key handle, the
cryptographic key being stored in the secure enclave; and
sending the output data from the key cache container to the workload container.
19. The method of claim 18, wherein the cryptographic key is a first cryptographic key, the workload container is a first workload container, the key handle is a first key handle, the input data is first input data, the output data is first output data, and the workload container cryptographic operation request is a first workload container cryptographic operation request, the method further comprising:
receiving, at the key cache container from a second workload container, a second key handle, information indicating a second cryptographic operation to be performed by the key cache container, and second input data;
performing the second cryptographic operation on the second input data within the secure enclave to generate second output data, the second cryptographic operation utilizing a second cryptographic key associated with the second key handle and stored in the secure enclave; and
sending the second output data from the key cache container to the second workload container.
20. The method of claim 19, wherein the second workload container and the key cache container are hosted by a computing system.
21. The method of claim 19, wherein the key cache container is hosted on a first computing system and the second workload container is hosted on a second computing system.
22. The method of claim 19, wherein performing the cryptographic operation utilizes security-related instructions of a processing unit of a computing system hosting the key cache container.
23.
The method of claim 19, wherein a container runtime engine provides an interface between the key cache container and an operating system or a hypervisor.
24. One or more non-transitory computer-readable storage media storing computer-executable instructions that, when executed, cause a computing system to perform the method of any of claims 1-23.
25. A computing system, comprising:
one or more processing units; and
one or more computer-readable storage media storing instructions that, when executed, cause the one or more processing units to perform the method of any of claims 1-23.
Secure key provisioning and hardware-assisted secure key storage and secure cryptographic function operation in container-based environments

Background

In some existing cloud-native computing environments, containers are deployed to computing systems under the control of an orchestrator. These containers contain applications that may require the use of cryptographic keys to perform various cryptographic functions, such as encryption and decryption. Some existing key management solutions use hardware security modules (HSMs) to perform cryptographic tasks. An HSM is a physical computing device or component that may take the form of a plug-in card or an external device attached directly to a computing system.

Brief Description of the Drawings

FIG. 1 is a block diagram of an example computing system for secure key provisioning and hardware-assisted secure key storage and secure cryptographic function operation.
FIG. 2 is a flowchart of a first example method of requesting a cryptographic key.
FIG. 3 is a flowchart of a second example method of requesting a cryptographic key.
FIG. 4 is a flowchart of a third example method of requesting a cryptographic key.
FIG. 5 is a flowchart of an example container deployment method.
FIG. 6 is a block diagram of an example computing system in which the techniques described herein may be implemented.
FIG. 7 is a block diagram of an example processor unit that may execute instructions as part of implementing the techniques described herein.

Detailed Description

In a container-based OS virtualization environment, it may be necessary to provision cryptographic keys to allow applications operating within a container to perform various cryptography-related tasks, such as Transport Layer Security (TLS) session establishment, digital signature generation, and encryption/decryption. For security reasons, these keys are not included in the container image but are instead provisioned at container runtime.
To ensure the security of a container-based application and any associated data, the keys should be securely provisioned to the computing system on which the container executes and should remain secure while in use. This includes preventing cryptographic keys from being loaded into the main memory (DRAM) of the computing system, where they may be vulnerable. Existing approaches for protecting cryptographic keys while they are in use rely on hardware security modules (HSMs), network HSMs, and trusted platform modules (TPMs), but operational, performance, and cost issues make it challenging to adopt these approaches in container-based environments. Disclosed herein are techniques for securely provisioning containers with cryptographic keys and for providing hardware-assisted secure key storage and secure cryptographic function operation in a container-based environment. A key cache container utilizes the security features of the host computing system's hardware platform to provide secure key storage and secure cryptographic function operation for other containers operating on the same (or a different) host computing system. In some embodiments, keys are securely stored in a secure enclave and cryptographic operations are securely performed in the secure enclave. In the following description, numerous specific details are set forth, but embodiments of the techniques described herein may be practiced without these specific details. Well-known circuits, structures, and techniques have not been shown in detail in order to avoid obscuring an understanding of this description. References to "an embodiment," "embodiments," "some embodiments," etc. indicate that the described embodiment(s) may include particular features, structures, or characteristics, but not every embodiment necessarily includes those particular features, structures, or characteristics. Some embodiments may have some, all, or none of the features described for other embodiments. "First," "second," "third," etc.
describe common objects and indicate that different instances of the same object are being referenced. Such adjectives do not imply that the objects so described must be in a given sequence, temporally or spatially, in ranking, or in any other manner. "Connected" may indicate that elements are in direct physical or electrical contact with each other, and "coupled" may indicate that elements cooperate or interact with each other, although the elements may or may not be in direct physical or electrical contact. The specification may use the phrases "in one embodiment," "in an embodiment," "in some embodiments," and/or "in various embodiments," each of which may refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As used herein, the term "integrated circuit assembly" refers to a packaged or unpackaged integrated circuit product. A packaged integrated circuit assembly includes one or more integrated circuits mounted on a package substrate. In one example, a packaged integrated circuit assembly includes one or more processor units mounted on a substrate that includes a solder ball grid array (BGA) on its outer surface. In one example of an unpackaged integrated circuit assembly, a single monolithic integrated circuit die includes solder bumps attached to contacts on the die. The solder bumps allow the die to be attached directly to a printed circuit board.
An integrated circuit component may include one or more of any computing system component described or referenced herein, or any other computing system component, such as a processor unit (eg, a system on a chip (SoC), processor core, Graphics processing unit (GPU), accelerator), I/O controller, chipset processor, memory or network interface controller.As used herein, the terms "operate," "execute," or "run" are used interchangeably as they relate to software or firmware associated with a system, device, platform, or resource, and may refer to storage on a computer that can be accessed by a computing system software or firmware in one or more computer-readable storage media that can be accessed by a computer, device, platform, or resource, even if the instructions contained in the software or firmware are not actively being executed by the computing system, device, platform, or resource.Reference is now made to the drawings, which are not necessarily to scale, wherein the use of like or the same numerals in the different figures indicates the same or similar parts. The use of similar or identical numerals in different figures does not imply that all figures including similar or identical numerals constitute a single or the same embodiment. The same numbers with different letter suffixes may represent different instances of similar components. The drawings generally illustrate the various embodiments discussed in this document by way of example and not limitation.In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the description. It may be evident, however, that novel embodiments may be practiced without these specific details. In other instances, various well-known structures and devices are shown in block diagram form in order to facilitate the description of these structures and devices. 
The intention is to cover all modifications, equivalents, and alternatives falling within the scope of the claims. FIG. 1 is a block diagram of an example computing system for secure key provisioning and hardware-assisted secure key storage and secure cryptographic function operation. System 100 is a container-based computing system that includes an orchestrator 104, a host computing system 108, a key brokerage service 112, a key management service 116, and an additional host computing system 122. Orchestrator 104 is responsible for managing system 100 and for various container-related tasks, such as deploying container images to host computing systems 108 and 122, monitoring the performance of deployed containers, and monitoring the utilization of host computing system 108 and 122 resources. The host computing system 108 includes a hardware platform 120 on which a computing environment 124 is built. Computing environment 124 includes a key cache container 128, a plurality of workload (or application) containers 132, an operating system 136, and a container runtime engine 140. Key cache container 128 and workload containers 132 are running instances of the key cache container image and workload container images deployed by orchestrator 104 to host computing system 108. Key cache container 128 and workload containers 132 operate on top of container runtime engine 140, which in turn operates within operating system 136. In some embodiments, the computing environment includes one or more container runtime engines that operate on top of a hypervisor (or virtual machine monitor (VMM)), thereby allowing different container runtime engines supporting different container types to operate on a single host computing system. The hypervisor may be a bare-metal (or Type 1) hypervisor that operates directly on the hardware platform of the computing system, or a hosted (or Type 2) hypervisor that operates as a software layer on top of an operating system.
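The interaction between a workload container 132 and the key cache container 128 can be sketched as follows. This is a simplified in-process illustration: the class and method names are hypothetical, HMAC-SHA256 stands in for an arbitrary cryptographic operation, and a real deployment would use inter-container transport and enclave-backed, encrypted key storage.

```python
# Minimal sketch of the key-handle flow: the application never receives
# key material, only an opaque handle that it passes back in later
# cryptographic operation requests. Names here are illustrative.
import hashlib
import hmac
import secrets

class KeyCacheContainer:
    def __init__(self):
        # handle -> key material; enclave-resident and encrypted in practice
        self._keys = {}

    def request_key(self, key_id):
        """Serve a 'generate key' request and return an opaque handle."""
        # key_id could index a key-management-service lookup; unused here.
        handle = secrets.token_hex(16)
        self._keys[handle] = secrets.token_bytes(32)
        return handle

    def perform_operation(self, handle, operation, data):
        """Serve an operation request that carries a key handle."""
        key = self._keys[handle]
        if operation == "hmac-sha256":
            return hmac.new(key, data, hashlib.sha256).digest()
        raise ValueError("unsupported operation: " + operation)

kcc = KeyCacheContainer()
handle = kcc.request_key("workload-a/signing-key")              # first request
tag = kcc.perform_operation(handle, "hmac-sha256", b"payload")  # second request
```

The handle indirection is what allows multiple workload containers to share one key cache container: each workload holds only handles, while key material stays inside the key cache container's secure storage.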
In some embodiments, the container-based system 100 utilizes a cloud-native computing approach and operates within a public cloud, private cloud, or hybrid cloud architecture. As used herein, the term "container image" refers to an encapsulation of a binary image of one or more applications along with any libraries, configuration settings, and any other information needed for the applications to execute. Container images can conform to any container image format, such as the Appc or LXC container image formats. As used herein, the term "container" refers to a running instance of a container image. The hardware platform 120 includes one or more security-enabled integrated circuit components. In some embodiments, a security-enabled integrated circuit component includes one or more processing units (eg, processor cores, CPUs, GPUs, XPUs) that support an instruction set comprising, in addition to a general-purpose instruction set, a set of security-related instructions. That is, in these embodiments, a security-enabled integrated circuit component is not a dedicated security processor. In some embodiments, the security-related instructions allow for the creation and operation of secure enclaves. A secure enclave comprises a region of main memory (eg, memory located outside the integrated circuit component responsible for executing applications on the computing system, such as DRAM) whose contents are encrypted and cannot be accessed by any software outside the secure enclave (eg, other processes, the operating system kernel, firmware). Encryption and decryption of secure enclave contents (eg, data, instructions) occurs "on the fly": encrypted secure enclave contents are decrypted as they are accessed, and information to be stored in the secure enclave is encrypted as it is written to the secure enclave.
Information read from or written to the secure enclave is decrypted or encrypted within the integrated circuit component that executes applications on the computing system, which ensures that secure enclave contents are encrypted as they travel across the interface between the integrated circuit component and main memory. In some embodiments, the contents of the secure enclave are encrypted with a key derived from one or more fixed keys programmed into the processing unit during manufacture. Thus, only the integrated circuit component comprising such a processing unit can decrypt the secure enclave contents. In some embodiments, a security-enabled integrated circuit component comprises one or more security-enabled processing units, a cache memory, and a memory controller that retrieves information from main memory to be placed in the cache memory and sends information evicted from the cache memory to main memory for storage. In addition to supporting a general-purpose instruction set, the security-enabled processing units support security-related instructions that enable the generation and operation of secure enclaves. The security-related instructions that enable the generation and operation of secure enclaves may include instructions to create an enclave, initialize an enclave, add pages to an enclave, remove pages from an enclave, enter an enclave, and exit an enclave. If the memory controller determines that the memory address of the region of main memory being read from or written to falls within the memory address range of the secure enclave, the memory controller encrypts information before writing it to the secure enclave and decrypts information read from the secure enclave upon receipt of the encrypted information. In some embodiments, the security-enabled integrated circuit component is a Software Guard Extensions (SGX)-enabled processor.
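A minimal software model can make the memory-controller behavior just described concrete: writes whose addresses fall within the secure enclave's address range are encrypted before reaching main memory, and reads from that range are decrypted on receipt. This is an illustrative sketch only; the class and the XOR-keystream cipher stand in for the hardware's AES-based memory encryption, and the fixed key stands in for a key programmed in during manufacture.

```python
import hashlib

class MemoryController:
    """Toy model of a security-enabled memory controller (illustrative only)."""

    def __init__(self, enclave_start: int, enclave_end: int):
        self.enclave_range = range(enclave_start, enclave_end)
        self.key = b"\x5a" * 32      # stands in for a key fixed at manufacture
        self.main_memory = {}        # address -> stored bytes

    def _keystream(self, addr: int, length: int) -> bytes:
        # Per-address keystream; real hardware uses an AES-based engine.
        return hashlib.sha256(self.key + addr.to_bytes(8, "little")).digest()[:length]

    def write(self, addr: int, data: bytes) -> None:
        if addr in self.enclave_range:  # address falls in the enclave range
            ks = self._keystream(addr, len(data))
            data = bytes(a ^ b for a, b in zip(data, ks))  # encrypt before storing
        self.main_memory[addr] = data

    def read(self, addr: int) -> bytes:
        data = self.main_memory[addr]
        if addr in self.enclave_range:
            ks = self._keystream(addr, len(data))
            data = bytes(a ^ b for a, b in zip(data, ks))  # decrypt on receipt
        return data
```

Data written inside the enclave range is stored only in encrypted form, while accesses outside the range pass through unchanged, mirroring the address-range check described above.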
In addition to a general-purpose instruction set, these processors support a security-related instruction set. An SGX-enabled processor includes one or more processing cores that support the SGX instruction set, which allows for the generation and support of secure enclaves. An SGX secure enclave resides in the Enclave Page Cache (EPC), which is located in a portion of main memory reserved to the processor and cannot be directly accessed by other software, including system software. In an SGX-enabled processor, encryption and decryption of information read from or written to the secure enclave may be performed by a Memory Encryption Engine (MEE) or memory encryption instruction set extensions located within the processor. In other embodiments, a security-enabled integrated circuit component includes one or more security-enabled processing units that support, in addition to a general-purpose instruction set operating within a standard (non-secure) partition of computing system resources, a set of security-related instructions that operate within a secure partition of computing system resources. In these embodiments, data and instructions attributed to the secure partition do not travel beyond the cache memory of the security-enabled integrated circuit component in the computing system memory hierarchy, which allows data and instructions attributed to the secure partition to remain unencrypted. In some embodiments, the security-enabled integrated circuit component is a dedicated security processing unit that is physically separate from other processing units that execute instructions outside the secure environment (eg, secure partition, secure enclave) of the computing system, such as a secure enclave coprocessor or security coprocessor. In addition to the security-related instruction set, a dedicated security processing unit may or may not support a general-purpose instruction set.
Instructions executed by the security processing unit and data acted upon by the security processing unit are stored encrypted in main memory; they are encrypted as they are written to main memory and decrypted as they are read from it. Encryption and decryption of data and instructions written to and read from main memory may be accelerated with a hardware cryptographic acceleration processing unit separate from the security processing unit. Container runtime engine 140 provides an interface between operating system 136 and key cache container 128 and workload containers 132. The container runtime engine 140 may be a Docker engine, LXC, an Open Container Initiative (OCI)-compliant container runtime (eg, Railcar, CRI-O), or any other container runtime engine. The operating system can be Windows or any other operating system for which a container runtime engine is available. In a particular container ecosystem, multiple container runtime engines may be available, with different container runtime engines providing standardized container interfaces to different operating systems. Thus, multiple container runtime engines together allow containers to be developed and deployed without dependencies on the host computing system's operating system/VMM or hardware. In some embodiments, key cache container 128 and workload containers 132 are Docker containers that are runtime instances of Docker container images, and container runtime engine 140 is a Docker engine. In some embodiments, key cache container 128 and workload containers 132 are deployed to computing system 108 as container images by the Kubernetes open-source container-orchestration system and are runtime instances of those container images. The container runtime engine 140 may be a Docker engine, containerd, or another container runtime engine that conforms to the Kubernetes container runtime interface.
In a Kubernetes environment, the key cache container 128 can be deployed as a DaemonSet, which ensures that all nodes within the Kubernetes cluster run an instance of the key cache container. Key cache container 128 securely provisions cryptographic keys requested by applications 144, securely stores the provisioned keys, and securely performs cryptographic operations using those keys. Thus, key cache container 128 provides resources that workload containers 132 can use for secure key protection and secure offloading of cryptographic functions. Provisioned keys 150 are stored in a secure enclave 148. The secure enclave 148 also stores instructions that, when executed, implement a cryptography engine 164. The cryptography engine 164 may perform various cryptography-related tasks, such as encryption, decryption, hashing, and digital signing. The key cache container 128 also includes an untrusted portion that performs tasks related to key provisioning, key storage, and management of the cryptography engine 164. In some embodiments, secure enclave 148 is created and managed by instructions in key cache container 128 that cause security-related instructions of a security-enabled processing unit in hardware platform 120 to be executed. One or more applications 144 and a cryptography call adapter 152 operate within each workload container 132. When an application 144 is to load a cryptographic key, generate a cryptographic key, or perform a cryptographic function, the application 144 invokes its associated cryptography call adapter 152. The cryptography call adapter 152 translates calls to the cryptography library against which the application 144 is linked (application cryptographic operation requests) into calls to the cryptography library implemented in the cryptography engine 164 (workload container cryptographic operation requests).
For example, if the application is linked against an OpenSSL or golang cryptography library and the cryptography engine 164 implements a Public-Key Cryptography Standards (PKCS) library (such as PKCS#11), the cryptography call adapter 152 may convert the application's OpenSSL or golang library API calls into PKCS#11 library API calls. If the cryptography library linked to the application 144 is the same as the one implemented in the cryptography engine 164, no translation is needed (the application cryptographic operation request is the same as the workload container cryptographic operation request), and the workload container 132 may either omit the cryptography call adapter 152 or have the adapter 152 pass the application's cryptography library calls through without conversion. Because application cryptography calls that might otherwise be accommodated by an HSM are routed to the cryptography engine 164, the cryptography engine 164 may be regarded as a software security module. The key cache container 128 and the workload containers 132 each further include an inter-process communication (IPC) module 160 that enables communication between the containers. In some embodiments, the IPC modules 160 utilize the open-source gRPC remote procedure call framework. In other embodiments, the IPC modules 160 may utilize network protocols (eg, HTTPS (Hypertext Transfer Protocol Secure), plain TCP over TLS) to provide inter-container communication. The IPC modules 160 may provide communication between containers operating on the same host computing system or on different host computing systems. A container may serialize information before sending it to another container and deserialize serialized information received from another container, and before information is sent across a communication channel between containers, the communication channel may be mutually authenticated.
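As a sketch of the adapter's translation step, the mapping below converts OpenSSL-style application calls into PKCS#11-style workload container requests. The table entries and the request-dict shape are illustrative assumptions, not the actual adapter 152 implementation; only the PKCS#11 function names (C_Encrypt, C_Decrypt, C_Sign) and the OpenSSL EVP call names are drawn from the real libraries.

```python
# Illustrative mapping from application-side (OpenSSL-style) calls to
# PKCS#11-style operations; the dict-based request format is an assumption.
APP_TO_PKCS11 = {
    "EVP_EncryptUpdate": "C_Encrypt",
    "EVP_DecryptUpdate": "C_Decrypt",
    "EVP_DigestSign": "C_Sign",
}

def adapt_call(app_call: str, key_handle: int, data: bytes) -> dict:
    """Translate an application cryptographic operation request into a
    workload container cryptographic operation request."""
    if app_call not in APP_TO_PKCS11:
        raise ValueError(f"no PKCS#11 mapping for {app_call!r}")
    return {"op": APP_TO_PKCS11[app_call], "key_handle": key_handle, "input": data}
```

An unmapped call is rejected rather than passed through, matching the adapter's role as the sole bridge between the two library interfaces.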
In some embodiments, information sent across communication channels between containers is encrypted. In some embodiments, cryptographic keys may be provisioned to an application as follows. The container images of the workload containers 132 are configured with key identifiers for the cryptographic keys that the applications 144 may need to access. A key identifier uniquely identifies a key to the key brokerage service 112. In some embodiments, the key identifier may be a Uniform Resource Identifier (URI). When an application 144 is to load or generate a cryptographic key, it makes an appropriate application cryptographic operation request to the cryptography call adapter 152 using the unique identifier of the key, and the workload container 132 converts the application cryptographic operation request into a workload container cryptographic operation request that includes the key's unique identifier and sends it to the key cache container 128. After receiving the workload container cryptographic operation request from the workload container 132, the key cache container 128 submits to the key brokerage service 112 a request for the cryptographic key that includes the key identifier provided by the workload container 132. The key cache container 128 also sends a signed and remotely verifiable claim that is generated for the secure enclave 148 and used by the key brokerage service 112 to authenticate the enclave 148. The signed claim contains one or more enclave attributes and contains the enclave public key, which is part of an enclave public-private key pair generated within the secure enclave 148. In some embodiments, the signed claim includes a hash of the enclave public key rather than the enclave public key itself. Before the key cache container 128 submits the request for the cryptographic key to the key brokerage service 112, the channel between the key brokerage service 112 and the key cache container 128 may be mutually authenticated.
Information sent along the communication channel between key brokerage service 112 and key cache container 128 may be encrypted. In some embodiments, the communication channel is encrypted using HTTPS. Before providing the requested key to the key cache container 128, the key brokerage service 112 performs verification of the signed claim provided by the key cache container 128. The key brokerage service 112 may confirm that the signed claim was generated in a secure enclave by sending the signed claim to an attestation service. Once the signed claim has been verified by the attestation service, the key brokerage service 112 ensures that the enclave public key is authentic: either the enclave public key provided by the key cache container 128 is included in the signed claim, or, if a hash of the enclave public key is included in the signed claim and the enclave public key is provided separately from the signed claim, the hash of the enclave public key included in the signed claim matches the hash of the enclave public key generated by the attestation service. In some embodiments, the attestation service may be provided by a vendor of the security-enabled integrated circuit component, and the attestation service may attest that the signed claim was generated within a secure enclave created by an integrated circuit provided by the vendor.
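The hash-binding check performed by the key brokerage service can be sketched as follows. Real SGX claims are produced and signed in hardware and verified by an attestation service; this model keeps only the final step, assuming a claim that embeds a SHA-256 hash of the enclave public key. The function names are hypothetical.

```python
import hashlib

def make_signed_claim(enclave_pubkey: bytes, attributes: dict) -> dict:
    """Simplified enclave claim embedding a hash of the enclave public key."""
    return {
        "attributes": attributes,
        "pubkey_hash": hashlib.sha256(enclave_pubkey).hexdigest(),
    }

def broker_verifies(claim: dict, presented_pubkey: bytes) -> bool:
    """Check that the separately presented enclave public key matches the
    hash embedded in the (attested) claim."""
    return hashlib.sha256(presented_pubkey).hexdigest() == claim["pubkey_hash"]
```

A key presented with the claim that hashes to a different value is rejected, so a verified claim binds the subsequent key exchange to the attested enclave.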
In embodiments in which the security-enabled integrated circuit component is an SGX-enabled processor, an SGX attestation infrastructure, such as the SGX Data Center Attestation Primitives (SGX DCAP) or the SGX attestation features available in the Intel Security Library (Intel SecL-DC), may perform attestation on the signed claim provided by the key cache container 128. After successful verification, key brokerage service 112 sends a request for the key requested by key cache container 128 to key management service 116, and key management service 116 provides the requested key to the key brokerage service 112 in response to the request. In response to the request, the key management service 116 may generate a new key or provide an already-generated key. In some embodiments, key management service 116 does not store newly generated keys, with system 100 instead relying on secure enclave 148 for secure key storage. In other embodiments, the key management service 116 securely stores the generated keys (eg, in an HSM local to the key management service 116). Regardless of whether the requested key is newly generated or was already generated by the key management service 116, the key brokerage service 112 encrypts the requested key with a symmetric encryption key known only to the key brokerage service 112 and the key cache container 128, and sends the encrypted requested key to the key cache container 128. The key cache container 128 then decrypts the encrypted requested key with the symmetric encryption key. If the symmetric encryption key used to encrypt the requested key has not yet been generated, the key brokerage service 112 generates a symmetric encryption key, encrypts it with the enclave public key, and sends the encrypted symmetric encryption key to the key cache container 128.
The secure enclave 148 then decrypts the symmetric encryption key with the enclave private key, and the symmetric encryption key is thereafter known to both the secure enclave 148 and the key brokerage service 112. In some embodiments, the symmetric key is generated by the key management service 116 and encrypted with the enclave public key. In this case, the key brokerage service 112 sees only encrypted versions of the keys requested by the key cache container 128. Upon receipt and decryption of the requested key, key cache container 128 generates a key handle for the requested key and provides the key handle to the workload container 132. The workload container 132 passes the key handle to the application via a response to the application cryptographic operation request that requested the key. The provisioned key is securely stored as one of the keys 150 in the secure enclave 148. A key handle may be any kind of identifier or data structure that key cache container 128 can use to uniquely identify a key among the keys 150 stored in secure enclave 148. To utilize a stored key 150 for a cryptographic operation, a workload container 132 provides the corresponding key handle to the key cache container 128. Because multiple workload containers 132 can communicate with a single key cache container 128, the key cache container 128 restricts the use of a key 150 to the workload container 132 that originally requested the key. Key brokerage service 112 may be any type of resource accessible to key cache container 128. For example, key brokerage service 112 may be a remote service (eg, provided by a remote server) available over a network connection. In some embodiments, key brokerage service 112 may be a service operating within a container under the management of orchestrator 104.
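The key-handle bookkeeping described above, including the restriction of a key to the workload container that originally requested it, might look like the following sketch. The class and its storage are illustrative assumptions; a real key cache container would hold the keys inside the secure enclave and receive them encrypted from the key brokerage service.

```python
import secrets

class KeyCacheContainer:
    """Illustrative key-handle registry; names and structure are assumptions."""

    def __init__(self):
        self._keys = {}  # key handle -> (owner container id, key bytes)

    def provision_key(self, container_id: str, key_identifier: str) -> int:
        # A fresh random key stands in for a key provisioned via the
        # key brokerage service using `key_identifier`.
        key = secrets.token_bytes(32)
        handle = len(self._keys) + 1          # any unique identifier works
        self._keys[handle] = (container_id, key)
        return handle

    def use_key(self, container_id: str, handle: int) -> bytes:
        owner, key = self._keys[handle]
        if owner != container_id:             # only the original requester may use it
            raise PermissionError("key handle not owned by this workload container")
        return key
```

The ownership check on each use is what allows one key cache container to serve many workload containers without letting one container exercise another's keys.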
Thus, in a container embodiment, the key brokerage service 112 may execute on the same host computing system as the key cache container 128 or on one of the additional host computing systems 122. Key management service 116 may be a remote service (eg, a remote server) accessible to key brokerage service 112 that provides cryptographic keys to key brokerage service 112. Having two separate entities (key brokerage service 112 and key management service 116) handle key generation, attestation, and secure key provisioning allows an existing key management service (eg, an open-source or proprietary key management service) to be used as key management service 116. That is, existing key management services may not be capable of performing secure enclave attestation. However, in some embodiments, a single entity, such as a key provisioning service, may perform both the secure enclave attestation and key generation functions. Once the requested cryptographic key has been provisioned and stored as one of the keys 150 in the secure enclave 148, and a key handle corresponding to the provisioned key has been provided to the workload container 132 of the requester application 144, cryptographic operations may be performed for the application 144 as follows. The application 144 makes an application cryptographic operation request to the cryptography call adapter 152, identifying the cryptographic key to be used in the requested cryptographic operation by the key's unique identifier (eg, a key handle). The workload container 132 converts the application cryptographic operation request into a workload container cryptographic operation request that includes the key handle associated with the key, information identifying the requested cryptographic operation to be performed, and any input data that the requested cryptographic operation is to act on, and sends it to the key cache container 128.
The key handle, cryptographic operation identification information, and input data are processed by the untrusted portion of the key cache container 128, and the requested cryptographic operation is performed on the input data within the secure enclave 148 to generate output data. The output data is sent by the key cache container 128 to the requester workload container 132 and passed to the requester application 144. The following example illustrates the performance of a cryptographic operation for an application utilizing the technologies described herein. Consider system 100 operating in a cloud-native environment implementing a Kubernetes-based container orchestration system. Workload containers 132 and key cache container 128 are Kubernetes pods on host computing system 108 (a Kubernetes node). Key cache container 128 and workload containers 132 are not required to be part of the same pod; key cache container 128 may support multiple workload containers operating in multiple Kubernetes pods. An application 144 in one of the workload containers 132 utilizes the golang cryptography library to perform cryptographic operations, and the secure enclave 148 implements cryptographic operations from the PKCS#11 cryptography library. The application makes a golang cryptography library call to its associated adapter 152, the golang cryptography library call including the key handle for the key to be used in the cryptographic operation, information identifying that the cryptographic operation to be performed is an encryption operation, and the input data to be encrypted. The adapter 152 converts the golang cryptography library call (the application cryptographic operation request) into a PKCS#11 cryptography library call (the workload container cryptographic operation request), the PKCS#11 cryptography library call including the key handle, information identifying that the PKCS#11 cryptographic operation to be performed is an encryption operation, and the input data.
After the channel between the requester workload container 132 and the key cache container 128 is mutually authenticated, the PKCS#11 cryptography library call is serialized by the requester workload container 132 using gRPC and sent to the key cache container 128. After receiving the PKCS#11 cryptography library call from the requester workload container 132, the key cache container 128 deserializes the workload container cryptographic operation request. The PKCS#11 library call is handled by the key cache container 128, where the input data is encrypted by the requested PKCS#11 encryption operation. Encryption is performed in the secure enclave 148 by the cryptography engine 164 using the stored PKCS#11 key 150 associated with the key handle provided in the workload container cryptographic operation request. The output data is passed back to the requester workload container 132 in serialized and encrypted form and provided to the application 144 via the cryptography call adapter 152 in the form of a response to the golang cryptography library call. The secure key provisioning and hardware-assisted secure key storage and secure cryptographic function operation disclosed herein may have at least the following advantages. First, unlike HSMs, software-based key cache containers can be deployed remotely, and their deployment can be easily scaled; physical access to a remote host computing system is not required to install or attach an HSM to enable secure key storage or confidential cryptographic function operation. Second, since a single key cache container can support multiple workload containers, the development and operational overhead of deploying workload containers that each have their own secure enclave is avoided. Third, the key cache container avoids the cost of purchasing a physical HSM and installing the physical HSM in the host computing system. FIG. 2 is a flowchart of a first example method of requesting a cryptographic key.
The method 200 may be performed by a workload container operating on a server. At 210, a workload container cryptographic operation request is sent from the workload container to a key cache container, the workload container cryptographic operation request including a request for a cryptographic key. At 220, a key handle associated with the cryptographic key is received at the workload container from the key cache container. FIG. 3 is a flowchart of a second example method of requesting a cryptographic key. The method 300 may be performed by a key cache container operating on a server. At 310, a workload container cryptographic operation request from a workload container is received at the key cache container, the workload container cryptographic operation request including a request for a cryptographic key. At 320, a key handle associated with the cryptographic key is sent from the key cache container to the workload container. FIG. 4 is a flowchart of a third example method of requesting a cryptographic key. Method 400 may be performed by a key brokerage service. At 410, a request for a cryptographic key is received from a key cache container. At 420, the cryptographic key is requested from a key management service. At 430, the cryptographic key is received from the key management service. At 440, the cryptographic key is encrypted. At 450, the encrypted cryptographic key is sent to the key cache container. FIG. 5 is a flowchart of an example container deployment method. Method 500 may be performed by a container orchestrator. At 510, a plurality of workload containers are deployed to one or more computing systems. Each workload container includes an application and a cryptography call adapter to convert application cryptographic operation requests made by the application into workload container cryptographic operation requests to be sent to a key cache container.
At 520, a key cache container is deployed to one of the one or more computing systems. The key cache container is to: provision a cryptographic key in response to receiving a first workload container cryptographic operation request that includes a request for the cryptographic key; store the cryptographic key in a secure enclave; and perform a cryptographic operation on input data using the cryptographic key in response to receiving a second workload container cryptographic operation request, the second workload container cryptographic operation request including a key handle associated with the cryptographic key, the input data, and information indicating the cryptographic operation to be performed on the input data by the key cache container, the cryptographic operation to utilize the cryptographic key and to be performed in the secure enclave. The technologies described herein can be performed by or implemented in any of a variety of computing systems, including mobile computing systems (eg, smartphones, handheld computers, tablet computers, laptop computers, portable game consoles, 2-in-1 convertible computers, portable all-in-one computers), non-mobile computing systems (eg, desktop computers, servers, workstations, stationary game consoles, set-top boxes, smart televisions, rack-mounted computing solutions (eg, blade, tray, or sled computing systems)), and embedded computing systems (eg, computing systems that are part of a vehicle, smart home appliance, consumer electronics product or equipment, or manufacturing equipment). As used herein, the term "computing system" includes computing devices and includes systems comprising multiple discrete physical components.
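The request/response flow of the methods above, in which a workload container sends a serialized cryptographic operation request and the key cache container deserializes it, performs the operation with the stored key, and returns the output, can be sketched end to end. JSON stands in for gRPC serialization and HMAC-SHA256 stands in for the PKCS#11 operation; all names and the request format here are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Handle -> key, as if stored among the keys 150 in the secure enclave.
ENCLAVE_KEYS = {1: b"\x01" * 32}

def workload_request(handle: int, op: str, data: bytes) -> bytes:
    """Serialize a workload container cryptographic operation request."""
    return json.dumps({"key_handle": handle, "op": op, "input": data.hex()}).encode()

def key_cache_handle(raw: bytes) -> bytes:
    """Deserialize the request and perform the operation with the stored key."""
    req = json.loads(raw)
    key = ENCLAVE_KEYS[req["key_handle"]]
    if req["op"] != "sign":
        raise ValueError("only a signing operation is modeled in this sketch")
    # The operation itself runs with a key that never leaves the "enclave".
    return hmac.new(key, bytes.fromhex(req["input"]), hashlib.sha256).digest()
```

The workload container only ever handles the key handle and the output data; the key bytes stay on the key cache container's side, which is the point of the design.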
In some embodiments, the computing system is located in a data center, such as an enterprise data center (eg, a data center owned and operated by a company, typically located on company premises), a managed services data center (eg, a data center managed by a third party on behalf of a company), a colocated data center (eg, a data center in which the data center infrastructure is provided by the data center host and a company provides and manages its own data center components (servers, etc.)), a cloud data center (eg, a data center operated by a cloud services provider that hosts companies' applications and data), or an edge data center (eg, a data center that typically has a smaller footprint than other data center types and is located closer to the geographic area that it serves). FIG. 6 is a block diagram of an example computing system in which the technologies described herein may be implemented. Generally, the components shown in FIG. 6 can communicate with the other shown components, although not all connections are shown, for ease of illustration. Computing system 600 is a multiprocessor system comprising a first processor unit 602 and a second processor unit 604 coupled via a point-to-point (P-P) interconnect. The point-to-point (P-P) interface 606 of the processor unit 602 is coupled to the point-to-point interface 607 of the processor unit 604 via the point-to-point interconnect 605. It is to be understood that any or all of the point-to-point interconnects illustrated in FIG. 6 can alternatively be implemented as multi-drop buses, and that any or all buses illustrated in FIG. 6 could be replaced by point-to-point interconnects. Processor units 602 and 604 comprise multiple processor cores. Processor unit 602 comprises processor cores 608 and processor unit 604 comprises processor cores 610. Processor cores 608 and 610 can execute computer-executable instructions in a manner similar to that discussed below in connection with FIG.
7 or otherwise. Processor units 602 and 604 further comprise cache memories 612 and 614, respectively. Cache memories 612 and 614 can store data (eg, instructions) utilized by one or more components of processor units 602 and 604, such as processor cores 608 and 610. Cache memories 612 and 614 can be part of the memory hierarchy of computing system 600. For example, cache memory 612 can locally store data that is also stored in memory 616 to allow for faster access to the data by processor unit 602. In some embodiments, cache memories 612 and 614 can comprise multiple cache levels, such as level 1 (L1), level 2 (L2), level 3 (L3), level 4 (L4), and/or other caches or cache levels, such as a last-level cache (LLC). Some of these cache memories (eg, L2, L3, L4, LLC) can be shared among multiple cores in a processor unit. One or more of the higher levels of cache in the memory hierarchy (the smaller and faster caches) can be located on the same integrated circuit die as a processor core, and one or more of the lower cache levels (the larger and slower caches) can be located on an integrated circuit die that is physically separate from the processor core integrated circuit dies. Although computing system 600 is shown with two processor units, computing system 600 can comprise any number of processor units. Further, a processor unit can comprise any number of processor cores. A processor unit can take various forms, such as a central processing unit (CPU), a graphics processing unit (GPU), a general-purpose GPU (GPGPU), an accelerated processing unit (APU), a field-programmable gate array (FPGA), a neural network processing unit (NPU), a data processing unit (DPU), an accelerator (eg, graphics accelerator, digital signal processor (DSP), compression accelerator, artificial intelligence (AI) accelerator), a controller, or another type of processing unit. As such, a processor unit can be referred to as an XPU (or xPU).
Further, a processor unit can comprise one or more of these various types of processing units. In some embodiments, the computing system comprises one processor unit with multiple cores, and in other embodiments the computing system comprises a single processor unit with a single core. As used herein, the terms "processor unit" and "processing unit" can refer to any processor, processor core, component, module, engine, circuitry, or any other processing element described or referenced herein. In some embodiments, computing system 600 can comprise one or more processor units that are heterogeneous or asymmetric with respect to another processor unit in the computing system. There can be a variety of differences between the processing units in a system in terms of a spectrum of metrics, including architectural, microarchitectural, thermal, and power consumption characteristics. These differences can effectively manifest themselves as asymmetry and heterogeneity among the processor units in the system. Processor units 602 and 604 can be located in a single integrated circuit package, such as a multi-chip package (MCP) or multi-chip module (MCM), or they can be located in separate integrated circuit packages. An integrated circuit component comprising one or more processor units can comprise additional components, such as embedded DRAM, stacked high-bandwidth memory (HBM), shared cache memories (eg, L3, L4, LLC), input/output (I/O) controllers, or memory controllers. Any of the additional components can be located on the same integrated circuit die as a processor unit, or on one or more integrated circuit dies separate from the integrated circuit dies comprising the processor units. In some embodiments, these separate integrated circuit dies can be referred to as "chiplets."
In some embodiments in which heterogeneity or asymmetry exists between processor units in a computing system, the heterogeneity or asymmetry may exist between processor units located in the same integrated circuit package. In embodiments in which the integrated circuit package includes a plurality of integrated circuit dies, the interconnections between the dies may be provided by the package substrate, one or more silicon interposers, one or more silicon bridges embedded in the package substrate (such as embedded multi-die interconnect bridges (EMIBs)), or combinations thereof.

Processor units 602 and 604 further include memory controller logic (MC) 620 and 622. As shown in FIG. 6, MCs 620 and 622 control memory 616 coupled to processor unit 602 and memory 618 coupled to processor unit 604, respectively. Memories 616 and 618 may include various types of volatile memory (e.g., dynamic random-access memory (DRAM), static random-access memory (SRAM)) and/or non-volatile memory (e.g., flash memory, chalcogenide-based phase-change non-volatile memory), and may comprise one or more layers of the memory hierarchy of the computing system. Although MCs 620 and 622 are illustrated as being integrated into processor units 602 and 604, in alternative embodiments, the MCs may be external to the processor units.

Processor units 602 and 604 are coupled to input/output (I/O) subsystem 630 via point-to-point interconnects 632 and 634. The point-to-point interconnect 632 connects the point-to-point interface 636 of the processor unit 602 with the point-to-point interface 638 of the I/O subsystem 630, and the point-to-point interconnect 634 connects the point-to-point interface 640 of the processor unit 604 with the point-to-point interface 642 of the I/O subsystem 630. Input/output subsystem 630 further includes interface 650 for coupling I/O subsystem 630 to graphics engine 652.
I/O subsystem 630 and graphics engine 652 are coupled together via bus 654.

Input/output subsystem 630 is further coupled to a first bus 660 via interface 662. The first bus 660 may be a Peripheral Component Interconnect Express (PCIe) bus or any other type of bus. A bus bridge 670 may couple the first bus 660 to a second bus 680. Various I/O devices 664 may be coupled to the first bus 660. In some embodiments, the second bus 680 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 680 including, for example, a keyboard/mouse 682, audio I/O devices 688, and storage devices 690 (such as hard disk drives, solid-state drives, or other storage devices). Code 692 may include computer-executable instructions for performing the methods described herein. Additional components that may be coupled to the second bus 680 include communication device(s) 684, which may provide computing system 600 with communication over one or more wired or wireless networks 686 (e.g., Wi-Fi, cellular, or satellite networks) using one or more communication standards (e.g., the IEEE 802.11 standard and its supplements) via one or more wired or wireless communication links (e.g., wire, cable, Ethernet connection, radio frequency (RF) channel, infrared channel, Wi-Fi channel).

In embodiments in which communication device 684 supports wireless communication, communication device 684 may comprise wireless communication components coupled to one or more antennas to support communication between computing system 600 and external devices. The wireless communication components can support various wireless communication protocols and technologies, such as Near Field Communication (NFC), IEEE 802.11 (Wi-Fi) variants, WiMax, Bluetooth, Zigbee, 4G Long Term Evolution (LTE), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS), Global System for Mobile Communications (GSM), and 5G broadband cellular technologies.
Additionally, a wireless modem may support communication with one or more cellular networks for data and voice communications within a single cellular network, between cellular networks, or between the computing system and a public switched telephone network (PSTN).

System 600 may include removable memory, such as flash memory cards (e.g., SD (Secure Digital) cards), memory sticks, or Subscriber Identity Module (SIM) cards. The memory in system 600 (including caches 612 and 614, memories 616 and 618, and storage device 690) may store data and/or computer-executable instructions for executing an operating system 694 and applications 696 (which may operate inside containers). Example data includes web pages, text messages, images, sound files, video data, and container images sent by system 600 via one or more wired or wireless networks 686 to one or more network servers or other devices, received by system 600 from one or more network servers or other devices, or otherwise used by system 600. System 600 may also have access to external memory or storage (not shown), such as an external hard drive or cloud-based storage.

The operating system 694 may control the allocation and use of the components illustrated in FIG. 6 and support one or more applications 696 or containers. Applications 696 may include common computing system applications (e.g., email applications, calendars, contact managers, web browsers, messaging applications) as well as other computing applications.

Computing system 600 may support various additional input devices (such as touchscreens, microphones, monoscopic cameras, stereoscopic cameras, trackballs, touchpads, proximity sensors, light sensors, electrocardiogram (ECG) sensors, PPG (photoplethysmography) sensors, and galvanic skin response sensors) and one or more output devices (such as one or more speakers or displays). Other possible input and output devices include piezoelectric and other haptic I/O devices.
Any of these input devices or output devices may be internal to system 600, external to system 600, or removably attachable to system 600. External input and output devices may communicate with system 600 via wired or wireless connections.

Additionally, computing system 600 may provide one or more natural user interfaces (NUIs). For example, operating system 694 or applications 696 may include speech recognition logic as part of a voice user interface that allows a user to operate computing system 600 through voice commands. Further, computing system 600 may include input devices and logic that allow a user to interact with computing system 600 via body, hand, or facial gestures.

System 600 may further include at least one input/output port comprising physical connectors (e.g., USB, IEEE 1394 (FireWire), Ethernet, RS-232); a power supply (e.g., a battery); a Global Navigation Satellite System (GNSS) receiver; a gyroscope; an accelerometer; and/or a compass. A GNSS receiver can be coupled to a GNSS antenna. Computing system 600 may further include one or more additional antennas coupled to one or more additional receivers, transmitters, and/or transceivers to enable additional functionality.

It should be appreciated that FIG. 6 illustrates only one example computing system architecture. The techniques described herein may be implemented using computing systems based on alternative architectures. For example, instead of processor units 602 and 604 and graphics engine 652 being located on separate integrated circuits, a computing system may include an SoC (system-on-chip) integrated circuit that incorporates multiple processors, a graphics engine, and additional components. Further, a computing system may connect its constituent components via a different bus or point-to-point configuration than that shown in FIG. 6. Moreover, the illustrated components in FIG.
6 are not required or all-inclusive, as in alternative embodiments the illustrated components may be removed and other components may be added.

FIG. 7 is a block diagram of an example processor unit 700 for executing computer-executable instructions as part of implementing the techniques described herein. Processor unit 700 may be a single-threaded core or a multithreaded core, in that it may include more than one hardware thread context (or "logical processor") per core.

FIG. 7 also illustrates a memory 710 coupled to the processor unit 700. Memory 710 may be any memory described herein or any other memory known to those skilled in the art. Memory 710 may store computer-executable instructions 715 (code) executable by the processor unit 700.

The processor unit includes front-end logic 720 that receives instructions from memory 710. Instructions may be processed by one or more decoders 730. Decoder 730 may generate as its output micro-operations, such as fixed-width micro-operations in a predefined format, or may generate other instructions, micro-instructions, or control signals that reflect the original code instructions. Front-end logic 720 further includes register renaming logic 735 and scheduling logic 740, which generally allocate resources and queue operations corresponding to the converted instructions for execution.

Processor unit 700 further includes execution logic 750, which includes one or more execution units (EUs) 765-1 through 765-N. Some processor unit embodiments may include a number of execution units dedicated to a particular function or set of functions. Other embodiments may include only one execution unit, or one execution unit that can perform a particular function. Execution logic 750 performs the operations specified by the code instructions. After completing execution of the operations specified by the code instructions, back-end logic 770 retires the instructions using retirement logic 775.
In some embodiments, processor unit 700 allows out-of-order execution but requires in-order retirement of instructions. Retirement logic 775 may take a variety of forms as known to those skilled in the art (e.g., reorder buffers and the like).

Processor unit 700 is transformed during execution of instructions, at least in terms of the output generated by the decoder 730, the hardware registers and tables utilized by the register renaming logic 735, and any registers (not shown) modified by the execution logic 750.

As used herein, the term "module" refers to logic that may be implemented in a hardware component or device, software or firmware running on a processor unit, or a combination thereof, to perform one or more operations consistent with the present disclosure. Software and firmware may be embodied as instructions and/or data stored on non-transitory computer-readable storage media. As used herein, the term "circuitry" can comprise, singly or in any combination, non-programmable (hard-wired) circuitry, programmable circuitry (such as processor units), state machine circuitry, and/or firmware that stores instructions executable by programmable circuitry. Modules described herein may, collectively or individually, be embodied as circuitry that forms part of a computing system. Thus, any of the modules can be implemented as circuitry. A computing system referred to as being programmed to perform a method can be programmed to perform the method via software, hardware, firmware, or combinations thereof.

Any of the disclosed methods (or a portion thereof) can be implemented as computer-executable instructions or a computer program product. Such instructions can cause a computing system or one or more processor units capable of executing computer-executable instructions to perform any of the disclosed methods. As used herein, the term "computer" refers to any computing system or device described or referred to herein.
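For illustration only, the combination of out-of-order execution with in-order retirement noted above can be sketched as a simplified reorder buffer. The names below are hypothetical and the sketch is not the disclosed retirement logic 775:

```python
# Minimal sketch of in-order retirement with out-of-order completion:
# instructions enter in program order, may complete in any order, but
# retire strictly in program order. Illustrative only.
from collections import deque

class ReorderBuffer:
    def __init__(self):
        self.rob = deque()  # entries kept in program order

    def issue(self, tag):
        entry = {"tag": tag, "done": False}
        self.rob.append(entry)
        return entry

    def complete(self, entry):
        entry["done"] = True  # completion may occur out of program order

    def retire(self):
        """Retire completed instructions strictly from the head of the buffer."""
        retired = []
        while self.rob and self.rob[0]["done"]:
            retired.append(self.rob.popleft()["tag"])
        return retired

rob = ReorderBuffer()
a, b, c = rob.issue("a"), rob.issue("b"), rob.issue("c")
rob.complete(c)            # "c" finishes first...
assert rob.retire() == []  # ...but cannot retire ahead of "a" and "b"
rob.complete(a)
rob.complete(b)
assert rob.retire() == ["a", "b", "c"]
```

The head-of-queue check is what enforces the in-order retirement requirement even when execution units finish work out of order.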
Thus, the term "computer-executable instructions" refers to instructions executable by any computing system or device described or referred to herein.

The computer-executable instructions or computer program products, as well as any data created and/or used during implementation of the disclosed technologies, can be stored on one or more tangible or non-transitory computer-readable storage media, such as volatile memory (e.g., DRAM, SRAM), non-volatile memory (e.g., flash memory, chalcogenide-based phase-change non-volatile memory), optical media discs (e.g., DVDs, CDs), and magnetic storage (e.g., magnetic tape storage, hard disk drives). Computer-readable storage media can be contained in computer-readable storage devices such as solid-state drives, USB flash drives, and memory modules. Alternatively, any of the methods disclosed herein (or a portion thereof) may be performed by hardware components comprising non-programmable circuitry. In some embodiments, any of the methods herein may be performed by a combination of non-programmable hardware components and one or more processing units executing computer-executable instructions stored on computer-readable storage media.

The computer-executable instructions can be, for example, part of the computing system's operating system, an application stored locally on the computing system, or a remote application accessible to the computing system (e.g., via a web browser). Any of the methods described herein can be performed by computer-executable instructions executed by a single computing system or by one or more networked computing systems operating in a network environment. Computer-executable instructions and updates to the computer-executable instructions can be downloaded to a computing system from a remote server.

Further, it is to be understood that implementation of the disclosed technologies is not limited to any specific computer language or program.
For example, the disclosed technologies can be implemented by software written in C++, C#, Java, Perl, Python, JavaScript, Adobe Flash, assembly language, or any other programming language. Likewise, the disclosed technologies are not limited to any particular computer system or type of hardware.

Furthermore, any of the software-based embodiments (comprising, for example, computer-executable instructions for causing a computer to perform any of the disclosed methods) can be uploaded, downloaded, or remotely accessed through a suitable communication means. Such suitable communication means include, for example, the Internet, the World Wide Web, an intranet, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, ultrasonic, and infrared communications), electronic communications, or other such communication means.

As used in this application and the claims, a list of items joined by the term "and/or" can mean any combination of the listed items. For example, the phrase "A, B and/or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C. As used in this application and the claims, a list of items joined by the term "at least one of" can mean any combination of the listed items. For example, the phrase "at least one of A, B, or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C. Further, as used in this application and the claims, a list of items joined by the term "one or more of" can mean any combination of the listed items. For example, the phrase "one or more of A, B, and C" can mean A; B; C; A and B; A and C; B and C; or A, B and C.

The disclosed methods, apparatuses, and systems are not to be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and non-obvious features and aspects of the various disclosed embodiments, alone and in various combinations and subcombinations with one another.
The disclosed methods, apparatuses, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed embodiments require that any one or more specific advantages be present or problems be solved.

Theories of operation, scientific principles, or other theoretical descriptions presented herein in reference to the apparatuses or methods of this disclosure are provided for the purposes of better understanding and are not intended to be limiting in scope. The apparatuses and methods in the appended claims are not limited to those apparatuses and methods that function in the manner described by such theories of operation.

Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it is to be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth herein. For example, operations described sequentially may in some cases be rearranged or performed concurrently.
Also, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods can be used in conjunction with other methods.

The following examples pertain to additional embodiments of the technologies disclosed herein.

Example 1 is a method comprising: sending a workload container cryptographic operation request from a workload container to a key cache container, the workload container cryptographic operation request comprising a request for a cryptographic key; and receiving, at the workload container, a key handle associated with the cryptographic key from the key cache container.

Example 2 includes the method of Example 1, further comprising: generating, by an application operating within the workload container, the workload container cryptographic operation request; and providing the key handle to the application.

Example 3 includes the method of Example 1, further comprising: generating, by an application operating within the workload container, an application cryptographic operation request; converting the application cryptographic operation request into the workload container cryptographic operation request; and providing the key handle to the application.

Example 4 includes the method of any one of Examples 1-3, wherein the workload container cryptographic operation request is a first workload container cryptographic operation request, the method further comprising: sending a second workload container cryptographic operation request from the workload container to the key cache container, the second workload container cryptographic operation request comprising the key handle, information indicating a cryptographic operation to be performed by the key cache container, and input data; and receiving, at the workload container from the key cache container, output data generated by the key cache container performing the cryptographic operation on the input data.

Example 5 includes the method of Example 4, further comprising: generating, by an application operating in the workload container, an application cryptographic operation request, the application cryptographic operation request comprising the key handle, information indicating the cryptographic operation to be performed by the key cache container, and the input data; and converting the application cryptographic operation request into the second workload container cryptographic operation request.

Example 6 includes the method of Example 4, wherein the cryptographic key is a first cryptographic key, the workload container is a first workload container, the key handle is a first key handle, and the workload container cryptographic operation request is a first workload container cryptographic operation request, the method further comprising: sending a second workload container cryptographic operation request from a second workload container to the key cache container, the second workload container cryptographic operation request comprising a request for a second cryptographic key; and receiving, at the second workload container, a second key handle associated with the second cryptographic key from the key cache container.

Example 7 includes the method of Example 6, wherein a first computing system hosts the first workload container and the key cache container, and a second computing system hosts the second workload container.

Example 8 includes the method of Example 6, wherein the first workload container, the second workload container, and the key cache
container are hosted on a single computing system.

Example 9 includes the method of any one of Examples 1-8, wherein the request for the cryptographic key comprises a request to generate the cryptographic key.

Example 10 includes the method of any one of Examples 1-8, wherein the request for the cryptographic key comprises a request to load the cryptographic key.

Example 11 includes the method of any one of Examples 1-10, further comprising the workload container authenticating the key cache container before sending the workload container cryptographic operation request to the key cache container.

Example 12 includes the method of any one of Examples 1-11, wherein a container runtime engine provides an interface between the workload container and an operating system or hypervisor.

Example 13 is a method comprising: receiving, at a key cache container, a workload container cryptographic operation request from a workload container, the workload container cryptographic operation request comprising a request for a cryptographic key; and sending a key handle associated with the cryptographic key from the key cache container to the workload container.

Example 14 includes the method of Example 13, further comprising: requesting the cryptographic key from a key brokerage service; receiving the cryptographic key from the key brokerage service; and storing the cryptographic key in encrypted form in a secure enclave.

Example 15 includes the method of Example 14, wherein the cryptographic key is stored in encrypted form in the secure enclave using security-related instructions of a processing unit of a computing system hosting the key cache container.

Example 16 includes the method of any one of Examples 14-15, further comprising: generating an enclave public-private key pair within the secure enclave, the enclave public-private key pair comprising an enclave public key and an enclave private key; and providing a remotely verifiable signed claim to the key brokerage service for verification, the remotely
verifiable signed claim comprising a hash of the enclave public key.

Example 17 includes the method of Example 16, wherein the remotely verifiable signed claim further comprises one or more secure enclave attributes.

Example 18 includes the method of Example 14, further comprising: generating an enclave public-private key pair within the secure enclave, the enclave public-private key pair comprising an enclave public key and an enclave private key; generating a hash of the enclave public key; and providing a remotely verifiable signed claim and the hash of the enclave public key to the key brokerage service for verification, the remotely verifiable signed claim comprising neither the enclave public key nor the hash of the enclave public key.

Example 19 includes the method of any one of Examples 13-18, further comprising: receiving, at the key cache container, a workload container cryptographic operation request from a workload container, the workload container cryptographic operation request comprising a key handle, information indicating a cryptographic operation to be performed by the key cache container, and input data; performing the cryptographic operation on the input data within the secure enclave to generate output data, the cryptographic operation utilizing the cryptographic key associated with the key handle and stored in the secure enclave; and sending the output data from the key cache container to the workload container.

Example 20 includes the method of Example 19, wherein the cryptographic key is a first cryptographic key, the workload container is a first workload container, the key handle is a first key handle, the input data is first input data, the output data is first output data, and the workload container cryptographic operation request is a first workload container cryptographic operation request, the method further comprising: receiving, at the key cache container from a second workload
container, a second key handle, information indicating a second cryptographic operation to be performed by the key cache container, and second input data; performing the second cryptographic operation on the second input data within the secure enclave to generate second output data, the second cryptographic operation utilizing a second cryptographic key associated with the second key handle and stored in the secure enclave; and sending the second output data from the key cache container to the second workload container.

Example 21 includes the method of Example 20, wherein the second workload container and the key cache container are hosted by a computing system.

Example 22 includes the method of Example 20, wherein the key cache container is hosted on a first computing system and the second workload container is hosted on a second computing system.

Example 23 includes the method of any one of Examples 19-22, wherein performing the cryptographic operation utilizes security-related instructions of a processing unit of a computing system that hosts the key cache container.

Example 24 includes the method of any one of Examples 13-23, wherein a container runtime engine provides an interface between the key cache container and an operating system or hypervisor.

Example 25 is a method comprising: receiving a request for a cryptographic key from a key cache container; requesting the cryptographic key from a key management service; receiving the cryptographic key from the key management service; encrypting the cryptographic key to generate an encrypted cryptographic key; and sending the encrypted cryptographic key to the key cache container.

Example 26 includes the method of Example 25, further comprising: receiving an enclave public key of an enclave public-private key pair from the key cache container; encrypting a symmetric encryption key with the enclave public key to generate an encrypted symmetric encryption key; and sending the encrypted symmetric encryption key to the key
cache container; wherein encrypting the cryptographic key comprises encrypting the cryptographic key with the symmetric encryption key.

Example 27 includes the method of Example 25, further comprising: receiving a signed claim from the key cache container; performing verification of the signed claim; and sending the encrypted cryptographic key to the key cache container if the verification succeeds.

Example 28 is a method comprising: deploying a plurality of workload containers to one or more computing systems, each of the workload containers comprising an application and a cryptographic call adapter for converting application cryptographic operation requests made by the application into workload container cryptographic operation requests to be sent to a key cache container; and deploying the key cache container to one of the one or more computing systems, the key cache container to: provision a cryptographic key in response to receiving a first workload container cryptographic operation request comprising a request for the cryptographic key; store the cryptographic key in a secure enclave; and perform a cryptographic operation on input data using the cryptographic key in response to receiving a second workload container cryptographic operation request, the second workload container cryptographic operation request comprising a key handle associated with the cryptographic key, the input data, and information indicating the cryptographic operation to be performed by the key cache container on the input data, the cryptographic operation to utilize the cryptographic key and to be performed in the secure enclave.

Example 29 includes the method of Example 28, wherein the plurality of workload containers and the key cache container are deployed to a computing system.

Example 30 includes the method of Example 28, wherein a first workload container of the workload containers is deployed to a first computing
system, and at least a second workload container of the workload containers and the key cache container are deployed to a second computing system.

Example 31 includes one or more non-transitory computer-readable storage media storing computer-executable instructions that, when executed, cause a computing system to perform a method comprising any one of the methods described in Examples 1-30.

Example 32 is an apparatus comprising one or more means for performing any one of the methods described in Examples 1-30.

Example 33 is a computing system comprising: one or more processing units; and one or more computer-readable storage media storing instructions that, when executed, cause the one or more processing units to perform any one of the methods described in Examples 1-30.
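For illustration only, the two request types recited in Examples 1 and 4 — a provisioning request answered with a key handle, and a later operation request that carries the handle and input data so the key itself never leaves the key cache container — can be sketched as follows. The names and the choice of HMAC-SHA256 as the cryptographic operation are illustrative assumptions, not the disclosed implementation:

```python
# Minimal sketch of the key cache container protocol from Examples 1 and 4.
# The workload container never receives key material, only a handle; the
# cryptographic operation is performed inside the key cache container
# (which, in the disclosure, would occur within a secure enclave).
import hashlib
import hmac
import os
import secrets

class KeyCacheContainer:
    def __init__(self):
        self._keys = {}  # handle -> key material (never returned to callers)

    def handle_request(self, request):
        if request["op"] == "generate_key":
            # First request type: provision a key, return only a handle.
            handle = secrets.token_hex(8)
            self._keys[handle] = os.urandom(32)
            return {"key_handle": handle}
        if request["op"] == "mac":
            # Second request type: handle + operation + input data in,
            # output data out; the key stays inside this container.
            key = self._keys[request["key_handle"]]
            tag = hmac.new(key, request["input"], hashlib.sha256).digest()
            return {"output": tag}
        raise ValueError("unknown operation")

kcc = KeyCacheContainer()
# Workload container cryptographic operation request (Example 1):
handle = kcc.handle_request({"op": "generate_key"})["key_handle"]
# Second workload container cryptographic operation request (Example 4):
out = kcc.handle_request({"op": "mac", "key_handle": handle,
                          "input": b"payload"})["output"]
assert len(out) == 32  # HMAC-SHA256 digest; the key never left the container
```

In a real deployment the dictionary of keys would be held encrypted in a secure enclave and the two containers would communicate over an inter-container channel; the sketch only shows the handle-based indirection.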
A method and system for providing an external locking mechanism for memory locations. The memory includes a first plurality of storage locations configured with BIOS data and a second plurality of storage locations. The second plurality of storage locations includes a first plurality of blocks readable only in SMM and a second plurality of blocks readable in SMM and at least one operating mode other than SMM. The computer system includes a bus, a memory coupled to the bus, and a device coupled to access the memory over the bus. The memory includes a plurality of storage locations, divided into a plurality of memory units. The device includes one or more locks configured to control access to one or more of the plurality of memory units.
CLAIMS 1. A computer system, comprising: a bus; a memory coupled to the bus, wherein the memory includes a plurality of storage locations, wherein the plurality of storage locations are divided into a plurality of memory units; and a device coupled to access the memory over the bus, characterized in that: the device includes one or more locks configured to control access to one or more of the plurality of memory units.2. The computer system of claim 1, wherein the memory is a ROM.3. The computer system of claim 2, wherein the ROM is a BIOS ROM.4. The computer system of claim 1, wherein the locks include a plurality of registers, wherein one or more entries in one or more of the plurality of registers indicate an access control setting for one or more of the memory units. 5. The computer system of claim 4, wherein at least one of the plurality of registers is configured to store three locking bits for one of the memory blocks, wherein the three locking bits include a read lock bit, a write lock bit, and a lock-down bit, wherein the read lock bit and the write lock bit are permanent until reset when the lock-down bit is set.6. The computer system of claim 4, wherein at least one of the plurality of registers is configured to store eight bits, wherein the eight bits include three locking bits for one of the memory blocks and another three locking bits for another one of the memory blocks, wherein the three locking bits include a first read lock bit, a first write lock bit, and a first lock-down bit, wherein when the first lock-down bit is set, the first read lock bit and the first write lock bit are permanent until reset, and wherein the another three locking bits include a second read lock bit, a second write lock bit, and a second lock-down bit, wherein when the second lock-down bit is set, the second read lock bit and the second write lock bit are permanent until reset.7. 
A memory, comprising: a first plurality of storage locations configured with BIOS data; and a second plurality of storage locations, wherein the second plurality of storage locations includes: a first plurality of blocks readable only in SMM; and a second plurality of blocks readable in SMM and in at least one operating mode other than SMM.8. The memory of claim 7, wherein the first plurality of blocks includes at least one of: a block with a write-once lock; a block with a never-erase lock; and a block that can be written in SMM and in at least one operating mode other than SMM.9. The memory of claim 7, wherein the second plurality of blocks includes at least one of: a block with a write-once lock; a block with a never-erase lock; and a block that can be written in SMM and in at least one operating mode other than SMM.10. A method for operating a computer system, the method comprising: requesting a memory transaction for one or more memory addresses; determining a lock status for the one or more memory addresses; returning the lock status for the one or more memory addresses; determining if the lock status for the one or more memory addresses can be changed if the lock status indicates that the memory transaction for the one or more memory addresses is not allowed; and changing the lock status of the one or more memory addresses to allow the memory transaction if the lock status of the one or more memory addresses can be changed.11. The method of claim 10, wherein determining a lock status includes reading a first lock bit; and wherein returning the lock status includes returning the value of the first lock bit.12. The method of claim 11, wherein determining if the lock status for the one or more memory addresses can be changed includes reading a second lock bit.13. The method of claim 12, wherein changing the lock status of the one or more memory addresses to allow the memory transaction includes changing the value of the first lock bit.14.
A method of operating a computer system, the method comprising: issuing a request from a first device for a memory transaction for a memory location; receiving the request for the memory transaction at a second device that does not include the memory location or a copy of the contents of the memory location; returning a response from the second device to the first device issuing the request for the memory transaction.15. The method of claim 14, wherein returning the response from the second device includes ending the memory transaction without the memory transaction reaching the memory location.16. The method of claim 14, further comprising: ending the request for the memory transaction without the memory location responding to the request for the memory transaction. 17. The method of claim 14, wherein the second device includes a bridge coupled between the first device and the memory location, wherein said returning the response from the second device to the first device issuing the request for the memory transaction includes returning the response from the bridge to the first device issuing the request for the memory transaction.18. The method of claim 17, wherein said returning the response from the bridge to the first device issuing the request for the memory transaction includes responding from an access filter within the bridge with a predetermined value upon receipt of the request for the memory transaction for the memory location, when the computer system is operating in a first operating mode.19. The method of claim 18, wherein said issuing the request from the first device for the memory transaction for the memory location includes issuing the request from the first device for the memory transaction for the memory location in a memory, a ROM, or a flash memory.20. 
The method of claim 14, wherein the first device includes security hardware, wherein said receiving the request for the memory transaction at the second device that does not include the memory location or the copy of the contents of the memory location includes receiving the request for the memory transaction at the security hardware within the first device; and wherein said returning the response from the second device to the first device issuing the request for the memory transaction includes returning the response from the security hardware to the first device issuing the request for the memory transaction.21. The method of claim 14, further comprising: reading a first value from a memory location within the second device before returning the response, wherein the memory location within the second device is different from the memory location for the memory transaction.
EXTERNAL LOCKING MECHANISM FOR PERSONAL COMPUTER MEMORY LOCATIONS

This Application is a continuation-in-part of co-pending U.S. Patent Application No. 09/852,372, entitled "Secure Execution Box and Method," filed on May 10, 2001, whose inventors are Dale E. Gulick and Geoffrey S. Strongin. This Application is also a continuation-in-part of co-pending U.S. Patent Application No. 09/852,942, entitled "Computer System Architecture for Enhanced Security and Manageability," filed on May 10, 2001, whose inventors are Geoffrey S. Strongin and Dale E. Gulick.

TECHNICAL FIELD

This invention relates generally to computing systems, and, more particularly, to an external locking mechanism for controlling access to memory locations, e.g. the ROM BIOS, in a personal computer system.

BACKGROUND ART

Fig. 1A illustrates an exemplary computer system 100. The computer system 100 includes a processor 102, a north bridge 104, memory 106, Advanced Graphics Port (AGP) memory 108, a Peripheral Component Interconnect (PCI) bus 110, a south bridge 112, a battery 113, an AT Attachment (ATA) interface 114 (more commonly known as an Integrated Drive Electronics (IDE) interface), a universal serial bus (USB) interface 116, a Low Pin Count (LPC) bus 118, an input/output controller chip (SuperI/O) 120, and BIOS memory 122. It is noted that the north bridge 104 and the south bridge 112 may include only a single chip or a plurality of chips, leading to the collective term "chipset." It is also noted that other buses, devices, and/or subsystems may be included in the computer system 100 as desired, e.g. caches, modems, parallel or serial interfaces, SCSI interfaces, network interface cards, etc. ["SuperI/O" is a trademark of National Semiconductor Corporation of Santa Clara, Calif.] The processor 102 is coupled to the north bridge 104. The north bridge 104 provides an interface between the processor 102, the memory 106, the AGP memory 108, and the PCI bus 110.
The south bridge 112 provides an interface between the PCI bus 110 and the peripherals, devices, and subsystems coupled to the IDE interface 114, the USB interface 116, and the LPC bus 118. The battery 113 is shown coupled to the south bridge 112. The SuperI/O chip 120 is coupled to the LPC bus 118. The north bridge 104 provides communications access between and/or among the processor 102, memory 106, the AGP memory 108, devices coupled to the PCI bus 110, and devices and subsystems coupled to the south bridge 112. Typically, removable peripheral devices are inserted into PCI "slots" (not shown) that connect to the PCI bus 110 to couple to the computer system 100. Alternatively, devices located on a motherboard may be directly connected to the PCI bus 110. The south bridge 112 provides an interface between the PCI bus 110 and various devices and subsystems, such as a modem, a printer, a keyboard, a mouse, etc., which are generally coupled to the computer system 100 through the LPC bus 118 (or its predecessors, such as an X-bus or an ISA bus). The south bridge 112 includes the logic used to interface the devices to the rest of the computer system 100 through the IDE interface 114, the USB interface 116, and the LPC bus 118. Fig. 1B illustrates certain aspects of the prior art south bridge 112, including those components that receive reserve power from the battery 113, i.e. those "inside the RTC battery well" 125. The south bridge 112 includes south bridge (SB) RAM 126 and a clock circuit 128, both inside the RTC battery well 125. The SB RAM 126 includes CMOS RAM 126A and RTC RAM 126B. The RTC RAM 126B includes clock data 129 and checksum data 127. The south bridge 112 also includes, outside the RTC battery well 125, a CPU interface 132, power and system management units 133, PCI bus interface logic 134A, USB interface logic 134C, IDE interface logic 134B, and LPC bus interface logic 134D.
Time and date data from the clock circuit 128 are stored as the clock data 129 in the RTC RAM 126B. The checksum data 127 in the RTC RAM 126B may be calculated based on the CMOS RAM 126A data and stored by the BIOS during the boot process, such as is described below, e.g. block 148, with respect to Fig. 2A. The CPU interface 132 may include interrupt signal controllers and processor signal controllers. The power and system management units 133 may include an ACPI (Advanced Configuration and Power Interface) controller. System Management Mode (SMM) is a mode of operation in the computer system that was implemented to conserve power. SMM was created for the fourth-generation x86 processors. As newer x86 generation processors have appeared, SMM has become relatively transparent to the operating system. That is, computer systems enter and leave SMM with little or no impact on the operating system. Referring now to the drawings, and in particular to Fig. 2A, a flowchart of a prior art method of initializing a computer system using code stored in the BIOS 122 is shown. During initialization of the power supply, the power supply generates a power good signal to the north bridge, in block 136. Upon receiving the power good signal from the power supply, the south bridge (or north bridge) stops asserting the reset signal for the processor, in block 138. During initialization, the processor reads the default jump location, in block 140. The default jump location in memory is usually at a location such as FFFF0h. The processor performs a jump to the appropriate BIOS code location (e.g. FFFF0h) in the ROM BIOS, copies the BIOS code to the RAM memory, and begins processing the BIOS code instructions from the RAM memory, in block 142. The BIOS code, processed by the processor, performs a power-on self test (POST), in block 144. The BIOS code next looks for additional BIOS code, such as from a video controller, IDE controller, SCSI controller, etc.,
and displays a start-up information screen, in block 146. As examples, the video controller BIOS is often found at C000h, while the IDE controller BIOS code is often found at C800h. The BIOS code may perform additional system tests, such as a RAM memory count-up test, and a system inventory, including identifying COM (serial) and LPT (parallel) ports, in block 148. The BIOS code also identifies plug-and-play devices and other similar devices and then displays a summary screen of devices identified, in block 150. The BIOS code identifies the boot location, and the corresponding boot sector, in block 152. The boot location may be on a floppy drive, a hard drive, a CDROM, a remote location, etc. The BIOS code next calls the boot sector code at the boot location to boot the computer system, such as with an operating system, in block 154. It is noted that for a cold boot or a hard (re)boot, all or most of the operations described in blocks 136-154 may occur. During a warm boot or a soft (re)boot, the BIOS code usually jumps from block 142 into block 148, skipping the POST, memory tests, etc. In Fig. 2B, a flowchart of a prior art method of operating a computer system in SMM using code stored in the BIOS 122 is shown. An interrupt controller receives a request for SMM, in block 172. The interrupt controller signals the request for SMM to the processor by asserting a system management interrupt (SMI#) signal, in block 174. The processor recognizes the request for SMM and asserts an SMI ACTive (SMIACT#) signal, in block 176. The system recognizes the SMIACT# signal, disables access to the system RAM, and enables access to system management RAM (SMRAM) space, in block 178. The current processor state is saved to SMRAM, in block 180. The processor resets to the SMM default state and enters SMM, in block 182. The processor next reads the default pointer and jumps to the appropriate place in SMRAM space, in block 184.
In block 186, the source and/or nature of the SMI request is identified. An SMI handler services the SMI request, in block 188. After servicing the SMI request, the SMI handler issues a return from SMM (RSM) instruction to the processor, in block 190. Upon executing the RSM instruction, the processor restores the saved state information and continues normal operation, in block 192. From a hardware point of view, an x86 operating environment provides little for protecting user privacy, providing security for corporate secrets and assets, or protecting the ownership rights of content providers. All of these goals, privacy, security, and ownership (collectively, PSO), are becoming critical in an age of Internet-connected computers. The original personal computers were not designed in anticipation of PSO needs. From a software point of view, the x86 operating environment is equally poor for PSO. The ease of direct access to the hardware through software, or simply by opening the cover of the personal computer, allows an intruder or thief to compromise most security software and devices. The personal computer's exemplary ease of use only adds to the problems for PSO.

DISCLOSURE OF INVENTION

In one aspect of the present invention, a computer system is provided. The computer system includes a bus, a memory coupled to the bus, and a device coupled to access the memory over the bus. The memory includes a plurality of storage locations, divided into a plurality of memory units. The device includes one or more locks configured to control access to one or more of the plurality of memory units. In various embodiments, the locks may include a plurality of registers. One or more entries in one or more of the plurality of registers may indicate an access control setting for one or more of the memory units. In another aspect of the present invention, a memory is provided.
The memory includes a first plurality of storage locations configured with BIOS data, and a second plurality of storage locations. The second plurality of storage locations includes a first plurality of blocks readable only in SMM and a second plurality of blocks readable in SMM and at least one operating mode other than SMM. In still another aspect of the present invention, a method for operating a computer system is provided. The method includes requesting a memory transaction for one or more memory addresses and determining a lock status for the one or more memory addresses. The method also includes returning the lock status for the one or more memory addresses and determining if the lock status for the one or more memory addresses can be changed if the lock status indicates that the memory transaction for the one or more memory addresses is not allowed. The method also includes changing the lock status of the one or more memory addresses to allow the memory transaction if the lock status of the one or more memory addresses can be changed. In still another aspect of the present invention, another method of operating a computer system is provided. This method includes issuing a request from a first device for a memory transaction for a memory location and receiving the request for the memory transaction at a second device that does not include the memory location or a copy of the contents of the memory location. This method also includes returning a response from the second device to the first device issuing the request for the memory transaction.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify similar elements, and in which: Fig. 1A illustrates a block diagram of a prior art computer system, while Fig. 1B illustrates a block diagram of a prior art south bridge; Figs.
2A and 2B illustrate flowcharts of prior art methods for operating a computer system using code stored in ROM; Fig. 3 illustrates a flowchart of an embodiment of data and command flow in a computer system having a secure execution box, according to one aspect of the present invention; Fig. 4 illustrates a block diagram of an embodiment of a computer system including security hardware in the south bridge as well as a crypto-processor, according to one aspect of the present invention; Figs. 5A and 5B illustrate block diagrams of embodiments of a south bridge including security hardware for controlling SMM, according to various aspects of the present invention; Fig. 6 illustrates a block diagram of an embodiment of a south bridge including security hardware for secure SMM operations, according to one aspect of the present invention; Figs. 7A and 7B illustrate embodiments of secure memory, according to various aspects of the present invention; Figs. 8A and 8B illustrate block diagrams of embodiments of a BIOS ROM and an SMM ROM for secure SMM operations, respectively, according to various aspects of the present invention; Figs. 9A, 9B, 9C, 9D, 9E, 9F, and 9G illustrate flowcharts of embodiments of methods for accessing the security hardware, which may be locked, according to various aspects of the present invention; Figs. 10A, 10B, and 10C illustrate block diagrams of embodiments of the access locks 460 shown in Fig. 6, while Fig. 10D illustrates a block diagram of an embodiment of the override register, all according to various aspects of the present invention; and Figs. 11A, 11B, 12, and 13 illustrate flowcharts of embodiments of methods for secure access to storage, according to various aspects of the present invention. While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail.
It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the invention as defined by the appended claims.

MODE(S) FOR CARRYING OUT THE INVENTION

Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will, of course, be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. The use of a letter in association with a reference number is intended to show alternative embodiments or examples of the item to which the reference number is connected. Fig. 3 illustrates a flowchart of an embodiment of data and command flow in a computer system having a secure execution box 260, according to one aspect of the present invention. User input and output (I/O) data and/or commands 205 are provided to and received from one or more applications 210. The applications 210 exchange data and commands with cryptography service providers 215 within the computer system, such as the computer system 100 or any other computer system. The cryptography service providers 215 may use API (Application Programming Interface) calls 220 to interact with drivers 225 that provide access to hardware 230.
According to one aspect of the present invention, the drivers 225 and the hardware 230 are part of a secure execution box configured to operate in a secure execution mode (SEM) 260. Trusted privacy, security, and ownership (PSO) operations, also referred to simply as security operations, may take place while the computer system is in SEM 260. Software calls propagated from the user I/O 205 and/or the applications 210 may be placed into the secure execution box in SEM 260 via an SMM initiation register 425B (or SMM initiator 425A) discussed below with respect to Fig. 5B (or Fig. 5A). Parameters may be passed into and out of the secure execution box in SEM 260 via an access-protected mailbox RAM 415, also discussed below with respect to Figs. 5A and 5B. The software calls thus have access, within the secure execution box in SEM 260, to various security hardware resources, such as described in detail below. Fig. 4 illustrates a block diagram of an embodiment of a portion of an improved version of the computer system 100 including security hardware 370 in a south bridge 330, as well as a crypto-processor 305, according to one aspect of the present invention. The south bridge 330 includes the security hardware 370, an interrupt controller (IC) 365, USB interface logic 134C, and the LPC bus interface logic (LPC BIL) 134D. The IC 365 is coupled to the processor 102. The USB interface logic 134C is coupled to an optional USB hub 315. The LPC bus 118 is coupled to the south bridge 330 through the LPC BIL 134D. The crypto-processor 305 is also coupled to the LPC bus 118. A memory permission table 310 within the crypto-processor 305 provides address mappings and/or memory range permission information. The memory permission table 310 may be comprised in a non-volatile memory. A BIOS 355, i.e. some memory, preferably read-only memory or flash memory, is coupled to the crypto-processor 305.
The security hardware 370 in the south bridge 330 may be operable to provide an SMI interrupt request to the IC 365 for the processor 102. The security hardware 370 may also interact with the crypto-processor 305. Access to the BIOS 355 is routed through the crypto-processor 305. The crypto-processor 305 is configured to accept and transfer access requests to the BIOS 355. The crypto-processor 305 therefore may understand the address mappings of the BIOS 355. According to one aspect of the present invention, the security hardware 370 allows the computer system 100 to become an embodiment of the secure execution box 260 shown in Fig. 3. It is noted that the IC 365 may be included in the processor instead of the south bridge 330. The IC 365 is also contemplated as a separate unit or associated with another component of the computer system 100. It is also noted that the operations of the LPC bus 118 may correspond to the prior art Low Pin Count Interface Specification Revision 1.0 of September 29, 1997. It is further noted that the USB interface logic 134C may couple to the LPC BIL 134D in any of a variety of ways, as is well known in the art for coupling different bus interface logics in a bridge. Figs. 5A and 5B illustrate block diagrams of embodiments of the south bridge 330, including the security hardware 370, according to various aspects of the present invention. In Fig. 5A, the south bridge 330A includes the security hardware 370A and the IC 365. The security hardware 370A includes sub-devices such as an SMM access controller 402A and control logic 420A. The sub-devices may be referred to as security hardware or secure assets of the computer system 100. The SMM access controller 402A includes SMM access filters 410, mailbox RAM 415, and an SMM initiator 425A. As shown in Fig. 5A, the control logic 420A is coupled to control operation of the SMM access controller 402A and the SMM initiator 425A.
Input and output (I/O) to the security hardware 370A pass through the SMM access filters 410 and are routed through the control logic 420A. The SMM access controller 402A includes the SMM access filters 410, which are configured to accept input requests for the sub-devices within the security hardware 370A. When the computer system 100 is in SMM, the SMM access filters are configured to pass access requests (e.g. reads and writes) to the control logic 420A and/or the target sub-device. When the computer system 100 is not in SMM, the SMM access filters are configured to respond to all access requests with a predetermined value, such as all '1's. The SMM access controller 402A also includes the mailbox RAM 415. In one embodiment, the mailbox RAM 415 includes two banks of RAM, such as 512 bytes each, for passing parameters into and out of the secure execution box 260. Parameters passed to or from the sub-devices included within the security hardware 370 are exchanged at the mailbox RAM 415. One bank of RAM 415, an inbox, is write-only to most or all of the computer system in most operating modes. Thus, parameters to be passed to the sub-devices included within the security hardware 370 may be written into the inbox. During selected operating modes, such as SMM, both read and write accesses are allowed to the inbox. Another bank of RAM 415, an outbox, is read-only to most or all of the computer system in most operating modes. Thus, parameters to be received from the sub-devices included within the security hardware 370 may be read from the outbox. During selected operating modes, preferably secure modes, such as SMM, both read and write accesses are allowed to the outbox. The SMM initiator 425A may advantageously provide a convenient way to request that the computer system 100 enter SMM. A signal may be provided to the SMM initiator 425A over the request (REQ) line. The signal should provide an indication of the jump location in SMM memory.
The SMM initiator 425A is configured to make a request for SMM over the SMM request (SMM REQ) line, for example, by submitting an SMI# to the interrupt controller 365. The SMM initiator 425A is also configured to notify the control logic 420A that the request for SMM has been received and passed to the interrupt controller 365. In Fig. 5B, the south bridge 330B includes the security hardware 370B. The IC 365 is shown external to the south bridge 330B. The security hardware 370B includes an SMM access controller 402B and control logic 420B. The SMM access controller 402B includes SMM access filters 410 and mailbox RAM 415. An SMM initiation register 425B is shown external to the south bridge 330B. As shown in Fig. 5B, the control logic 420B is coupled to control operation of the SMM access controller 402B. Input and output (I/O) signals to the security hardware 370B pass through the SMM access filters 410 and are routed through the control logic 420B. The control logic 420B is also coupled to receive an indication of a request for SMM from the SMM initiation register 425B. The SMM access controller 402B includes the SMM access filters 410, which are configured to accept input requests for the sub-devices within the security hardware 370B. When the computer system 100 is in SMM, the SMM access filters are configured to pass access requests (e.g. reads and writes) to the control logic 420B and/or the target sub-device. When the computer system 100 is not in SMM, the SMM access filters may be configured to respond to all access requests with a predetermined value, such as all '1's. The SMM access controller 402B also includes the mailbox RAM 415, described above with respect to Fig. 5A. The SMM initiation register 425B may advantageously provide a convenient way to request that the computer system 100 enter SMM. A signal may be provided to the SMM initiation register 425B over the request (REQ) line.
The signal should provide an indication of the jump location in SMM memory. The SMM initiation register 425B is configured to provide the indication to the control logic 420B. The control logic 420B is configured to make a request for SMM over the SMM request (SMM REQ) line, for example, by submitting an SMI# to the interrupt controller 365. It is noted that in the embodiment illustrated in Fig. 5A, the SMM initiator 425A includes internal logic for handling the SMM request. In the embodiment illustrated in Fig. 5B, the SMM initiation register 425B relies on the control logic 420B to handle the SMM request. It is also noted that the SMM initiator 425A is part of the security hardware 370A, while the SMM initiation register 425B is not part of the security hardware 370B. Fig. 6 illustrates a block diagram of an embodiment of the south bridge 330C including security hardware 370C, according to one aspect of the present invention. As shown, the security hardware 370C includes sub-devices, such as the SMM access controller 402, the control logic 420, a TCO counter 430, scratchpad RAM 440, a random number generator 455, secure system (or SMM) management registers 470, OAR (Open At Reset) locks 450, and an OAR override register 445. The SMM access controller 402 includes one or more access locks 460 within the SMM access filters 410. Some aspects of embodiments of the SMM access controller 402 and the control logic 420 are described herein with respect to Figs. 5A and 5B, above. The embodiment of the SMM access controller 402 illustrated in Fig. 6 includes the one or more access locks 460 within the SMM access filters 410. The access locks 460 provide a means of preventing (or locking) and allowing (or unlocking) access to one or more of the devices within the security hardware 370C. Various embodiments for the one or more access locks 460 are shown in Figs. 10A-10C and described with reference thereto.
In one embodiment, the access locks 460 are open at reset (OAR), allowing the BIOS software access to the security hardware 370. The BIOS software then closes the access locks 460 prior to calling the boot sector code, shown in block 154 in Fig. 2A. In various embodiments, the access locks 460 may be opened by software or hardware to allow for access to the security hardware 370. For example, the access locks 460 may be opened by a signal from the IC 365, the processor 102, or the control logic 420. The access locks 460 may be opened in response to an SMI# or in response to the processor 102 entering SMM. Additional information on the access locks 460 may be obtained from one or more of the methods 900A-900C described below with respect to Figs. 9A-9C. The TCO counter (or timer) 430 may include a programmable timer, such as a count-down timer, that is used to detect a lock-up of the computer system 100. Lock-up may be defined as a condition of the computer system 100 where one or more subsystems or components do not respond to input signals for more than a predetermined period of time. The input signals may include internal signals from inside the computer system 100 or signals from outside the computer system 100, such as from a user input device (e.g. keyboard, mouse, trackball, biometric device, etc.). It is also noted that the lock-ups may be software or hardware in nature. According to various aspects of the present invention, the TCO counter 430 may be programmed and read from inside SMM. The TCO counter 430 is preferably programmed with a value less than a default duration for the kick-out timer 407. In one embodiment, the TCO timer 430 generates an SMI# upon a first expiration of the TCO timer 430, and the TCO timer 430 generates a reset signal for the computer system upon a second, subsequent expiration of the TCO timer 430.
In one embodiment, the TCO timer 430 may be accessed by the computer system 100, or software running in the computer system 100, for the computer system 100 to recover from lock-ups when the computer system is not in SMM. In another embodiment, the TCO timer 430 may be accessed by the computer system 100 both in and out of SMM. The scratchpad RAM 440 includes one or more blocks of memory that are available only while the computer system 100 is in certain operating modes, such as SMM. It is also contemplated that other sub-devices of the security hardware 370 may use the scratchpad RAM 440 as a private memory. One embodiment of the scratchpad RAM 440 includes 1 kB of memory, although other amounts of memory are also contemplated. In one embodiment, the scratchpad RAM is open at reset to all or most of the computer system 100, while in another embodiment, the scratchpad RAM is inaccessible while the computer system is booting. The random number generator (RNG) 455 is configured to provide a random number with a number of bits within a predetermined range. In one embodiment, a new random number from 1 to 32 bits in length is provided in response to a request for a random number. It is noted that restricting access to the RNG, such as only in SMM, may advantageously force software to access the RNG through a standard API (application programming interface), allowing for increased security and easing hardware design constraints. The OAR locks 450 may include a plurality of memory units (e.g. registers), which include associated programming bits (or lock bits) that lock the memory (or memories) used to store BIOS information or other data, for example, the BIOS ROM 355 and SMM ROM 550 in Figs. 7A and 7B below. Each memory unit may have, by way of example, three lock bits associated with it. In one embodiment, four 8-bit registers may store the lock bits for each 512-kB ROM page, one register for every two 64-kB segments.
With sixteen blocks of four registers, a maximum of 8 MB of ROM may be locked. Addressing may be as follows:

  64-kB segments    Register      Address
  0, 1              Register 0    FFBx,E000h
  2, 3              Register 1    FFBx,E001h
  4, 5              Register 2    FFBx,E002h
  6, 7              Register 3    FFBx,E003h

Each physical ROM chip may include four identification pins (ID[3:0]), known as strapping pins. The strapping pins may be used to construct sixteen spaces of 64 kB each. The 'x' in the address may represent the decode of the strapping pins, or the inverse. The lock registers from the OAR locks 450 may include:

  Register      Bit 7       Bits 6:4 (OAR lock)    Bit 3       Bits 2:0 (OAR lock)
  Register 0    Reserved    Segment 1              Reserved    Segment 0
  Register 1    Reserved    Segment 3              Reserved    Segment 2
  Register 2    Reserved    Segment 5              Reserved    Segment 4
  Register 3    Reserved    Segment 7              Reserved    Segment 6

In one embodiment, one bit controls write access, one bit controls read access, and one bit prevents the other two bits from being changed. In one embodiment, once the locking bit is set (also described as the state being locked down), the write access bit and read access bit cannot be reprogrammed until the memory receives a reset signal.
The layout of each register may include:

    Bit     7       6        5        4        3       2        1        0
    Value   Rsvrd   Lock 2   Lock 1   Lock 0   Rsvrd   Lock 2   Lock 1   Lock 0

With a decode of the three lock bits (Read Lock = Data 2, Lock-Down = Data 1, Write Lock = Data 0) including:

    Decode   Read Lock   Lock-Down   Write Lock   Resulting block state
    0x00     0           0           0            Full access
    0x01     0           0           1            Write locked (default state)
    0x02     0           1           0            Lock open (full access locked down)
    0x03     0           1           1            Write locked down
    0x04     1           0           0            Read locked
    0x05     1           0           1            Read and write locked
    0x06     1           1           0            Read locked down
    0x07     1           1           1            Read and write locked down

The embodiment of the security hardware 370C illustrated in Fig. 6 also includes the OAR override register 445. The OAR override register 445 provides a mechanism for allowing (or unlocking) and preventing (or locking) access to one or more of the devices within the security hardware 370C. The OAR override register 445 also provides a mechanism to override the access locks 460. In one embodiment, the OAR override register 445 includes a first indicator that the access locks 460 are to be ignored, with access to the security hardware locked by the access locks 460 either always available or never available, as implemented. The OAR override register 445 may also include a second indicator that the status of the first indicator may be changed, or not. If the second indicator shows that the first indicator may not be changed, then the device including the OAR override register 445 preferably needs a reset for the second indicator to be changed.
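The three-bit per-segment lock decode above can be modeled in a few lines of C. A minimal sketch, assuming bit 0 is the write lock, bit 1 the lock-down bit, and bit 2 the read lock, as in the decode table; the function names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-segment lock bits: bit 0 = write lock, bit 1 = lock-down,
 * bit 2 = read lock. Once the lock-down bit is set, the other two
 * bits cannot be reprogrammed until reset. */

static bool seg_write_locked(uint8_t bits) { return (bits & 0x1) != 0; }
static bool seg_locked_down(uint8_t bits)  { return (bits & 0x2) != 0; }
static bool seg_read_locked(uint8_t bits)  { return (bits & 0x4) != 0; }

/* Attempt to reprogram the lock bits; the write takes effect only
 * while the lock-down bit is clear. Returns the resulting value. */
static uint8_t seg_set_locks(uint8_t current, uint8_t requested)
{
    if (seg_locked_down(current))
        return current;          /* locked down: ignored until reset */
    return requested & 0x7;
}
```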
In other words, the second indicator is preferably OAR, similar to one embodiment of the access locks 460. Methods that include using the access locks 460 and/or the OAR override indicators are described below with respect to Figs. 9A-9F. Various embodiments for the one or more access locks 460 are shown in Figs. 10A-10C and described with reference thereto, and an embodiment of the OAR override register 445 is shown in Fig. 10D and described with reference thereto. In one embodiment, the access locks 460 are open at reset (OAR), allowing the BIOS software access to the security hardware 370. The BIOS software then closes the access locks 460 prior to calling the boot sector code, shown in block 154 in Fig. 2A. In various embodiments, the access locks 460 may be opened by software or hardware to allow for access to the security hardware 370. For example, the access locks 460 may be opened by a signal from the IC 365 or the processor 102 or the control logic 420. The access locks 460 may be opened in response to an SMI# or in response to the processor 102 entering SMM. Additional information on the access locks 460 may be obtained from one or more of the methods 900A-900C described below with respect to Figs. 9A-9C. It is noted that in one embodiment, all of the security hardware 370 (and the SMM initiation register 425B) are inside the RTC battery well 125. In other embodiments, selected sub-devices of the security hardware 370 are excluded from the RTC battery well 125. In one embodiment, only a portion of the scratchpad RAM 440 is inside the RTC battery well 125, with the remaining portion outside the RTC battery well 125. For example, in one embodiment, the mailbox RAM 415 is outside the RTC battery well 125. Figs. 7A and 7B illustrate embodiments of extended BIOS security, according to various aspects of the present invention. In Fig.
7A, the BIOS ROM 355 and the SMM ROM 550 are coupled to the LPC bus 118. As shown, a crypto processor 305, including a secret 610A, is coupled between the BIOS ROM 355 and the LPC bus 118. In Fig. 7B, an extended BIOS ROM 555 is shown coupled to the LPC bus 118. The extended BIOS ROM 555 includes the BIOS ROM 355 and the SMM ROM 550. BIOS ROM 355 memory space in the computer system 100 may include anywhere from 128 kB to 4 MB, divided into 64-kB segments. An additional 4 MB or more of SMM ROM 550 memory space may be addressed via a paging mechanism, for example, where the second page of ROM memory space is within separate chips and selected by an additional set of identification select (IDSEL) pins. Each segment of the BIOS ROM 355 memory space and the SMM ROM 550 memory space may be lockable, and open at reset. In one embodiment, the access protection mechanism (i.e., the lock) is not implemented in the BIOS ROM 355 or SMM ROM 550, but, for example, in the south bridge 330C in the security hardware 370C, as previously described with respect to Fig. 6. In one embodiment, the BIOS ROM 355 includes 4 MB of memory space. Read access to the BIOS ROM 355 memory space may be unrestricted at any time. Write locks on the BIOS ROM 355 memory space may be OAR and cover the memory space from FFFF,FFFFh to FFC0,0000h, in 32-bit address space on the LPC bus 145. In one embodiment, the crypto processor 305 is a specialized processor that includes specialized cryptographic hardware. In another embodiment, the crypto processor 305 includes a general-purpose processor programmed with cryptographic firmware or software. In still another embodiment, the crypto processor 305 includes a general-purpose processor modified with specialized cryptographic hardware. Other embodiments are also contemplated.
For example, the BIOS ROM 355 may be coupled to the LPC bus 118, and the crypto processor 305 may be coupled between the SMM ROM 550 and the LPC bus 118. Also, the crypto processor 305 may be coupled between the extended BIOS ROM 555 and the LPC bus 118. Figs. 8A and 8B illustrate block diagrams of embodiments of a BIOS ROM 355 and an SMM ROM 550 for secure SMM operations, respectively, according to various aspects of the present invention. As shown in Fig. 8A, the BIOS ROM 355 may include data storage 608B, a secret 610C, and private memory 606. As shown in Fig. 8B, the SMM ROM 550 may be divided into a plurality of SMM ROM blocks 615-617, a stored secret 610D, a plurality of public ROM blocks 625-630, one or more reserved ROM blocks 635, and one or more registers 640. The plurality of SMM ROM blocks 615-617 may include an SMM ROM 0 block 615, an SMM ROM 1 block 616, and an SMM ROM 2 block 617. The plurality of public ROM blocks 625-630 may include a public ROM block 0 625 and a public ROM block 1 630. One embodiment of access rights, lock status, and 32-bit address ranges in the LPC bus 118 space is given here in table form.
    ROM Block         Read Access    Write Lock                  Address Range
    SMM ROM 0 (615)   SMM only       Write once                  FFBx,1FFFh : FFBx,0000h
    SMM ROM 1 (616)   SMM only       Never erase                 FFBx,3FFFh : FFBx,2000h
    SMM ROM 2 (617)   SMM only       None                        FFBx,5FFFh : FFBx,4000h
    Public 0 (625)    Unrestricted   Write once in SMM           FFBx,9FFFh : FFBx,8000h
    Public 1 (630)    Unrestricted   Never erase, write in SMM   FFBx,BFFFh : FFBx,A000h
    Reserved (635)    N/A            N/A                         FFBx,DFFFh : FFBx,C000h
    Registers (640)   N/A            N/A                         FFBx,FFFFh : FFBx,E000h

The 'x' in the address ranges given in the table may denote the strapping-pin decode or its inverse. In one embodiment, the ROM blocks 615-617 and 625-630 in the table are each 64 kB in size. In one embodiment, the computer system may support up to 8 MB of extended BIOS ROM 555 storage, divided into sixteen pages of 512 kB each. In another embodiment, the memory address range from FFBx,FFFFh down to FFBx,0000h includes the plurality of SMM ROM blocks 615-617, the plurality of public ROM blocks 625-630, and the one or more registers 640. The one or more reserved ROM blocks 635 may be used for future expansion. The one or more registers 640 may store additional data, as needed. Figs. 9A-9G illustrate flowcharts of embodiments of methods 900A-900G that attempt to access the security hardware 370, which may be locked, according to various aspects of the present invention. Fig. 9A shows a method 900A of locking the security hardware 370 as a part of the boot (or cold reboot) process.
Fig. 9B shows a method 900B of unlocking and later locking the security hardware 370 as a part of a reboot (or warm boot) process. Fig. 9C shows a method 900C of checking for rights to lock or unlock the security hardware 370 and checking a bit to disable changing the rights. Fig. 9D shows a method 900D of attempting to use the security hardware 370 while the computer system 100 is not in SMM. Fig. 9E shows a method 900E of checking and/or setting the lock on the OAR access locks 460 and checking the bit to disable changing the lock. Fig. 9F shows a method 900F of unlocking and later locking the security hardware 370 while the computer system 100 is in SMM. Fig. 9G shows a method 900G of checking for rights to unlock and later lock the security hardware 370 while the computer system 100 is in SMM. Referring now to Fig. 9A, the method 900A includes the processor executing the BIOS code instructions from SMM space in the RAM memory, in block 920. The BIOS code, executed by the processor, performs a power-on self test (POST), in block 925. The method 900A includes accessing the security hardware 370, in block 930. The accesses to the security hardware 370 may initiate an unlocking of the security hardware 370, if the security hardware 370 is not open-at-reset. The accesses to the security hardware 370 may be by the BIOS code or another device or subsystem in the computer system 100, or from outside the computer system 100, if allowed. The method 900A may optionally include entering a BIOS management mode, in block 932. The BIOS management mode could allow for, for example, remote booting instructions, remote or secure permission to continue the boot sequence, other remote operations or remote hardware accesses or set-ups, or choosing between or among boot choices, such as hardware configurations and/or operating systems or other software choices. The BIOS code next looks for additional BIOS code, such as from a video controller, IDE controller, SCSI controller, etc.
and displays a start-up information screen, in block 935. As examples, the video controller BIOS is often found at C000h, while the IDE controller BIOS code is often found at C800h. The BIOS code may perform additional system tests, such as a RAM memory count-up test, and a system inventory, including identifying COM (serial) and LPT (parallel) ports, in block 940. The BIOS code also identifies plug-and-play devices and other similar devices and then displays a summary screen of devices identified, in block 945. The method includes closing the access locks to the security hardware, in block 950. The BIOS code or another device or agent in the computer system 100 may close the access locks. The BIOS code identifies the boot location, and the corresponding boot sector, in block 955. The boot location may be on a floppy drive, a hard drive, a CDROM, a remote location, etc. The BIOS code next calls the boot sector code at the boot location to boot the computer system, such as with an operating system, in block 960. Referring now to Fig. 9B, the method 900B includes opening the access locks to the security hardware, in block 915. The processor executes the BIOS code instructions from SMM space in the RAM memory, in block 920. The computer system accesses the security hardware 370 while in SMM, while booting, in block 930. The method 900B may optionally include entering a BIOS management mode, in block 932. The BIOS code next looks for additional BIOS code, such as from a video controller, IDE controller, SCSI controller, etc. and displays a start-up information screen, in block 935. As examples, the video controller BIOS is often found at C000h, while the IDE controller BIOS code is often found at C800h. The BIOS code also identifies plug-and-play devices and other similar devices and then displays a summary screen of devices identified, in block 945. The BIOS code closes the access locks to the security hardware, in block 950.
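The open-access-close life cycle of methods 900A and 900B can be modeled with a small C sketch. This is an illustrative model only, not the actual hardware interface: the struct and function names are invented.

```c
#include <stdbool.h>

/* Illustrative model of the access-lock life cycle: the locks are open
 * at reset (or opened at the start of a warm boot, block 915), the
 * BIOS accesses the security hardware while they are open (block 930),
 * and closes them before calling the boot sector code (block 950). */

struct access_locks { bool open; };

static void locks_open_at_reset(struct access_locks *l) { l->open = true;  }
static void locks_close(struct access_locks *l)         { l->open = false; }

/* An access to the security hardware succeeds only while the locks are
 * open (e.g., during boot, or after an SMI# reopens them). */
static bool access_security_hw(const struct access_locks *l)
{
    return l->open;
}
```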
The BIOS code identifies the boot location, and the corresponding boot sector, in block 955. The boot location may be on a floppy drive, a hard drive, a CDROM, a remote location, etc. The BIOS code next calls the boot sector code at the boot location to boot the computer system, such as with an operating system, in block 960. Turning now to Fig. 9C, the method 900C includes deciding whether to set the OAR-lock, in decision block 946. The OAR-lock in decision block 946 may correspond to the first indicator described above with respect to Fig. 6. The OAR-lock in decision block 946 may also correspond to setting the OAR lock override bit 1050 described below with respect to Fig. 10D. If the decision is made to set the OAR-lock, then, according to one embodiment, all access to the security hardware 370 is blocked, in block 947. If the decision is made not to set the OAR-lock, then the method 900C moves to decision block 948. In decision block 948, the method 900C decides whether to set the OAR-lock change bit. The OAR-lock change bit in decision block 948 may correspond to the second indicator described above with respect to Fig. 6. The OAR-lock change bit in decision block 948 may also correspond to setting the change OAR lock override bit 1055 described below with respect to Fig. 10D. If the decision is made to set the OAR-lock change bit, in decision block 948, then, according to one embodiment, the OAR-lock cannot be changed thereafter, as changes to the OAR-lock are themselves locked out, in block 949. Turning now to Fig. 9D, the method 900D includes a processor, such as processors 102, etc., operating in a mode that is not SMM, in block 904. In block 906, code being processed by the processor attempts to access any part of the security hardware 370, or other hardware whose access may require a check of an access lock similar to the access locks 460. The method checks, at decision block 907, to see if the security hardware 370 is available.
If the security hardware 370 is not available, at decision block 907, then the method 900D exits or returns. If the security hardware 370 is available, at decision block 907, then the method 900D accesses the security hardware 370, at block 930. The method, optionally, closes the access locks to the security hardware, if necessary, at block 950. Turning now to Fig. 9E, the method 900E includes an embodiment of decision block 907 from Fig. 9D. The method 900E includes checking if access to all security hardware is locked out, i.e., forbidden, at decision block 990. If access to all security hardware is locked out, then at decision block 990 the method 900E moves to decision block 992. If access to all security hardware is not locked out, then the method 900E moves to decision block 991. In decision block 991, the method 900E checks if the requested security hardware is locked out (e.g., separately, using one or more access locks). If the requested security hardware is locked out, then the method 900E moves to decision block 992. If the requested security hardware is not locked out, then the method 900E moves directly to block 993. In decision block 992, the method 900E checks if the access lock for the requested security hardware can be changed, e.g., unlocked. If the access lock for the requested security hardware cannot be changed, then in decision block 992 the method 900E aborts the access to the security hardware. If the access lock for the requested security hardware can be changed, then the method 900E requests authorization, such as from a user, to change the access lock for the requested security hardware, in decision block 993. If the authorization to change the access lock for the requested security hardware is not given, then the method 900E aborts the access to the security hardware.
If the authorization to change the access lock for the requested security hardware is given, then the method 900E moves to block 994 and changes the lock to allow access to the requested security hardware. It is noted that any authorization method may be used in decision block 993. Any authorization methods known in the art that have security properties in the presence of an observer may be used. Turning now to Fig. 9F, the method 900F includes the processor loading code instructions into SMM space in the RAM memory, in block 905. For example, loading code instructions into SMM space may occur in response to an SMI#. The access locks to the security hardware are opened in block 915. The opening of the access locks may be through the SMM code instructions or through a hardware mechanism, or both. The processor processes the code instructions from SMM space in the RAM memory, in block 920. The method 900F includes accessing the security hardware 370, in block 930. As the computer system is in SMM and the access locks have been opened, in block 915, the security hardware is available to most or all of the subsystems of the computer system 100, as desired. The method 900F includes closing the access locks to the security hardware 370, in block 950. The processor reloads the previous state and continues operating, in block 965. It is noted that the processing of the SMM code instructions, in block 920, may continue while the actions described in block 930 occur. Preferably, the actions described in block 950 occur after the processing of the SMM code instructions, in block 920, has ceased. Turning now to Fig. 9G, the method 900G includes the processor loading code instructions into SMM space in the RAM memory, in block 905. For example, the loading of code instructions into SMM space may occur in response to an SMI#. The method 900G next checks if the security hardware is available, in decision block 907.
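Before continuing with method 900G, the lock-check and authorization flow of method 900E described earlier can be summarized as a C sketch. The struct layout and names are illustrative assumptions; only the decision logic follows the source.

```c
#include <stdbool.h>

/* Sketch of method 900E: the global lock-out is checked first, then
 * the per-device lock; a locked device is reachable only if its access
 * lock may be changed and the user authorizes the change. */

struct hw_locks {
    bool all_locked;      /* all security hardware locked out (block 990) */
    bool dev_locked;      /* requested device locked out (block 991) */
    bool lock_changeable; /* the access lock may be changed (block 992) */
};

static bool may_access(struct hw_locks *l, bool user_authorizes)
{
    if (l->all_locked || l->dev_locked) {
        if (!l->lock_changeable)
            return false;             /* abort the access */
        if (!user_authorizes)
            return false;             /* authorization denied (block 993) */
        l->all_locked = false;        /* change the lock to allow access */
        l->dev_locked = false;        /* (block 994) */
    }
    return true;
}
```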
If the security hardware is not available, then in decision block 907 the method 900G aborts the access to the security hardware. If the security hardware is available, then the method 900G continues with block 920. The processor executes the code instructions from SMM space in the RAM memory, in block 920. The method 900G includes accessing the security hardware 370, in block 930. As the computer system is in SMM and the access locks are open, as determined in decision block 907, the security hardware is available to most or all of the subsystems of the computer system 100, as desired. The method 900G includes closing the access locks to the security hardware 370, in block 950. The processor reloads the previous state and continues operating, in block 965. It is noted that the executing of the SMM code instructions, in block 920, may continue while the actions described in block 930 occur. Preferably, the actions described in block 950 occur after the processing of the SMM code instructions, in block 920, has ceased. It is noted that other processes of locking and unlocking the security hardware 370, other than the access locks, may be used. The methods 900A-900G are intended to extend to those other processes. For the purposes of this disclosure, the computer system is considered to have two operating modes, normal and SMM. There are boot phases that are not in SMM, but they are, by definition, as trusted as SMM, and therefore considered equivalent to SMM herein. The boot code configures and arranges how SMM will work. SMM derives its trustworthiness from the trustworthiness of the boot code. It is contemplated that the standard boot sequence could be varied. Variations include a transition to a setup environment where the user may have the opportunity to input parameters.
The input parameters may, for example, modify the BIOS code. Most setup environments return to reset before loading the operating system and operating in normal mode. This is a form of maintenance mode that is an alternative to loading the operating system and is not part of the normal mode. As contemplated, the access locks would not be set in this mode. It would be part of the boot process and as trusted as SMM, although security measures could be used if remote accesses are possible inside the setup environment. Figs. 10A, 10B, and 10C illustrate block diagrams of embodiments 460A, 460B, and 460C of the access locks 460 shown in Fig. 6. In Fig. 10D, a block diagram of an embodiment of the OAR override register 445, from Fig. 6, is shown. In the embodiment 460A shown in Fig. 10A, the one or more access locks 460 include a sequester bit register 1005. The bit stored in the sequester bit register 1005 may be set or cleared as a flag. In the embodiment 460B shown in Fig. 10B, the one or more access locks 460 include two or more sequester registers configured to store two or more sequestering bits to lock or unlock all of the devices within the security hardware 370. The additional bits beyond the sequester bit stored in the sequester register 1005 allow for flag bits for locking and unlocking of privileges separately. For example, a write privilege could be locked, while a read privilege could be unlocked. In the embodiment of Fig. 10C, the one or more access locks 460 include one or more sequester registers 1015A-1015N for each device within the security hardware 370C. In Fig. 10D, the OAR override 445 includes an OAR-lock override register 1050 that stores at least one OAR-lock override bit, and a change OAR-lock override register 1055 that stores at least one change OAR-lock override bit.
According to one embodiment of the present invention, if the OAR-lock override bit is not set, then access to the security hardware 370 is determined by the settings of the access locks 460. If the OAR-lock override bit is set, then the access locks 460 are ignored in favor of the security hardware 370 being either always available or never available, based on the implementation. Preferably, the security hardware is never available when the OAR-lock override bit is set. The setting of the OAR-lock override bit may be changed in SMM (or with authorization) unless the change OAR-lock override bit is set. Preferably, the change OAR-lock override bit is OAR, similar to one embodiment of the access locks 460, and may be set, in various embodiments, with the access locks 460 at boot time, such as in block 950. Figs. 11A, 11B, 12, and 13 illustrate flowcharts of embodiments of methods 1100A, 1100B, 1110A, and 1120 for secure access to storage, according to various aspects of the present invention. Fig. 11A shows a flowchart of the method 1100A where a security device maintains secure access to a storage device, according to one aspect of the present invention. Fig. 11B shows a flowchart of the method 1100B where a crypto processor maintains secure access to a memory, according to one aspect of the present invention. Fig. 12 shows a flowchart of the method 1110A where a security device provides secure access control to a storage device using a challenge-response authentication protocol, according to one aspect of the present invention. Fig. 13 shows a flowchart of the method 1120 where a secret is used to unlock data access to a secure storage device. Turning to Fig. 11A, the method 1100A includes the security device receiving a transaction request for a storage location associated with the storage device connected to the security device (block 1105A). The security device provides access control for the storage device (block 1110A).
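The OAR-lock override semantics described earlier can be sketched in C, taking the preferred choice that the security hardware is never available when the override bit is set. The struct and function names are illustrative assumptions.

```c
#include <stdbool.h>

/* Sketch of the two OAR-override bits: the override bit forces the
 * access locks to be ignored (here: hardware never available), and the
 * change bit freezes the override bit until reset. */

struct oar_override {
    bool override_set;   /* OAR-lock override bit (register 1050) */
    bool change_locked;  /* change OAR-lock override bit (register 1055) */
};

static bool hw_available(const struct oar_override *o, bool lock_open)
{
    if (o->override_set)
        return false;            /* access locks ignored: never available */
    return lock_open;            /* otherwise the access locks decide */
}

/* The override bit may be flipped (e.g., in SMM) only while the
 * change bit is clear. Returns true if the bit was changed. */
static bool set_override(struct oar_override *o, bool value)
{
    if (o->change_locked)
        return false;
    o->override_set = value;
    return true;
}
```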
One embodiment of the access control shown in block 1110A is illustrated by the method 1110A shown in Fig. 12. According to the method 1100A, the security device maps the storage location in the transaction request according to the address mapping of the storage device (block 1115A). The security device provides the transaction request to the storage device (block 1120A). Under normal circumstances, the storage device will perform the requested transaction (block 1125A). In various embodiments, the security device associated with the method 1100A may include a crypto processor or a block of logic configured to provide security for the storage device. The storage device may include an electronic storage medium like a memory or a magnetic or optical storage medium like a hard drive or an optical drive. The memory may include a RAM, a ROM, or a flash memory. The hard drive or optical drive may be fixed or removable. The transaction request may include, for example, a read request, a write request, or a combination of read and write requests. It is noted that in various embodiments, the memory (or the storage device) may include further security hardware of its own. Turning to Fig. 11B, the method 1100B includes the crypto-processor receiving a transaction request for a memory location associated with the memory connected to the crypto-processor (block 1105B). The crypto-processor provides access control for the memory (block 1110B). One embodiment of the access control shown in block 1110B is illustrated in Fig. 12. According to the method 1100B, the crypto-processor maps the memory location in the transaction request according to the address mapping of the memory (block 1115B). The crypto-processor provides the transaction request to the memory (block 1120B). Under normal circumstances, the memory will perform the requested transaction (block 1125B). Turning to Fig.
12, the method 1110A includes the security device determining if a lock is in place for the storage location (block 1205). A transaction request may have been received for the storage location. If the lock is not in place (block 1210), then the method 1110A moves past the authentication portion. If the lock is in place (block 1210), then the security device provides a challenge for the storage location (block 1215). The challenge may be associated with the storage location or with the storage device that includes the storage location. The challenge may be in response to the transaction request. Next, the security device receives a response to the challenge (block 1220). The security device evaluates the response by comparing the response to an expected response (block 1225). If the evaluation is not correct (block 1230), then the method ends. If the evaluation is correct (block 1230), then the method proceeds with the security device providing the transaction request to the storage device (block 1235). In various embodiments, the security device associated with the method 1110A may include a crypto processor or a block of logic configured to provide security for the storage device. The storage device may include an electronic storage medium like a memory or a magnetic or optical storage medium like a hard drive or an optical drive. The memory may include a RAM, a ROM, or a flash memory. The hard drive or optical drive may be fixed or removable. The transaction request may include, for example, a read request, a write request, or a combination of read and write requests. Turning to Fig. 13, the method 1120 includes storing a secret in a storage device (block 1305). The storage device may include only a portion of a physical device. The storage device itself may be embodied as any storage device known in the art. The method 1120 may also include storing data in the storage device (block 1310) and storing code in the storage device (block 1315).
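The challenge-response gate of method 1110A can be sketched in a few lines of C. The XOR "expected response" below is a placeholder standing in for a real authentication function, and all names are illustrative; the source does not specify the authentication algorithm.

```c
#include <stdbool.h>
#include <stdint.h>

/* Placeholder expected-response function (NOT a real MAC); method
 * 1110A only requires that the response match an expected value. */
static uint32_t expected_response(uint32_t challenge, uint32_t secret)
{
    return challenge ^ secret;
}

/* Gate a transaction: if a lock is in place for the storage location
 * (block 1205), the response to the challenge must match the expected
 * response (blocks 1215-1230) before the request is forwarded. */
static bool gate_transaction(bool lock_in_place, uint32_t challenge,
                             uint32_t response, uint32_t secret)
{
    if (!lock_in_place)
        return true;             /* no lock: skip authentication */
    return response == expected_response(challenge, secret);
}
```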
The method 1120 may also include providing a lock (e.g., a lock bit or bits) to secure data stored in the storage device or the storage device itself (block 1320). Note that the above steps of method 1120 (blocks 1305-1320) may be performed relatively proximate in time, such as when the storage device is manufactured, installed, or initialized. The method 1120 also includes reading the secret from the storage device (block 1325), such as, for example, when the computer system including the storage device or coupled to communicate with the storage device is booted. For the secret to remain secure, the reading of the secret preferably occurs when the storage device is in a secure or trusted configuration. The method 1120 may also read the code from the storage device (block 1330). The method 1120 stores the secret in a secure location (block 1325) and also may store the code in the secure location (block 1330). The secure location may be in the SMM memory space previously described, or in a secure memory, register, or other storage location in the computer system 100, such as in the processor 102 or in the south bridge 330. In various embodiments, the storage device associated with the method 1120 may include an electronic storage medium like a memory or a magnetic or optical storage medium like a hard drive or an optical drive. The memory may include a RAM, a ROM, or a flash memory. The hard drive or optical drive may be fixed or removable. A read in method 1120 may describe any transaction request, such as, for example, a read request, a write request, or a combination of read and write requests. For the purposes of this disclosure, references to ROM are to be construed as also applying to flash memory and other substantially non-volatile memory types. Note that while the methods of the present invention disclosed herein have been illustrated as flowcharts, various elements of the flowcharts may be omitted or performed in different order in various embodiments.
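The secret-handling flow of method 1120 can be summarized with a minimal C sketch: the secret is provisioned and locked early, and at boot it is copied into a secure location only while the system is in a trusted configuration. The structures and names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of method 1120's secret handling. */
struct secure_storage {
    uint32_t secret;
    bool     locked;     /* lock bit securing the stored data */
};

/* Provisioning step (blocks 1305-1320): store the secret, set the lock. */
static void provision(struct secure_storage *s, uint32_t secret)
{
    s->secret = secret;
    s->locked = true;
}

/* Boot-time step (block 1325 onward): copy the secret to a secure
 * location, but only while the system is in a trusted configuration. */
static bool read_secret(const struct secure_storage *s, bool trusted,
                        uint32_t *secure_location)
{
    if (!trusted)
        return false;            /* secret stays sealed */
    *secure_location = s->secret;
    return true;
}
```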
Note also that the methods of the present invention disclosed herein admit to variations in implementation. Some aspects of the invention as disclosed above may be implemented in hardware or software. Thus, some portions of the detailed descriptions herein are presented in terms of a hardware-implemented process and some portions are presented in terms of a software-implemented process involving symbolic representations of operations on data bits within a memory of a computing system or computing device. These descriptions and representations are the means used by those in the art to convey most effectively the substance of their work to others skilled in the art using both hardware and software. The process and operation of both require physical manipulations of physical quantities. In software, usually, though not necessarily, these quantities take the form of electrical, magnetic, or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated or otherwise apparent, throughout the present disclosure, these descriptions refer to the action and processes of an electronic device that manipulates and transforms data represented as physical (electronic, magnetic, or optical) quantities within some electronic device's storage into other data similarly represented as physical quantities within the storage, or in transmission or display devices.
Exemplary of the terms denoting such a description are, without limitation, the terms "processing," "computing," "calculating," "determining," "displaying," and the like. Note also that the software-implemented aspects of the invention are typically encoded on some form of program storage medium or implemented over some type of transmission medium. The program storage medium may be magnetic (e.g., a floppy disk or a hard drive) or optical (e.g., a compact disk read-only memory, or "CD-ROM"), and may be read only or random access. Similarly, the transmission medium may be twisted wire pairs, coaxial cable, optical fiber, or some other suitable transmission medium known to the art. The invention is not limited by these aspects of any given implementation. The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
Memory devices, systems including memory devices, and methods of operating memory devices are described, in which a memory device may select an option for a host device to access a memory array including a first portion configured to store user data and a second portion configured to store different data based on whether an ECC function of the memory device is enabled or disabled — e.g., storing ECC data when the ECC function is enabled, storing additional user data, metadata, or both when the ECC function is disabled. The host device may disable the ECC function and transmit an input to the memory device as to how to access the memory array. The memory device, based on the input, may select the option for the host device to access the memory array and communicate with the host device in accordance with the selected option.
CLAIMS
What is claimed is:
1. An apparatus comprising: a memory array corresponding to a plurality of memory addresses, each memory address of the plurality of memory addresses associated with a first portion of the memory array configured to store user data and with a second portion of the memory array configured to store error-correcting code (ECC) data associated with the user data of the first portion when an ECC function of the apparatus is enabled; a register configured to store one or more bits corresponding to a plurality of options for a host device to access the memory array when the ECC function is disabled; and circuitry configured to: select an option from the plurality of options based on an input from the host device when the ECC function is disabled; update the one or more bits in the register based on the selected option; and communicate with the host device in accordance with the selected option when the ECC function is disabled.
2. The apparatus of claim 1, wherein the circuitry is further configured to bypass an ECC circuit that performs the ECC function for the user data.
3. The apparatus of claim 1, wherein the selected option includes retrieving the user data uncorrected by the ECC function or storing the user data without performing the ECC function.
4. The apparatus of claim 1, wherein the selected option includes decoding a first segment of a memory address associated with an access command to identify the second portion of the memory array.
5. The apparatus of claim 4, wherein the first segment of the memory address corresponds to one or more address pins that are separate from a quantity of address pins corresponding to the plurality of memory addresses.
6. The apparatus of claim 1, wherein the selected option includes accessing the second portion of the memory array based on a memory address of an access command, the memory address corresponding to a memory address of the plurality of memory addresses.
7. The apparatus of claim 1, wherein the selected option includes enabling a first set of data pins corresponding to additional data for the second portion of the memory array.
8. The apparatus of claim 7, wherein the first set of data pins are separate from a second set of data pins corresponding to the user data for the first portion of the memory array.
9. The apparatus of claim 1, wherein the selected option includes determining a burst length for communicating with the host device, the burst length corresponding to the user data for the first portion and additional data for the second portion.
10. The apparatus of claim 9, wherein communicating with the host device includes transmitting or receiving the user data and the additional data for the burst length.
11. A method comprising: receiving, at a memory device, a signaling that indicates an option selected from a plurality of options for a host device to access a memory array of the memory device when an error-correcting code (ECC) function of the memory device is disabled, the memory array corresponding to a plurality of memory addresses each associated with a first portion of the memory array configured to store user data and with a second portion of the memory array configured to store ECC data associated with the user data of the first portion when the ECC function of the memory device is enabled; storing, in a register of the memory device, one or more bits corresponding to the option selected from the plurality of options; receiving, at the memory device, an access command associated with a memory address of the plurality of memory addresses; accessing the first portion of the memory array, the second portion of the memory array, or both in response to the access command and based at least in part on the selected option as indicated by the one or more bits stored in the register; and communicating with the host device in accordance with the selected option.
12. The method of claim 11, further comprising decoding a first segment of the memory address associated with the access command to identify the second portion of the memory array.
13. The method of claim 11, wherein accessing the first portion of the memory array comprises retrieving the user data uncorrected by the ECC function or storing the user data without performing the ECC function.
14. The method of claim 11, further comprising enabling a first set of data pins corresponding to additional data for the second portion of the memory array.
15. The method of claim 11, further comprising determining a burst length for communicating with the host device, the burst length corresponding to the user data for the first portion and additional data for the second portion.
16. A memory system comprising: a host device; and a memory device including: a memory array corresponding to a plurality of memory addresses, each memory address of the plurality of memory addresses corresponding to a first portion of the memory array configured to store user data and to a second portion of the memory array configured to store error checking and correcting (ECC) data associated with the user data of the first portion when an ECC function of the memory device is enabled; and a register configured to store one or more bits corresponding to a plurality of options for the host device to access the memory array when the ECC function is disabled; wherein the host device is configured to transmit an input directed to the plurality of options to access the memory array, and wherein the memory device is configured to: select an option from the plurality of options based on the input from the host device; update the one or more bits in the register based on the selected option; and communicate with the host device in accordance with the selected option.
17. The memory system of claim 16, wherein the host device is further configured to perform a separate ECC function that is different from the ECC function of the memory device.
18. The memory system of claim 16, wherein the host device is further configured to generate a memory address comprising a first segment and a second segment, the first segment of the memory address corresponding to one or more address pins that are separate from a quantity of address pins corresponding to the second segment for the plurality of memory addresses.
19. The memory system of claim 16, wherein the host device is further configured to activate one or more channels associated with a first set of data pins of the memory device, the first set of data pins corresponding to additional data for the second portion and separate from a second set of data pins corresponding to the user data for the first portion.
20. The memory system of claim 16, wherein the host device is further configured to communicate with the memory device for a burst length corresponding to the user data for the first portion and additional data for the second portion.
SEMICONDUCTOR DEVICE WITH USER DEFINED OPERATIONS AND ASSOCIATED METHODS AND SYSTEMS
TECHNICAL FIELD
[0001] The present disclosure generally relates to semiconductor devices, and more particularly relates to a semiconductor device with user defined operations and associated methods and systems.
BACKGROUND
[0002] Memory devices are widely used to store information related to various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Memory devices are frequently provided as internal, semiconductor, integrated circuits and/or external removable devices in computers or other electronic devices. There are many different types of memory, including volatile and non-volatile memory. Volatile memory, including random-access memory (RAM), static random access memory (SRAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others, requires a source of applied power to maintain its data. Non-volatile memory, by contrast, can retain its stored data even when not externally powered. Non-volatile memory is available in a wide variety of technologies, including flash memory (e.g., NAND and NOR), phase change memory (PCM), ferroelectric random access memory (FeRAM), resistive random access memory (RRAM), and magnetic random access memory (MRAM), among others. Improving memory devices, generally, may include increasing memory cell density, increasing read/write speeds or otherwise reducing operational latency, increasing reliability, increasing data retention, reducing power consumption, or reducing manufacturing costs, among other metrics.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] Figure 1 is a block diagram schematically illustrating a memory device in accordance with an embodiment of the present technology.
[0004] Figure 2 is a block diagram schematically illustrating a memory device in accordance with an embodiment of the present technology.
[0005] Figure 3 is a table illustrating various options for user defined operations in accordance with an embodiment of the present technology.
[0006] Figure 4 is a block diagram schematically illustrating a memory system in accordance with an embodiment of the present technology.
[0007] Figure 5 is a flow chart illustrating a method of operating a memory device in accordance with an embodiment of the present technology.
DETAILED DESCRIPTION
[0008] A memory device may include error checking and correcting (ECC) functions to generate reliable data — e.g., an on-die ECC function. An algorithm, program, or circuitry that performs the ECC function may be referred to as or include aspects of error-correcting codes. Such a memory device may include an ECC circuit and a group of memory cells (e.g., a portion of the memory array configured to store ECC parity bits, which may be variously referred to as an ECC array, an ECC plane, and/or a parity plane) that supports the on-die ECC function. In some embodiments, the group of memory cells may be reserved to internally store ECC data (e.g., internal to the memory device and inaccessible by users), and the specified storage capacity of the memory device may not include the ECC array capacity. In some examples, the ECC array capacity may occupy an appreciable portion of a memory array of the memory device — e.g., approximately 6% of a total memory array space. In some memory systems that include a host device coupled with such memory devices, the host device (or the memory system) may perform its own ECC functions without entirely relying on the on-die ECC function. For example, the host device may be configured to perform a system level ECC function independent of the ECC data or the ECC algorithm of the memory devices.
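The approximately 6% figure above can be sanity-checked with a short sketch. It assumes the device uses a single-error-correcting (SEC) Hamming code over a 128-bit data word, a common but here hypothetical choice for on-die ECC; the actual code and word size of any given device may differ:

```python
def sec_parity_bits(data_bits: int) -> int:
    """Minimum parity bits p for a Hamming SEC code: 2**p >= data_bits + p + 1."""
    p = 0
    while 2 ** p < data_bits + p + 1:
        p += 1
    return p

def ecc_array_fraction(data_bits: int) -> float:
    """Fraction of the total array (data + parity) occupied by parity storage."""
    p = sec_parity_bits(data_bits)
    return p / (data_bits + p)

# A 128-bit data word needs 8 parity bits, so the parity plane
# occupies 8/136 of the array, i.e., roughly 6%.
print(sec_parity_bits(128))                      # 8
print(round(ecc_array_fraction(128) * 100, 1))   # 5.9
```

Under this assumption, the parity plane is about 5.9% of the total array, consistent with the "approximately 6%" noted above.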
As a result, the on-die ECC function may not be required by the memory system (or the host device) in some embodiments, and the memory device may be configured to provide additional features that may be otherwise unavailable.
[0009] Several embodiments of the present technology are directed to memory devices, systems including memory devices, and methods of operating memory devices in which a host device may be configured to disable an ECC function of a memory device and access a memory array of the memory device. In some embodiments, the memory array may include a first portion configured to store user data (e.g., main array, user data plane) and a second portion configured to store error checking and correcting (ECC) data associated with the user data of the first portion (e.g., ECC array, ECC plane, parity plane) when the ECC function of the memory device is enabled. As set forth herein, a set of memory addresses may correspond to the memory array, where each memory address of the set corresponds to the first portion and to the second portion of the memory array. In one embodiment, the memory device includes a register (e.g., a mode register) to indicate whether the ECC function is enabled or disabled. Further, the register (or a different register) may be configured to store one or more bits corresponding to a set of options for the host device to access the memory array when the ECC function is disabled.
[0010] When the ECC function is disabled, the memory device may configure the second portion of the memory array to store additional user data, metadata, or both. Metadata in a memory device may refer to various data associated with operational aspects of the memory device, such as operating temperatures, latency settings, and data transmission parameters. In some embodiments, the memory device may store the metadata in one or more registers, to which an output circuit of the memory device has access.
In some embodiments, the memory device may store the metadata in the memory array (including the second portion of the memory array reserved for the ECC functionality, when the ECC functionality is disabled). Further, the memory device may bypass an ECC circuit that performs the ECC function for the user data. Additionally or alternatively, the memory device may provide a set of options for the host device to access (e.g., read from, write to, erase portions of, etc.) the memory array, such as accessing the first portion of the memory array only (e.g., disregarding the second portion of the memory array), enabling additional address pins that may separately identify the second portion of the memory array, accessing the second portion of the memory array based on the same set of memory addresses that corresponds to the first portion and to the second portion of the memory array, enabling additional data pins for communicating additional data (e.g., additional user data, metadata) for the second portion of the memory array, determining a different burst length (e.g., an increased burst length) for communicating with the host device, etc.
[0011] In some embodiments, the host device may disable the ECC function of the memory device and transmit an input to the memory device as to how the host device may proceed to access the memory array. The memory device may select an option from the set of options based on the input from the host device and update one or more bits in the register based on the selected option. Further, the host device and the memory device may establish a proper protocol to communicate in accordance with the selected option. In some embodiments, the memory device may decode a modified memory address of an access command that utilizes extra address pins corresponding to the second portion.
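The register-driven flow just described, where the host's input selects one of the access options and the device latches it into register bits, can be modeled with a minimal sketch. The option names and the single-field register layout below are assumptions for illustration, not the device's actual mode-register encoding:

```python
from enum import Enum

class AccessOption(Enum):
    """Illustrative names for the access options listed above."""
    MAIN_ONLY = 0        # access the first portion only, disregarding the second
    EXTRA_ADDR_PINS = 1  # extra address pins separately identify the second portion
    SHARED_ADDRESS = 2   # one memory address reaches both portions
    EXTRA_DATA_PINS = 3  # extra data pins carry the additional data
    LONGER_BURST = 4     # an increased burst length carries the additional data

class ModeRegister:
    """Tracks the ECC-enable flag and the option bits (hypothetical layout)."""
    def __init__(self) -> None:
        self.ecc_enabled = True
        self.option_bits = 0

    def apply_host_input(self, requested: AccessOption) -> None:
        """Disable the on-die ECC function and latch the selected option."""
        self.ecc_enabled = False
        self.option_bits = requested.value

reg = ModeRegister()
reg.apply_host_input(AccessOption.LONGER_BURST)
print(reg.ecc_enabled, reg.option_bits)  # False 4
```

Once the option bits are latched, both sides can communicate under the agreed protocol, as the paragraph above describes.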
In other embodiments, the memory device may enable extra data pins in the data channels (e.g., bus, interface) to transmit or receive the additional data for the second portion. Further, the memory device may determine a burst length to transmit or receive data including the additional data for the second portion.
[0012] A memory device that supports an embodiment of the present technology is described with reference to Figure 1. More detailed descriptions of the memory device are provided with reference to Figure 2. Figure 3 describes a table illustrating various options for user defined operations in accordance with an embodiment of the present technology. A memory system that supports an embodiment of the present technology is described with reference to Figure 4. A flowchart illustrating a method of operating the memory device is described with reference to Figure 5.
[0013] Figure 1 is a block diagram schematically illustrating a memory device 100 in accordance with an embodiment of the present technology. The memory device 100 may include an array of memory cells, such as memory array 150. The memory array 150 may include a plurality of banks (e.g., banks 0-15 in the example of Figure 1), and each bank may include a plurality of word lines (WL), a plurality of bit lines (BL), and a plurality of memory cells (e.g., m × n memory cells) arranged at intersections of the word lines (e.g., m word lines, which may also be referred to as rows) and the bit lines (e.g., n bit lines, which may also be referred to as columns). Memory cells can include any one of a number of different memory media types, including capacitive, magnetoresistive, ferroelectric, phase change, or the like. In some embodiments, a portion of the memory array 150 (e.g., ECC plane) may be configurable to store ECC parity bits.
That is, the memory array 150 may include a first subset of memory cells configured to store user-accessible data and a second subset of memory cells (e.g., ECC parity bits) configured to store different kinds of data — e.g., ECC data when an ECC function is enabled, non-ECC data when the ECC function is disabled. The selection of a word line WL may be performed by a row decoder 140, and the selection of a bit line BL may be performed by a column decoder 145. Sense amplifiers (SAMP) may be provided for corresponding bit lines BL and connected to at least one respective local I/O line pair (LIOT/B), which may in turn be coupled to at least one respective main I/O line pair (MIOT/B), via transfer gates (TG), which can function as switches. The memory array 150 may also include plate lines and corresponding circuitry for managing their operation.
[0014] In some embodiments, the memory array 150 includes a set of memory cells. The set of memory cells may include a first portion configured to store user data. Moreover, the set of memory cells may include a second portion reserved to store ECC data to support the ECC function of the memory device 100. Accordingly, when the ECC functionality is enabled, a host device may not directly access the second portion of the memory array 150. In one embodiment, the memory array 150 may correspond to a set of memory addresses where each memory address of the set is associated with a first portion of the memory array and with a second portion of the memory array. Accordingly, when a memory address is provided by a host device, the memory address may concurrently identify the first portion and the second portion of the memory array 150. When the ECC function is enabled, a host device may rely on the ECC function performed by the memory device 100 using the ECC data in one embodiment.
When the ECC function is disabled (e.g., by the host device that performs its own ECC function), however, the memory device 100 may configure the second portion to store additional user data, metadata associated with the memory device 100, or both. Further, the memory device 100 may provide a set of options for the host device to access the memory array 150 as described herein. In some embodiments, the memory device 100 may include one or more registers 118 (e.g., mode registers) configured to indicate whether the ECC function is enabled or disabled. Further, the registers 118 (or a different register) may be configured to store one or more bits corresponding to the set of options for the host device to access the memory array 150 when the ECC function is disabled.
[0015] The memory device 100 may employ a plurality of external terminals that include command and address terminals coupled to a command bus and an address bus to receive command signals CMD and address signals ADDR, respectively. The memory device may further include a chip select terminal to receive a chip select signal CS, clock terminals to receive clock signals CK and CKF, data clock terminals to receive data clock signals WCK and WCKF, data terminals DQ, RDQS, DBI (for data bus inversion function), and DMI (for data mask inversion function), and power supply terminals VDD, VSS, VDDQ, and VSSQ.
[0016] The command terminals and address terminals may be supplied with an address signal and a bank address signal from outside. The address signal and the bank address signal supplied to the address terminals can be transferred, via a command/address input circuit 105, to an address decoder 110. The address decoder 110 can receive the address signals and supply a decoded row address signal (XADD) to the row decoder 140, and a decoded column address signal (YADD) to the column decoder 145.
The address decoder 110 can also receive the bank address portion of the ADDR input and supply the decoded bank address signal (BADD) to both the row decoder 140 and the column decoder 145.
[0017] The command and address terminals may be supplied with command signals CMD, address signals ADDR, and chip select signals CS from a memory controller. The command signals may represent various memory commands from the memory controller (e.g., including access commands, which can include read commands and write commands). The chip select signal CS may be used to select the memory device 100 to respond to commands and addresses provided to the command and address terminals. When an active CS signal is provided to the memory device 100, the commands and addresses can be decoded and memory operations can be performed. The command signals CMD may be provided as internal command signals ICMD to a command decoder 115 via the command/address input circuit 105. The command decoder 115 may include circuits to decode the internal command signals ICMD to generate various internal signals and commands for performing memory operations, for example, a row command signal to select a word line and a column command signal to select a bit line. The internal command signals can also include output and input activation commands, such as clocked command CMDCK (not shown in Figure 1).
[0018] The command decoder 115, in some embodiments, may further include one or more registers 118 for tracking various counts or values (e.g., counts of refresh commands received by the memory device 100 or self-refresh operations performed by the memory device 100). In some embodiments, a subset of registers 118 may be referred to as mode registers and configured to store user-defined variables or indications to provide flexibility in performing various functions, features, and modes (e.g., ECC modes).
For example, the subset of registers 118 may indicate whether an ECC mode of the memory device is enabled or disabled — e.g., whether the ECC function of the memory device 100 is enabled or disabled. In some examples, the subset of registers 118 (or different registers 118 other than the subset) may be configured to store one or more bits corresponding to a set of options for a host device to access the memory array when the ECC function of the memory device 100 is disabled.
[0019] When a read command is issued to a bank with an open row and a column address is timely supplied as part of the read command, read data can be read from memory cells in the memory array 150 designated by the row address (which may have been provided as part of the Activate command identifying the open row) and column address. The read command may be received by the command decoder 115, which can provide internal commands to input/output circuit 160 so that read data can be output from the data terminals DQ, RDQS, DBI, and DMI via read/write amplifiers 155 and the input/output circuit 160 according to the RDQS clock signals. The read data may be provided at a time defined by read latency information RL that can be programmed in the memory device 100, for example, in a mode register (e.g., the register 118). The read latency information RL can be defined in terms of clock cycles of the CK clock signal. For example, the read latency information RL can be a number of clock cycles of the CK signal after the read command is received by the memory device 100 when the associated read data is provided.
[0020] When a write command is issued to a bank with an open row and a column address is timely supplied as part of the write command, write data can be supplied to the data terminals DQ, DBI, and DMI according to the WCK and WCKF clock signals.
The write command may be received by the command decoder 115, which can provide internal commands to the input/output circuit 160 so that the write data can be received by data receivers in the input/output circuit 160, and supplied via the input/output circuit 160 and the read/write amplifiers 155 to the memory array 150. The write data may be written in the memory cell designated by the row address and the column address. The write data may be provided to the data terminals at a time that is defined by write latency WL information. The write latency WL information can be programmed in the memory device 100, for example, in the mode register (e.g., register 118). The write latency WL information can be defined in terms of clock cycles of the CK clock signal. For example, the write latency information WL can be a number of clock cycles of the CK signal after the write command is received by the memory device 100 when the associated write data is received.
[0021] Under the double data rate (DDR) scheme, a data burst having a burst length 2N (e.g., eight (8), sixteen (16), thirty-two (32)) includes 2N bits of data transmitted for each output pin (e.g., each data terminal DQ) of the memory device during N (e.g., four (4), eight (8), sixteen (16)) clock cycles (e.g., WCK and WCKF clock cycles). In some embodiments, the input/output circuit 160 may be configured to communicate with a host device (e.g., transmitting or receiving data via the data terminals DQ) for more than one burst length. For example, when the register (e.g., the mode register) indicates that the ECC function is enabled, the input/output circuit 160 may communicate with the host device for a burst length of sixteen (16) (which may also be referred to as BL16). The burst length (e.g., BL16) may be determined to communicate the user data for the first portion of the memory array 150 during the burst length.
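The per-burst accounting above (two data beats per clock cycle under DDR, with BL16 carrying the first portion's user data) can be sketched as follows; appending extra beats for the second portion's data when the ECC function is disabled is modeled here as a simple round-up, an assumption for illustration:

```python
def burst_length(user_beats: int, extra_beats: int, beats_per_clock: int = 2) -> int:
    """Total burst length in beats, rounded up to whole clock cycles (DDR: 2 beats/clock)."""
    total = user_beats + extra_beats
    if total % beats_per_clock:
        total += beats_per_clock - (total % beats_per_clock)
    return total

print(burst_length(16, 0))  # 16: BL16 over 8 clock cycles, user data only
print(burst_length(16, 2))  # 18: BL18 over 9 clock cycles, one extra clock of additional data
```

Because a DDR burst always occupies whole clock cycles, each additional clock cycle adds two beats per data pin, which is why the burst grows in even increments.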
Moreover, the input/output circuit 160 may be configured to communicate with the host device for a different burst length (e.g., BL18) when the register indicates that the ECC function is disabled. The different burst length may be determined to communicate the user data for the first portion of the memory array 150 and the additional user data or the metadata for the second portion of the memory array 150 during the different burst length. Although the example described above illustrates an increment in burst length by two (2) that corresponds to one (1) additional clock cycle, the scope of the invention is not limited thereto. In some embodiments, the different burst length may be more than one (1) clock cycle longer than the burst length — e.g., two (2) clock cycles longer, three (3) clock cycles longer, or even more.
[0022] The power supply terminals may be supplied with power supply potentials VDD and VSS. These power supply potentials VDD and VSS can be supplied to an internal voltage generator circuit 170. The internal voltage generator circuit 170 can generate various internal potentials VPP, VOD, VARY, VPERI, and the like based on the power supply potentials VDD and VSS. The internal potential VPP can be used in the row decoder 140, the internal potentials VOD and VARY can be used in the sense amplifiers included in the memory array 150, and the internal potential VPERI can be used in many other circuit blocks.
[0023] The power supply terminal may also be supplied with power supply potential VDDQ. The power supply potential VDDQ can be supplied to the input/output circuit 160 together with the power supply potential VSS. The power supply potential VDDQ can be the same potential as the power supply potential VDD in an embodiment of the present technology. The power supply potential VDDQ can be a different potential from the power supply potential VDD in another embodiment of the present technology.
However, the dedicated power supply potential VDDQ can be used for the input/output circuit 160 so that power supply noise generated by the input/output circuit 160 does not propagate to the other circuit blocks.
[0024] The clock terminals and data clock terminals may be supplied with external clock signals and complementary external clock signals. The external clock signals CK, CKF, WCK, WCKF can be supplied to a clock input circuit 120. The CK and CKF signals can be complementary, and the WCK and WCKF signals can also be complementary. Complementary clock signals can have opposite clock levels and transition between the opposite clock levels at the same time. For example, when a clock signal is at a low clock level a complementary clock signal is at a high level, and when the clock signal is at a high clock level the complementary clock signal is at a low clock level. Moreover, when the clock signal transitions from the low clock level to the high clock level the complementary clock signal transitions from the high clock level to the low clock level, and when the clock signal transitions from the high clock level to the low clock level the complementary clock signal transitions from the low clock level to the high clock level.
[0025] Input buffers included in the clock input circuit 120 can receive the external clock signals. For example, when enabled by a CKE signal from the command decoder 115, an input buffer can receive the CK and CKF signals and the WCK and WCKF signals. The clock input circuit 120 can receive the external clock signals to generate internal clock signals ICLK. The internal clock signals ICLK can be supplied to an internal clock circuit 130. The internal clock circuit 130 can provide various phase and frequency controlled internal clock signals based on the received internal clock signals ICLK and a clock enable signal CKE from the command decoder 115.
For example, the internal clock circuit 130 can include a clock path (not shown in Figure 1) that receives the internal clock signal ICLK and provides various clock signals to the command decoder 115. The internal clock circuit 130 can further provide input/output (IO) clock signals. The IO clock signals can be supplied to the input/output circuit 160 and can be used as a timing signal for determining an output timing of read data and the input timing of write data. The IO clock signals can be provided at multiple clock frequencies so that data can be output from and input to the memory device 100 at different data rates. A higher clock frequency may be desirable when high memory speed is desired. A lower clock frequency may be desirable when lower power consumption is desired. The internal clock signals ICLK can also be supplied to a timing generator 135, and thus various internal clock signals can be generated.
[0026] The memory device 100 can be connected to any one of a number of electronic devices capable of utilizing memory for the temporary or persistent storage of information, or a component thereof. For example, a host device of memory device 100 may be a computing device such as a desktop or portable computer, a server, a hand-held device (e.g., a mobile phone, a tablet, a digital reader, a digital media player), or some component thereof (e.g., a central processing unit, a co-processor, a dedicated memory controller, etc.). The host device may be a networking device (e.g., a switch, a router, etc.) or a recorder of digital images, audio and/or video, a vehicle, an appliance, a toy, or any one of a number of other products.
In one embodiment, the host device may be connected directly to the memory device 100, although in other embodiments, the host device may be indirectly connected to the memory device (e.g., over a networked connection or through intermediary devices).[0027] Figure 2 is a block diagram schematically illustrating a memory device 200 in accordance with an embodiment of the present technology. The memory device 200 may be an example of or include aspects of the memory device 100 described with reference to Figure 1. The memory device 200 may include a periphery circuit 270, a register 275, an ECC circuit 280, and a memory array 250. The periphery circuit 270 may include aspects of various components described with reference to Figure 1. For example, the periphery circuit 270 may include aspects of the command/address input circuit 105, the address decoder 110, the command decoder 115, and the input/output circuit 160, among others. Moreover, the memory array 250 may be an example of or include aspects of the memory array 150 described with reference to Figure 1.[0028] The memory array 250 may include a set of memory cells including a first portion 260 and a second portion 265. Further, the memory array 250 may correspond to a set of memory addresses, where each memory address of the set of memory addresses corresponds to the first portion 260 and to the second portion 265. The first portion 260 may be configured to store user data — e.g., data from the host device. In some embodiments, the first portion 260 may occupy a major portion of the storage capacity of the memory array 250 — e.g., greater than 90% of the storage capacity in an embodiment. The first portion 260 may represent a portion of the memory array 250 accessible by the host device regardless of whether the on-die ECC function of the memory device 200 is enabled or disabled.
In some embodiments, the second portion 265 may be configured to store ECC data that supports the on-die ECC function when the on-die ECC function is enabled — hence, the second portion 265 may also be referred to as ECC parity bits or a parity plane. The second portion 265 may occupy a relatively minor but appreciable portion of the storage capacity of the memory array 250 — e.g., approximately 5 to 10% of the storage capacity in an embodiment. In some embodiments, the second portion 265 may be inaccessible by the host device when the ECC function is enabled. In other embodiments, the second portion 265 may be accessible by the host device when the ECC function is enabled such that the host device may access the ECC data.[0029] The second portion 265, however, when the ECC function is disabled, may be configured to store additional user data, metadata associated with the memory device 200, or both. When the second portion 265 is configured to store the additional user data, the memory device 200 may provide an increased storage capacity to the host device — e.g., almost 100% of the entire storage capacity (i.e., the entire storage capacity corresponding to the first portion 260 and the second portion 265). That is, the memory device 200 can provide an extra storage capacity (i.e., the storage capacity corresponding to the second portion 265) to the host device in addition to the storage capacity corresponding to the first portion 260 (which may be referred to as the specified storage capacity of the memory device). Moreover, the first portion 260 and the second portion 265 may provide user data uncorrected by the ECC function of the memory device 200. Such uncorrected user data may provide opportunities for the host device to optimize and/or modify its ECC algorithms should a change in error properties and/or characteristics be detected, in some cases.
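The capacity trade-off described in paragraph [0029] can be sketched numerically. The specific fractions below are assumptions chosen to fall inside the ranges given in the description (first portion greater than 90%, second portion approximately 5 to 10%); they are not values disclosed by the embodiment.

```python
# Hypothetical capacity split between the first portion 260 (user data)
# and the second portion 265 (parity plane), as fractions of the array.
user_portion = 0.92       # first portion 260: > 90% of the storage capacity
parity_portion = 0.08     # second portion 265: ~5-10% of the storage capacity

def usable_fraction(ecc_enabled):
    # With on-die ECC enabled, only the first portion holds user data;
    # with ECC disabled, the parity plane can store additional user data,
    # bringing the usable capacity to almost 100% of the array.
    return user_portion if ecc_enabled else user_portion + parity_portion

assert usable_fraction(True) == 0.92
assert abs(usable_fraction(False) - 1.0) < 1e-9
```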
In some embodiments, the host device may be configured to perform a separate ECC function that is different from the ECC function of the memory device 200.[0030] Additionally or alternatively, the second portion 265 may be configured to store the metadata comprising information related to operational modes of the memory device 200, such as operating temperatures, latency settings associated with access commands, parameters for data transmissions, test modes, or a combination thereof. In this manner, the memory device 200 may provide the metadata as part of access operations (e.g., read commands directed to the first portion 260) without having to issue commands (e.g., a mode register read (MRR) command) to retrieve the metadata that may otherwise be stored in various registers (e.g., mode registers) of the memory device 200. Such commands retrieving the metadata from the registers may introduce undesirable delay for the memory device 200 because the commands may put the memory device 200 in a specific mode (e.g., “status” mode), leaving the memory array 250 in a certain condition (e.g., “idle” condition). Consequently, using such commands may be restricted and the host device’s visibility into the metadata may also be limited.[0031] In some embodiments, the second portion 265 may be organized to be physically adjacent (or in close proximity) to the first portion 260 such that certain components of the memory device 200 (e.g., row decoder 140, column decoder 145, read/write amplifier 155, sense amplifiers (SAMP)) that support the first portion 260 and the second portion 265 may be shared or efficiently laid out.
In other embodiments, the second portion 265 may be organized to be separate from the first portion 260 such that the first portion 260 and the second portion 265 may operate relatively independently of each other — e.g., the first and second portions having separate power domains and separate routing of control and/or data paths.[0032] The register 275 (which may also be referred to as a mode register) may be configured to indicate whether an ECC function of the memory device 200 (e.g., the on-die ECC function) is enabled or disabled. In some embodiments, a host device coupled with the memory device 200 may perform an ECC function without relying on the on-die ECC function of the memory device 200. In such cases, the register 275 may indicate that the on-die ECC function is disabled (e.g., by the host device) such that the memory device 200 may modify certain operational aspects to provide additional features to the host device. Further, the register 275 may be configured to store one or more bits corresponding to a set of options for the host device to access the memory array 250 when the ECC function is disabled. In some embodiments, the memory device 200 may include an additional register 276 (drawn in phantom in Figure 2) configured to store one or more bits corresponding to the set of options for the host device to access the memory array 250 when the ECC function is disabled.[0033] The ECC circuit 280 performs an ECC function for the memory device 200 when the ECC function is enabled. The ECC circuit 280 may be coupled with the second portion 265 and perform the ECC function for the user data stored in the first portion 260 using the ECC data stored in the second portion 265. In some embodiments, the ECC circuit 280 may be configured to detect two or more errors and/or to correct one or more errors in the user data. For example, the ECC circuit 280 may detect two bits of errors and correct one bit of error in the user data.
In some embodiments, the ECC circuit 280 may be configured to indicate that the user data includes a quantity of errors greater than its detection and correction capability.[0034] The periphery circuit 270 may be configured to control overall aspects of communicating with the host device and accessing the memory array 250. For example, the periphery circuit 270 may receive an input from the host device directed to how the host device may proceed to access the memory array 250 when the ECC function is disabled. The periphery circuit 270 may select an option from a set of options available for the host device based on the input received from the host device. Subsequently, the periphery circuit 270 may update one or more bits in the register 275 (or the second register 276) based on the selected option and carry out an access command from the host device in accordance with the selected option as described in more detail with reference to Figure 3. In some embodiments, the periphery circuit 270 may bypass the ECC circuit 280 when the ECC function is disabled.[0035] Further, the periphery circuit 270 may communicate with the host device in accordance with the selected option. In some cases, the periphery circuit 270 may communicate with the host device without making any modification in a communication protocol. For example, the periphery circuit 270 may retrieve the user data uncorrected by the ECC function or store the user data without performing the ECC function — e.g., accessing the first portion 260 without performing the ECC function.
In other cases, the periphery circuit 270 may modify the communication protocol to establish a proper environment to communicate with the host device in accordance with the selected option — e.g., activating additional address pins (e.g., terminals) that are otherwise deactivated, enabling additional data pins (e.g., data terminals DQ) in the data channels (e.g., bus, interface), or determining a burst length to transmit or receive data. Accordingly, the periphery circuit 270 may be configured to communicate with the host device for more than one burst length, in some embodiments.[0036] Although memory devices with memory arrays having first portions occupying greater than 90% of the storage capacity thereof and second portions occupying less than 10% of the storage capacity thereof have been described and illustrated in the foregoing exemplary embodiments, memory devices may be provided with memory arrays having different allocations of storage capacity in other embodiments. For example, first portions having less than 90% of the storage capacity (e.g., 75%, 66%, or even 50% or less) may be provided.[0037] Figure 3 is a table 300 illustrating various options for user-defined operations in accordance with an embodiment of the present technology. The table 300 may be an example of or include aspects of the one or more bits in the register 275 (or the second register 276) configured to store a set of options for the host device to access the memory array 250 when the ECC function of the memory device 200 is disabled. The periphery circuit 270 may update the one or more bits based on a selected option in accordance with an input from the host device. The table 300 illustrates three (3) bits of the register 275 (or the second register 276) in the first column (SETTING column) to list a default condition and five (5) options.
As the three bits may represent eight (8) different values (namely, 2³ different values), there may be up to two (2) additional options that are not described with reference to the table 300. Although the example described with reference to Figure 3 includes three (3) bits to indicate a set of options available to the host device to access the memory array 250, the scope of the invention is not limited thereto. In some embodiments, the register 275 (or the second register 276) may include a different quantity of bits to represent different sets of options — e.g., one (1) bit, two (2) bits, four (4) bits, five (5) bits.[0038] The table 300 further illustrates ECC states in the second column (ECC STATE column) and options for accessing the memory array in the third column (ECC ACCESS column). The ECC STATE indicates whether the ECC function of the memory device 200 is enabled (e.g., the default condition corresponding to the logic state of “000” stored in the register 275 or the second register 276) or disabled (e.g., one of the logic states “001,” “010,” “011,” “100,” or “101” stored in the register 275 or the second register 276). The ECC ACCESS provides a brief description of options for the host device to access the memory array 250.[0039] The logic state “000” stored in the register 275 (or the second register 276) may correspond to a default condition for the memory device 200 to support access commands from the host device. Under the default condition, the host device may access the memory array 250 with the on-die ECC function enabled — e.g., retrieving user data from the first portion 260 that has been checked by the ECC data in the second portion 265, or storing user data at the first portion 260 with associated ECC data (generated by the on-die ECC algorithm) stored at the second portion 265.
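The three-bit encoding of table 300 can be summarized in a small lookup sketch. This is an illustrative restatement of the table, not disclosed firmware; the dictionary structure and function name are assumptions for illustration.

```python
# Sketch of the mode-register encoding of table 300: three bits select
# the default condition or one of five user-defined options.
OPTIONS = {
    0b000: ("ECC enabled", "default: access first portion with on-die ECC"),
    0b001: ("ECC disabled", "option 1: access first portion, ECC bypassed"),
    0b010: ("ECC disabled", "option 2: address both portions via extra address pins"),
    0b011: ("ECC disabled", "option 3: access second portion in lieu of first"),
    0b100: ("ECC disabled", "option 4: second portion via separate data pins"),
    0b101: ("ECC disabled", "option 5: longer burst carrying both portions"),
}

def ecc_state(setting):
    """Return the ECC STATE column entry for a given 3-bit SETTING value."""
    return OPTIONS[setting][0]

assert ecc_state(0b000) == "ECC enabled"
assert ecc_state(0b011) == "ECC disabled"
# Two of the eight possible 3-bit values (0b110, 0b111) remain unassigned.
assert len(OPTIONS) == 6
```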
The memory device 200 operating under the default condition may be regarded as providing a full quality specification.[0040] The logic state “001” stored in the register 275 (or the second register 276) may correspond to a first option for the memory device 200 to support access commands from the host device. Under the first option, the host device may access the memory array 250 by accessing the first portion 260 without having the ECC circuit 280 perform the on-die ECC function (e.g., the ECC circuit 280 is bypassed or deactivated). Accordingly, the memory device 200 (e.g., the periphery circuit 270) may retrieve user data from the first portion 260 uncorrected by the ECC function or store user data at the first portion 260 without performing the ECC function — e.g., the periphery circuit 270 ignoring the second portion 265 when the logic state stored in the register 275 (or the second register 276) corresponds to “001.” In some cases, this option may be regarded as providing a modified quality specification (which may be referred to as operating under a reduced quality specification) when compared to the default condition.[0041] The logic state “010” stored in the register 275 (or the second register 276) may correspond to a second option for the memory device 200 to support access commands from the host device. Under the second option, the host device may access the memory array 250 by accessing both the first portion 260 and the second portion 265 of the memory array 250. As described with reference to Figures 1 and 2, each memory address of the set of memory addresses corresponding to the memory array 250 may identify both the first portion 260 and the second portion 265 such that each memory address may identify user data from the first portion 260 and associated ECC data from the second portion 265 under the default condition (e.g., when the ECC function is enabled).
As such, the second portion 265 may not have been designated with its own set of memory addresses under the default condition. In some embodiments, however, the memory device 200 may include one or more address pins that are separate from a quantity of address pins corresponding to the set of memory addresses for the memory array 250.[0042] When the logic state stored in the register 275 (or the second register 276) corresponds to “010” (i.e., under the second option), the one or more address pins may be used to identify the second portion 265 — e.g., the second portion 265 may be designated with its own set of memory addresses, which may be independent of the first portion 260 of the memory array 250. Accordingly, a memory address associated with an access command may be modified to include a first segment and a second segment, where the first segment of the memory address corresponds to the one or more address pins identifying the second portion 265 and the second segment of the memory address may remain the same as under the default condition — e.g., the second segment of the memory address corresponding to a quantity of address pins for the set of memory addresses corresponding to the memory array 250. In this manner, the memory address associated with the access command may be configured to separately identify the second portion 265 independent of the first portion 260 of the memory array 250. Under the second option, the memory device 200 (e.g., the periphery circuit 270) may be configured to decode the first segment of the memory address (in addition to decoding the second segment of the memory address) to identify the second portion 265 of the memory array 250 such that the host device may access both the first portion 260 and the second portion 265 of the memory array 250.[0043] The logic state “011” stored in the register 275 (or the second register 276) may correspond to a third option for the memory device 200 to support access commands from the host device.
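The segmented address decode of the second option can be illustrated with a short sketch. The field widths and function name are assumptions for illustration only; the embodiment does not specify how many ordinary address pins the array uses.

```python
# Illustrative decode of a segmented address under the second option
# (logic state "010"): the first segment (the extra address pin or pins)
# selects the parity plane, while the second segment is the ordinary
# memory address used under the default condition.
ADDR_BITS = 16            # assumed width of the ordinary address (second segment)

def decode(address):
    first_segment = address >> ADDR_BITS          # extra pin(s): 1 -> second portion 265
    second_segment = address & ((1 << ADDR_BITS) - 1)
    portion = "second portion 265" if first_segment else "first portion 260"
    return portion, second_segment

# Same ordinary address, two different portions depending on the first segment.
assert decode(0x0004) == ("first portion 260", 0x0004)
assert decode((1 << ADDR_BITS) | 0x0004) == ("second portion 265", 0x0004)
```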
Under the third option, the host device may access the memory array 250 by accessing the second portion 265 in lieu of accessing the first portion 260 of the memory array 250. In other words, the logic state “011” stored in the register 275 (or the second register 276) may function as a flag (or an indicator) for the periphery circuit 270 to access the second portion 265, instead of the first portion 260, based on a memory address associated with an access command for the memory array 250. As described herein, the memory address for the memory array 250 may be configured to identify the first portion 260 for user data and the second portion 265 for ECC data associated with the user data when operating under the default condition. As such, the memory device 200 (e.g., the periphery circuit 270) may be configured to access the second portion 265 of the memory array 250 based on the memory address of the access command instead of accessing the first portion 260 when the logic state stored in the register 275 (or the second register 276) corresponds to “011.”[0044] The logic state “100” stored in the register 275 (or the second register 276) may correspond to a fourth option for the memory device 200 to support access commands from the host device. Under the fourth option, the host device may access the memory array 250 by accessing the second portion 265 of the memory array 250 via a first set of data pins (e.g., data terminals DQ) that is separate from a second set of data pins corresponding to the user data for the first portion 260 of the memory array 250. As described herein with reference to Figures 1 and 2, the memory array 250 may be configured to communicate data (e.g., user data for the first portion 260 of the memory array 250) via the second set of data pins.
In some embodiments, however, the memory device 200 may include the first set of data pins that are separate from the second set of data pins corresponding to the user data for the first portion 260 of the memory array 250. When the logic state “100” is stored in the register 275 (or the second register 276), the memory device 200 (e.g., the periphery circuit 270) may be configured to enable the first set of data pins in addition to (or in lieu of) the second set of data pins such that the memory device 200 may communicate additional data (e.g., additional user data, metadata) for the second portion 265 — e.g., transmitting the additional data from the second portion 265 via the first set of data pins, or receiving the additional data to store at the second portion 265 via the first set of data pins.[0045] The logic state “101” stored in the register 275 (or the second register 276) may correspond to a fifth option for the memory device 200 to support access commands from the host device. Under the fifth option, the host device may access the memory array 250 by communicating for a burst length that may correspond to the user data for the first portion 260 and additional data for the second portion 265. When the logic state “101” is stored in the register 275 (or the second register 276), the memory device 200 (e.g., the periphery circuit 270) may access both the first portion 260 and the second portion 265 of the memory array 250 and determine a burst length for communicating with the host device. The newly determined burst length (e.g., BL18) may be greater than the burst length (e.g., BL16) used under the default condition by a burst length (e.g., BL2) that corresponds to the additional data for the second portion 265.[0046] Figure 4 is a block diagram of a system 401 having a memory device 400 configured in accordance with an embodiment of the present technology.
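The burst-length arithmetic of the fifth option follows directly from the example values given above (BL16 under the default condition plus BL2 for the second portion), as the following sketch shows; the function name is an assumption for illustration.

```python
# Under the fifth option (logic state "101") the burst carries both the
# user data of the first portion 260 and the additional data of the
# second portion 265, extending the default burst length by BL2.
DEFAULT_BL = 16        # burst length under the default condition (BL16)
EXTRA_BL = 2           # extra beats covering the second portion 265 (BL2)

def burst_length(option_five_selected):
    return DEFAULT_BL + EXTRA_BL if option_five_selected else DEFAULT_BL

assert burst_length(False) == 16   # default condition: BL16
assert burst_length(True) == 18    # fifth option: BL18
```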
The memory device 400 may be an example of or include aspects of the memory devices 100 or 200 described with reference to Figures 1 and 2. As shown, the memory device 400 includes a main memory 402 (e.g., DRAM, NAND flash, NOR flash, FeRAM, PCM, etc.) and control circuitry 406 operably coupled to a host device 408 (e.g., an upstream central processing unit (CPU)). The main memory 402 may be an example of or include aspects of the memory array 150 or 250 described with reference to Figures 1 and 2. Further, the control circuitry 406 may be an example of or include aspects of the periphery circuit 270 described with reference to Figure 2. The main memory 402 includes a plurality of memory units 420, which each include a plurality of memory cells. The memory units 420 can be individual memory dies, memory planes in a single memory die, a stack of memory dies vertically connected with through-silicon vias (TSVs), or the like. For example, in one embodiment, each of the memory units 420 can be formed from a semiconductor die and arranged with other memory unit dies in a single device package. In other embodiments, multiple memory units 420 can be co-located on a single die and/or distributed across multiple device packages. The memory units 420 may, in some embodiments, also be sub-divided into memory regions 428 (e.g., banks, ranks, channels, blocks, pages, etc.).[0047] The memory cells can include, for example, floating gate, charge trap, phase change, capacitive, ferroelectric, magnetoresistive, and/or other suitable storage elements configured to store data persistently or semi-persistently.
The main memory 402 and/or the individual memory units 420 can also include other circuit components, such as multiplexers, decoders, buffers, read/write drivers, address registers, data out/data in registers, etc., for accessing and/or programming (e.g., writing) the memory cells and other functionality, such as for processing information and/or communicating with the control circuitry 406 or the host device 408. Although shown in the illustrated embodiments with a certain number of memory cells, rows, columns, regions, and memory units for purposes of illustration, the number of memory cells, rows, columns, regions, and memory units can vary, and can, in other embodiments, be larger or smaller in scale than shown in the illustrated examples. For example, in some embodiments, the memory device 400 can include only one memory unit 420. Alternatively, the memory device 400 can include two, three, four, eight, ten, or more (e.g., 16, 32, 64, or more) memory units 420. Although the memory units 420 are shown in Figure 4 as including four memory regions 428 each, in other embodiments, each memory unit 420 can include one, two, three, eight, or more (e.g., 16, 32, 64, 100, 128, 256 or more) memory regions.[0048] In one embodiment, the control circuitry 406 can be provided on the same die as the main memory 402 (e.g., including command/address/clock input circuitry, decoders, voltage and timing generators, input/output circuitry, etc.). In another embodiment, the control circuitry 406 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), control circuitry on a memory die, etc.), or other suitable processor.
In one embodiment, the control circuitry 406 can include a processor configured to execute instructions stored in memory to perform various processes, logic flows, and routines for controlling operation of the memory device 400, including managing the main memory 402 and handling communications between the memory device 400 and the host device 408. In some embodiments, the control circuitry 406 can include embedded memory with memory registers for storing, e.g., row counters, bank counters, memory pointers, fetched data, etc. In another embodiment of the present technology, a memory device 400 may not include control circuitry, and may instead rely upon external control (e.g., provided by the host device 408, or by a processor or controller separate from the memory device 400).[0049] The host device 408 can be any one of a number of electronic devices capable of utilizing memory for the temporary or persistent storage of information, or a component thereof. For example, the host device 408 may be a computing device such as a desktop or portable computer, a server, a hand-held device (e.g., a mobile phone, a tablet, a digital reader, a digital media player), or some component thereof (e.g., a central processing unit, a co-processor, a dedicated memory controller, etc.). The host device 408 may be a networking device (e.g., a switch, a router, etc.) or a recorder of digital images, audio and/or video, a vehicle, an appliance, a toy, or any one of a number of other products. In one embodiment, the host device 408 may be connected directly to the memory device 400, although in other embodiments, the host device 408 may be indirectly connected to the memory device (e.g., over a networked connection or through intermediary devices).[0050] In operation, the control circuitry 406 can directly write or otherwise program (e.g., erase) the various memory regions of the main memory 402.
The control circuitry 406 communicates with the host device 408 over a host-device bus or interface 410. In some embodiments, the host-device bus or interface 410 may be configured to carry data bursts having variable burst lengths. For example, the host-device bus or interface 410 may carry data bursts having a first burst length (e.g., BL16) or a second burst length (e.g., BL18, BL20, BL22, BL24) based on whether an ECC function of the memory device 400 is enabled (e.g., BL16) or disabled (e.g., BL18, BL20, BL22, BL24). In some embodiments, the host device 408 and the control circuitry 406 can communicate over a dedicated memory bus (e.g., a DRAM bus). In other embodiments, the host device 408 and the control circuitry 406 can communicate over a serial interface, such as a serial attached SCSI (SAS) interface, a serial AT attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, or other suitable interface (e.g., a parallel interface). The host device 408 can send various requests (in the form of, e.g., a packet or stream of packets) to the control circuitry 406. A request can include a command to read, write, erase, return information, and/or to perform a particular operation (e.g., a refresh operation, a TRIM operation, a precharge operation, an activate operation, a wear-leveling operation, a garbage collection operation, etc.).[0051] In some embodiments, the control circuitry 406 can be configured to track operations (e.g., read operations, write operations, erase operations, activate operations, etc.) performed in the main memory 402 (e.g., in a register or table in an embedded memory of the control circuitry 406) in multiple memory units 420 to facilitate performing refresh operations on an as-needed basis.
In this regard, the control circuitry 406 can be configured to compare the number or rate of operations experienced by different memory units 420 and to perform or schedule refresh operations on the memory units 420 based on a comparison between the number or rate of operations experienced by the memory units 420. Alternatively, the control circuitry 406 can be configured to perform or schedule refresh operations on the memory units 420 based on a comparison of each memory unit 420 to one or more predetermined thresholds (e.g., threshold numbers of operations, threshold rates of operations, etc.). Accordingly, a memory unit 420 which is the target of operations that exceed a threshold number or rate can be refreshed more frequently than another unit 420, due to the freedom with which different units 420 can be subjected to out-of-order refresh operations.[0052] In some embodiments, the memory system 401 may include the host device 408 and a memory device 400 that includes a memory array (e.g., main memory 402) corresponding to a set of memory addresses, where each memory address of the set of memory addresses is associated with a first portion of the memory array configured to store user data and with a second portion of the memory array configured to store ECC data associated with the user data of the first portion when the ECC function of the memory device 400 is enabled. The memory device 400 further includes a register configured to store one or more bits corresponding to a set of options for the host device to access the memory array when the ECC function is disabled.[0053] In some embodiments, the host device 408 may be configured to transmit an input directed to the set of options to access the memory array.
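The threshold-based refresh scheduling described in paragraphs [0051] and [0052] can be sketched as follows. This is an illustrative model, not the disclosed control circuitry; the function name, the operation counts, and the threshold value are assumptions for illustration.

```python
# Sketch of threshold-based refresh scheduling: memory units whose
# tracked operation counts exceed a predetermined threshold are
# selected for refresh ahead of the others.
def units_to_refresh(op_counts, threshold):
    """op_counts maps a memory-unit id to its tracked operation count."""
    return sorted(unit for unit, count in op_counts.items() if count > threshold)

# Hypothetical per-unit operation counts tracked by the control circuitry.
counts = {0: 120, 1: 40, 2: 310, 3: 75}
assert units_to_refresh(counts, 100) == [0, 2]   # units 0 and 2 exceed the threshold
```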
Further, the memory device 400 may be configured to select an option from the set of options based on the input from the host device 408, update the one or more bits in the register based on the selected option, and communicate with the host device 408 in accordance with the selected option. In some embodiments, the host device 408 may be configured to perform a separate ECC function that is different from the ECC function of the memory device 400. In some cases, the host device 408 may be configured to generate a memory address including a first segment and a second segment, where the first segment of the memory address corresponds to one or more address pins that are separate from a quantity of address pins corresponding to the second segment for the set of memory addresses.[0054] In some embodiments, the host device 408 may be configured to activate one or more channels associated with a first set of data pins of the memory device 400, where the first set of data pins corresponds to additional data for the second portion and is separate from a second set of data pins corresponding to the user data for the first portion of the memory array. In some embodiments, the host device 408 may be configured to communicate with the memory device 400 for a burst length corresponding to the user data for the first portion and additional data for the second portion.[0055] Figure 5 is a flow chart 500 illustrating a method of operating a memory device in accordance with an embodiment of the present technology. The flow chart 500 may be an example of or include aspects of a method that the memory device 200 (or the periphery circuit 270 of the memory device 200) may perform as described with reference to Figure 2. 
Such a memory device may include a memory array (e.g., the memory array 250 of the memory device 200) corresponding to a set of memory addresses, where each memory address of the set of memory addresses is associated with a first portion (e.g., the first portion 260) of the memory array configured to store user data and with a second portion (e.g., the second portion 265) of the memory array configured to store ECC data associated with the user data of the first portion when an ECC function of the memory device is enabled. Further, the memory device may include a register (e.g., the register 275 or the second register 276 of the memory device 200) configured to store one or more bits corresponding to a set of options for a host device to access the memory array when the ECC function is disabled.[0056] The method includes receiving, at a memory device, signaling that indicates an option selected from a set of options for a host device to access a memory array of the memory device when an ECC function of the memory device is disabled, the memory array corresponding to a set of memory addresses each associated with a first portion of the memory array configured to store user data and with a second portion of the memory array configured to store ECC data associated with the user data of the first portion when an ECC function of the memory device is enabled (box 510). In accordance with one aspect of the present technology, the receiving feature of box 510 can be performed by the command/address input circuit 105, a periphery circuit (e.g., the periphery circuit 270 of Figure 2), or control circuitry (e.g., the control circuitry 406 of Figure 4).[0057] The method further includes storing, in a register of the memory device, one or more bits corresponding to the option selected from the set of options (box 520).
In accordance with one aspect of the present technology, the storing feature of box 520 can be performed by the periphery circuit (e.g., the periphery circuit 270 of Figure 2) or the control circuitry (e.g., the control circuitry 406 of Figure 4) in conjunction with a register (e.g., the register 275 of Figure 2).

[0058] The method further includes receiving, at the memory device, an access command associated with a memory address of the set of memory addresses (box 530). In accordance with one aspect of the present technology, the receiving feature of box 530 can be performed by the command/address input circuit 105, a periphery circuit (e.g., the periphery circuit 270 of Figure 2), or control circuitry (e.g., the control circuitry 406 of Figure 4).

[0059] The method further includes accessing the first portion of the memory array, the second portion of the memory array, or both in response to the access command and based on the selected option as indicated by the one or more bits stored in the register (box 540). In accordance with one aspect of the present technology, the accessing feature of box 540 can be performed by the periphery circuit (e.g., the periphery circuit 270 of Figure 2) or the control circuitry (e.g., the control circuitry 406 of Figure 4) in conjunction with an address decoder, a row decoder, a column decoder, and a read/write amplifier (e.g., the address decoder 110, the row decoder 140, the column decoder 145, and the read/write amplifier 155 of Figure 1).

[0060] The method further includes communicating with the host device in accordance with the selected option (box 550).
In accordance with one aspect of the present technology, the communicating feature of box 550 can be performed by the periphery circuit (e.g., the periphery circuit 270 of Figure 2) or the control circuitry (e.g., the control circuitry 406 of Figure 4) in conjunction with an input/output circuit (e.g., the input/output circuit 160 of Figure 1).

[0061] The method can further include decoding a first segment of the memory address associated with the access command to identify the second portion of the memory array. In some embodiments, the first segment corresponds to one or more address pins that are separate from a quantity of address pins corresponding to the plurality of memory addresses. In accordance with one aspect of the present technology, the decoding feature can be performed by the periphery circuit (e.g., the periphery circuit 270 of Figure 2) or the control circuitry (e.g., the control circuitry 406 of Figure 4) in conjunction with an address decoder, a row decoder, and a column decoder (e.g., the address decoder 110, the row decoder 140, and the column decoder 145 of Figure 1).

[0062] In some embodiments, accessing the first portion of the memory array includes retrieving the user data uncorrected by the ECC function or storing the user data without performing the ECC function. In some embodiments, accessing the second portion of the memory array may be based on the memory address associated with the access command.
In accordance with one aspect of the present technology, the accessing feature can be performed by the periphery circuit (e.g., the periphery circuit 270 of Figure 2) or the control circuitry (e.g., the control circuitry 406 of Figure 4) in conjunction with an address decoder, a row decoder, a column decoder, and a read/write amplifier (e.g., the address decoder 110, the row decoder 140, the column decoder 145, and the read/write amplifier 155 of Figure 1).

[0063] The method can further include enabling a first set of data pins corresponding to additional data for the second portion of the memory array. In accordance with one aspect of the present technology, the enabling feature can be performed by the periphery circuit (e.g., the periphery circuit 270 of Figure 2) or the control circuitry (e.g., the control circuitry 406 of Figure 4) in conjunction with an input/output circuit (e.g., the input/output circuit 160 of Figure 1).

[0064] The method can further include determining a burst length for communicating with the host device, where the burst length corresponds to the user data for the first portion and additional data for the second portion. In accordance with one aspect of the present technology, the determining feature can be performed by the periphery circuit (e.g., the periphery circuit 270 of Figure 2) or the control circuitry (e.g., the control circuitry 406 of Figure 4).

[0065] It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, embodiments from two or more of the methods may be combined.

[0066] Information and signals described herein may be represented using any of a variety of different technologies and techniques.
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, it will be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, where the bus may have a variety of bit widths.

[0067] The devices discussed herein, including a memory device, may be formed on a semiconductor substrate or die, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.

[0068] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. Other examples and implementations are within the scope of the disclosure and appended claims.
Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.

[0069] As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”

[0070] From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Rather, in the foregoing description, numerous specific details are discussed to provide a thorough and enabling description for embodiments of the present technology. One skilled in the relevant art, however, will recognize that the disclosure can be practiced without one or more of the specific details. In other instances, well-known structures or operations often associated with memory systems and devices are not shown, or are not described in detail, to avoid obscuring other aspects of the technology. In general, it should be understood that various other devices, systems, and methods in addition to those specific embodiments disclosed herein may be within the scope of the present technology.
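As a concrete illustration of the flow in boxes 510-550 described above, the following Python sketch models a device that latches a host-selected option in a register and serves accesses to the first (user) and second (ECC) portions accordingly. The class name, portion sizes, and the one-bit option encoding are illustrative assumptions, not taken from the specification.

```python
# Toy model of boxes 510-550: the device stores a host-selected option in a
# register and serves accesses accordingly. Sizes and encoding are invented.

USER_BYTES = 16   # per-address first-portion size (assumed)
EXTRA_BYTES = 2   # per-address second-portion size (assumed)

class MemoryDevice:
    def __init__(self, n_addresses=4):
        self.user = {a: bytes(USER_BYTES) for a in range(n_addresses)}
        self.extra = {a: bytes(EXTRA_BYTES) for a in range(n_addresses)}
        self.option = 0  # register bits (box 520)

    def set_option(self, bits):
        # boxes 510/520: receive signaling and store the selected option
        self.option = bits & 0b1

    def burst_length(self):
        # box 550: with option 1, a burst covers user data plus extra data
        return USER_BYTES + (EXTRA_BYTES if self.option else 0)

    def write(self, addr, data):
        # boxes 530/540: store user data without performing the ECC function
        self.user[addr] = bytes(data[:USER_BYTES])
        if self.option:
            self.extra[addr] = bytes(data[USER_BYTES:USER_BYTES + EXTRA_BYTES])

    def read(self, addr):
        # box 540: access the first portion, or both portions, per the option
        data = self.user[addr]
        if self.option:
            data += self.extra[addr]
        return data

dev = MemoryDevice()
dev.set_option(1)
dev.write(0, bytes(range(USER_BYTES + EXTRA_BYTES)))
assert dev.burst_length() == 18
assert dev.read(0) == bytes(range(18))
```

The point of the sketch is only the control dependency: one register value, written once by the host, changes both the accessible portions and the burst length for every subsequent access.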
A system and method for fabricating metal patterns are described. Multiple mandrels are formed on a first polysilicon layer which is on top of a first oxide layer. Each mandrel uses a second polysilicon on top of a first nitride. A spacer oxide and a spacer nitride are formed on the sidewalls of the mandrels to create double spacers. A second oxide layer is deposited followed by removing layers until the first nitride in the mandrels is reached. Areas are etched based on a selected method of multiple available methods until the first oxide layer is etched providing trenches for the metal patterns. Remaining materials on the first oxide layer are removed followed by metal being deposited in the trenches in the first oxide layer.
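The process summarized above is an ordered sequence of deposition and removal steps. As a toy illustration only, the following Python sketch tracks the film stack through that order; it is one-dimensional (sidewall spacers are treated as if stacked), and all layer names simply echo the abstract.

```python
# Toy 1-D model of the summarized flow: each step pushes or strips films.
# Geometry (mandrels, sidewalls, trenches) is deliberately not modeled.

stack = ["first oxide", "first polysilicon"]

def deposit(layer):
    stack.append(layer)

def remove_until(layer):
    # strip films until the named layer is exposed
    while stack[-1] != layer:
        stack.pop()

# mandrel films: first nitride under second polysilicon
deposit("first nitride")
deposit("second polysilicon")

# double spacers and the second oxide fill (sidewall geometry ignored)
deposit("spacer oxide")
deposit("spacer nitride")
deposit("second oxide")

# "removing layers until the first nitride in the mandrels is reached"
remove_until("first nitride")
assert stack == ["first oxide", "first polysilicon", "first nitride"]
```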
WHAT IS CLAIMED IS

1. A semiconductor device fabrication process comprising:
form a first nitride layer on top of a first polysilicon layer which is on top of a first oxide layer;
deposit a second polysilicon layer on top of the first nitride layer;
deposit a photoresist layer on top of the second polysilicon layer;
etch photoresist trenches in the photoresist layer until the second polysilicon layer is reached, wherein at least one photoresist trench has a width used for a group of metal patterns to be formed;
etch each of the second polysilicon layer and the first nitride layer in the photoresist trenches until the first polysilicon layer is reached which creates a first plurality of mandrels comprising remaining photoresist, remaining second polysilicon and remaining first nitride;
remove the remaining photoresist from the first plurality of mandrels;
deposit a conformal spacer oxide layer over the first plurality of mandrels and exposed areas of the first polysilicon layer;
etch the conformal spacer oxide layer leaving sidewalls on each of the first plurality of mandrels;
deposit a conformal spacer nitride layer over the first plurality of mandrels and exposed areas of the first polysilicon layer; and
etch the conformal spacer nitride layer leaving sidewalls on the first plurality of mandrels to form a double spacer comprising remaining spacer nitride and remaining spacer oxide.

2.
The semiconductor device fabrication process as recited in claim 1, wherein the process further comprises:
deposit a second oxide layer over the double spacers and exposed areas of the first polysilicon layer; and
remove portions of the second oxide layer and the double spacer until the remaining first nitride in the first plurality of mandrels are reached, wherein the second polysilicon in the first plurality of mandrels is completely removed, wherein on top of the first polysilicon layer are alternating regions comprising the remaining spacer nitride, the remaining spacer oxide, the remaining first nitride, and remaining second oxide.

3. The semiconductor device fabrication process as recited in claim 2, wherein at least one mandrel of the first plurality of mandrels has a width for spacing between two groups of metal patterns to be formed.

4. The semiconductor device fabrication process as recited in claim 2, wherein the remaining spacer nitride in the double spacer has a width used for spacing between metal patterns of the group of metal patterns to be formed.

5. The semiconductor device fabrication process as recited in claim 2, wherein the remaining spacer oxide in the double spacer has a width used for a width of metal patterns of the group of metal patterns to be formed.

6. The semiconductor device fabrication process as recited in claim 2, wherein the process further comprises:
remove each of the remaining spacer oxide and the remaining second oxide from the alternating regions on top of the first polysilicon layer; and
etch the first polysilicon layer in areas unprotected by the remaining spacer nitride and the remaining first nitride of the alternating regions until the first oxide layer is reached, which creates a second plurality of mandrels comprising the remaining spacer nitride with remaining first polysilicon underneath or the remaining first nitride with remaining first polysilicon underneath.

7. The semiconductor device fabrication process as recited in claim 6,
wherein the process further comprises:
remove each of the remaining spacer nitride and the remaining first nitride from the second plurality of mandrels; and
etch oxide trenches in the first oxide layer in areas where the first oxide layer is unprotected by the second plurality of mandrels.

8. The semiconductor device fabrication process as recited in claim 6, wherein the process further comprises:
etch oxide trenches in the first oxide layer in areas where the first oxide layer is unprotected by the second plurality of mandrels; and
remove each of the remaining spacer nitride and the remaining first nitride from the second plurality of mandrels.

9. The semiconductor device fabrication process as recited in claim 7, wherein the process further comprises:
remove the remaining first polysilicon from the second plurality of mandrels; and
deposit metal in the oxide trenches.

10. A semiconductor structure comprising:
a first polysilicon layer on top of a first oxide layer;
a first plurality of mandrels on top of the first polysilicon layer, each mandrel comprising second polysilicon on top of first nitride;
a first pair of sidewalls on each of the first plurality of mandrels, wherein each sidewall comprises spacer oxide; and
a second pair of sidewalls on each of the first pair of sidewalls, wherein each sidewall comprises spacer nitride, and wherein on each side of each mandrel of the first plurality of mandrels, a double spacer comprises a combination of the spacer oxide and the spacer nitride.

11. The semiconductor structure as recited in claim 10, further comprising a second oxide layer deposited over the double spacers and exposed areas of the first polysilicon layer.

12. The semiconductor structure as recited in claim 11, wherein at least one mandrel of the first plurality of mandrels has a width for spacing between two groups of metal patterns to be formed.

13.
The semiconductor structure as recited in claim 11, wherein the remaining spacer nitride in the double spacer has a width used for spacing between metal patterns of the group of metal patterns to be formed.

14. The semiconductor structure as recited in claim 11, wherein the remaining spacer oxide in the double spacer has a width used for a width of metal patterns of the group of metal patterns to be formed.

15. A non-transitory computer readable storage medium storing program instructions, wherein the program instructions for performing a semiconductor fabrication process are executable by a processor to:
form a first nitride layer on top of a first polysilicon layer which is on top of a first oxide layer;
deposit a second polysilicon layer on top of the first nitride layer;
deposit a photoresist layer on top of the second polysilicon layer;
etch photoresist trenches in the photoresist layer until the second polysilicon layer is reached, wherein at least one photoresist trench has a width used for a group of metal patterns to be formed;
etch each of the second polysilicon layer and the first nitride layer in the photoresist trenches until the first polysilicon layer is reached which creates a first plurality of mandrels comprising remaining photoresist, remaining second polysilicon and remaining first nitride;
remove the remaining photoresist from the first plurality of mandrels;
deposit a conformal spacer oxide layer over the first plurality of mandrels and exposed areas of the first polysilicon layer;
etch the conformal spacer oxide layer leaving sidewalls on each of the first plurality of mandrels;
deposit a conformal spacer nitride layer over the first plurality of mandrels and exposed areas of the first polysilicon layer; and
etch the conformal spacer nitride layer leaving sidewalls on the first plurality of mandrels to form a double spacer comprising remaining spacer nitride and remaining spacer oxide.

16.
The non-transitory computer readable storage medium as recited in claim 15, wherein the program instructions are further executable by a processor to:
deposit a second oxide layer over the double spacers and exposed areas of the first polysilicon layer; and
remove portions of the second oxide layer and the double spacer until the remaining first nitride in the first plurality of mandrels are reached, wherein the second polysilicon in the first plurality of mandrels is completely removed, wherein on top of the first polysilicon layer are alternating regions comprising the remaining spacer nitride, the remaining spacer oxide, the remaining first nitride, and remaining second oxide.

17. The non-transitory computer readable storage medium as recited in claim 16, wherein at least one mandrel of the first plurality of mandrels has a width for spacing between two groups of metal patterns to be formed.

18. The non-transitory computer readable storage medium as recited in claim 16, wherein the program instructions are further executable by a processor to:
remove each of the remaining spacer oxide and the remaining second oxide from the alternating regions on top of the first polysilicon layer;
etch the first polysilicon layer in areas unprotected by the remaining spacer nitride and the remaining first nitride of the alternating regions until the first oxide layer is reached, which creates a second plurality of mandrels comprising the remaining spacer nitride with remaining first polysilicon underneath or the remaining first nitride with remaining first polysilicon underneath; and
remove each of the remaining spacer nitride and the remaining first nitride from the second plurality of mandrels.

19.
The non-transitory computer readable storage medium as recited in claim 18, wherein the program instructions are further executable by a processor to:
form a photoresist layer on top of the remaining polysilicon of the second plurality of mandrels;
etch through each of the photoresist layer and the remaining polysilicon in areas for extra metal tracks; and
remove the remaining photoresist.

20. The non-transitory computer readable storage medium as recited in claim 19, wherein the program instructions are further executable by a processor to:
etch oxide trenches in the first oxide layer in areas where the first oxide layer is unprotected by the second plurality of mandrels;
remove the remaining first polysilicon from the second plurality of mandrels; and
deposit metal in the oxide trenches.
DOUBLE SPACER IMMERSION LITHOGRAPHY TRIPLE PATTERNING FLOW AND METHOD

BACKGROUND

Description of the Relevant Art

[0001] As both semiconductor manufacturing processes advance and on-die geometric dimensions reduce, semiconductor chips provide more functionality and performance while consuming less space. While many advances have been made, design issues still arise with modern techniques in processing and integrated circuit design that may limit potential benefits. For example, as the number and size of signal routes used in a design increase, the area consumed by the corresponding metal wires also increases. To achieve reductions in the width and pitch of metal wires, relatively expensive processing techniques are used. In addition, these relatively expensive processing techniques are also relatively new and accordingly have a relatively high defect rate.

[0002] In view of the above, efficient methods and systems for fabricating metal wires while managing semiconductor processing yield and decreasing signal congestion are desired.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] The advantages of the methods and mechanisms described herein may be better understood by referring to the following description in conjunction with the accompanying drawings, in which:

[0004] FIG. 1 is a generalized diagram of a top view of a standard cell layout.

[0005] FIG. 2 is a generalized diagram of another top view of a standard cell layout highlighting the use of a group of signal tracks.

[0006] FIG. 3 is a generalized diagram of a cross-sectional view of semiconductor metal patterns being fabricated.

[0007] FIG. 4 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0008] FIG. 5 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0009] FIG. 6 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0010] FIG.
7 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0011] FIG. 8 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0012] FIG. 9 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0013] FIG. 10 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0014] FIG. 11 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0015] FIG. 12 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0016] FIG. 13 is a generalized diagram of a method for fabricating metal patterns to be used for metal tracks.

[0017] FIG. 14 is a generalized diagram of another method for fabricating metal patterns to be used for metal tracks.

[0018] FIG. 15 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0019] FIG. 16 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0020] FIG. 17 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0021] FIG. 18 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0022] FIG. 19 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0023] FIG. 20 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0024] FIG. 21 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0025] FIG. 22 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0026] FIG.
23 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated.

[0027] FIG. 24 is a generalized diagram of another method for fabricating metal patterns to be used for metal tracks.

[0028] FIG. 25 is a generalized diagram of a cross-sectional view of semiconductor metal patterns being fabricated using alternative steps.

[0029] FIG. 26 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated using alternative steps.

[0030] FIG. 27 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated using alternative steps.

[0031] FIG. 28 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated using alternative steps.

[0032] FIG. 29 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated using alternative steps.

[0033] FIG. 30 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated using alternative steps.

[0034] FIG. 31 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated using alternative steps.

[0035] FIG. 32 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated using alternative steps.

[0036] FIG. 33 is a generalized diagram of another cross-sectional view of semiconductor metal patterns being fabricated using alternative steps.

[0037] FIG. 34 is a generalized diagram of another method for fabricating metal patterns to be used for metal tracks.

[0038] While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail.
It should be understood, however, that drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the scope of the present invention as defined by the appended claims.

DETAILED DESCRIPTION

[0039] In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one having ordinary skill in the art should recognize that the various embodiments may be practiced without these specific details. In some instances, well-known structures, components, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements.

[0040] In various embodiments, a standard cell uses one or more groups of signal wires for signal routing. In some embodiments, the standard cell uses a first group at the top of the cell for horizontal signal routes and a second group at the bottom of the cell for horizontal signal routes. Each group uses two or more metal tracks for the signal wires. In some embodiments, these metal tracks use a local interconnect metal layer such as metal 0. The width of the metal and the spacing between the metals are significantly small and created by a semiconductor fabrication process with a relatively high resolution.
The high resolution allows for multiple contacts to be placed on trench silicide contacts and metal gates where they interconnect with either of the top group and the bottom group of metal tracks using the local interconnect.

[0041] The multiple locations provide efficient signal and power routing within the standard cell so the chance of using another metal layer other than the local interconnect is significantly reduced. For example, PMOS FETS (p-type metal oxide semiconductor field effect transistors, or pfets) at the top of the standard cell have access to multiple potential locations for contacts within the top group of metal tracks using the local interconnect. Similarly, the NMOS FETS (n-type metal oxide semiconductor field effect transistors, or nfets) at the bottom of the standard cell have access to multiple potential locations for contacts within the bottom group of metal tracks using the local interconnect. The flexibility offered by the multiple potential locations for contacts within these groups eliminates using other metal interconnects, such as Metal 1 or Metal 2, and the corresponding contacts for routing signals and power.

[0042] In order to create the groups of metal tracks using the local interconnect, a semiconductor structure is fabricated using a first polysilicon layer on top of a first oxide layer. Multiple mandrels are on top of the first polysilicon layer where each mandrel includes a second polysilicon on top of a first nitride. The semiconductor structure includes a first pair of sidewalls on each of the multiple mandrels, wherein each sidewall uses spacer oxide. A second pair of sidewalls is on each of the first pair of sidewalls, where each of these sidewalls uses spacer nitride. Therefore, on each side of each mandrel is a double spacer using a combination of the spacer oxide and the spacer nitride.

[0043] A second oxide layer is deposited over the double spacers and exposed areas of the first polysilicon layer.
At least one mandrel of the multiple mandrels has a width used for spacing between two groups of metal patterns to be formed. The remaining spacer nitride in the double spacer has a width used for spacing between metal patterns of the group of metal patterns to be formed. The remaining spacer oxide in the double spacer has a width used for a width of metal patterns of the group of metal patterns to be formed. A series of fabrication process steps follow where particular areas of the semiconductor structure are etched in a particular order to form the groups of metal patterns. In the following description, figures 1-2 illustrate the layout used for the standard cell using the groups of metal tracks. Figures 3-12 illustrate cross-sectional views of the semiconductor structure being fabricated. Figures 13-14 provide steps of a method for fabricating the semiconductor structure. Figures 15-23 illustrate cross-sectional views of the semiconductor structure being further fabricated to create the groups of metal patterns. Figure 24 provides steps of a method for further fabricating the semiconductor structure in order to create the groups of metal patterns. Figures 25-33 illustrate cross-sectional views of the semiconductor structure being further fabricated with alternate processing steps to create the groups of metal patterns. Figure 34 provides steps of a method for further fabricating the semiconductor structure with alternate processing steps in order to create the groups of metal patterns.

[0044] Referring to FIG. 1, a generalized block diagram of a top view of a standard cell layout 100 is shown. Here, the active regions are not shown in the standard cell layout 100 for ease of illustration. In the illustrated embodiment, the standard cell layout 100 is for a six device multiplexer. However, the fabrication techniques shown in FIGs. 3-23 and 24-33 can be used for a variety of other standard cells used for other complex gates and functional units.
As used herein, device is also referred to as transistor. For the six device multiplexer, the PMOS FETS (p-type metal oxide semiconductor field effect transistors, or pfets) are at the top of the standard cell layout 100. The NMOS FETS (n-type metal oxide semiconductor field effect transistors, or nfets) are at the bottom of the standard cell layout 100.

[0045] In various embodiments, the transistors in the standard cell layout 100 are non-planar transistors. Non-planar transistors are a relatively recent development in semiconductor processing for reducing short channel effects. Tri-gate transistors, Fin field effect transistors (FETs) and gate all around (GAA) transistors are examples of non-planar transistors. Next, the materials used in the layout 100 are described.

[0046] As shown, the standard cell layout 100 uses metal gate 110 in a vertical direction, trench silicide contacts 120 for the source and drain regions in the vertical direction, and metal 0 (M0 or Metal0) 130 for local interconnections in the horizontal direction. In one embodiment, a self-aligned gate and local interconnect process in addition to a gate open contact process is used to create the full trench silicide straps. As shown, contacts 140 are used for connecting the metal gate 110 to Metal0 130 and contacts 142 are used for connecting the trench silicide contact 120 to Metal0 130. The standard cell layout 100 additionally uses metal 1 (M1 or Metal1) 150 for local interconnections in the vertical direction and vias 152 for connecting the horizontal interconnect Metal0 130 to the vertical interconnect Metal1 150.

[0047] Layout 100 uses power pins at the top and ground pins at the bottom. As shown, layout 100 does not use power rails anywhere. The vertical Metal1 150 routing at the top provides flexible connection to horizontal metal 2 (M2 or Metal2) 170 for creating power connections.
The vertical Metal1 150 routing at the bottom provides flexible connection to Metal2 170 tracks for creating ground connections. The vias 160 are used to connect the vertical Metal1 150 tracks to the horizontal Metal2 170 tracks. As shown, connections using the vias 160 are made in each of the four corners of layout 100.[0048] In the illustrated embodiment, the layout 100 uses a group 102 at the top for routing three horizontal signal routes with the horizontal Metal0 130 local interconnect. In addition, the layout 100 uses a group 104 at the bottom for routing three horizontal signal routes with the horizontal Metal0 130 local interconnect. Each of the groups 102 and 104 uses three horizontal tracks for routing three horizontal signal wires with a given width and pitch. The groups 102 and 104 are also referred to as "triplet" groups. Although each of the groups 102 and 104 is shown to use three horizontal tracks, in other embodiments, any other number of multiple horizontal tracks is used. A spacing exists between the two groups 102 and 104, which can be used for additional signal routing tracks beyond the multiple horizontal tracks used in the groups 102 and 104.[0049] In some embodiments, the devices in the standard cell layout 100 are fabricated by one of many fabrication techniques. Examples of the fabrication techniques are the many immersion lithography techniques, the double patterning technique, the extreme ultraviolet lithography (EUV) technique, and the directed self-assembly (DSA) lithography technique. In some embodiments, the EUV technique provides more flexibility for via and contact modules relative to other techniques.[0050] Fabrication techniques have a variety of issues. One issue is throughput, which is the rate of the number of wafers or dies produced per unit time such as per hour or per day. 
A second issue is yield, which is the number of productive dies able to be used in a product compared to the total number of dies fabricated. A third issue is resolution, which is the smallest feature the fabrication process is able to produce. One example of such a feature is the length of a transistor (device). The fabrication process is able to place a source region and a drain region, which are two separate but adjacent regions, next to each other with a smallest distance between them such that the two regions are still distinguished from one another. This distance is the length of the transistor being fabricated, which is the feature (and the resolution).[0051] Another example of the feature is the distance between two metal wires. The smallest distance between the mid-point of a first metal wire of a particular metal layer and the mid-point of a second metal wire of the same particular metal layer is the pitch. In addition, another example is the smallest width of a metal wire for a particular metal layer. The fabrication process has multiple distances used to characterize the fabrication process. Each of the multiple distances is the smallest distance used for a particular material of the many different materials on the die that provides a target yield. The smallest of all of these distances is used to define the resolution of the fabrication process. The other distances are used for design rules to ensure reliable circuit fabrication based on the targeted yield.[0052] In the illustrated embodiment, the relatively high resolution provided by the selected fabrication technique allows for three locations for contacts to be placed on the trench silicide contact 120 and the metal gate 110 where they interconnect with either the group 102 or the group 104. The three locations provide efficient signal and power routing within the standard cell so that it becomes less likely to use another metal layer other than the horizontal Metal0 130 local interconnect. 
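The pitch and resolution definitions above amount to simple arithmetic; a minimal sketch follows, where all dimensions are assumed example values in nanometers, not figures taken from this description:

```python
# Illustrative sketch of the pitch/resolution definitions above.
# All numbers are assumed example values (nanometers), not from the source.

def pitch(line_width, line_spacing):
    """Pitch: mid-point to mid-point distance of two adjacent wires of the
    same metal layer, i.e. one line width plus one space."""
    return line_width + line_spacing

# Smallest reliable distance for each feature of a hypothetical process:
min_distances = {
    "transistor_length": 7,
    "metal_width": 20,
    "metal_spacing": 16,
}

# The resolution of the process is the smallest of these characteristic
# distances; the remaining distances become design rules.
resolution = min(min_distances.values())

print(pitch(20, 16))  # 36
print(resolution)     # 7
```

As the description notes, only the single smallest distance defines the resolution; the other per-material distances survive as design rules for the targeted yield.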
For example, the pfets at the top of layout 100 have access to three potential locations for contacts within the group 102.[0053] Similar to the pfets having access to three potential locations for contacts within the group 102, the nfets at the bottom of layout 100 have access to three potential locations for contacts within the group 104. The flexibility offered by the three potential locations for contacts within groups 102 and 104 eliminates the need to use other metal interconnects, such as vertical Metal1 or horizontal Metal2, and the corresponding contacts for routing signals and power. Again, although each of the groups 102 and 104 is shown to use three horizontal tracks, in other embodiments, any other number of multiple horizontal tracks is used. Therefore, another number of potential locations for using contacts in the groups 102 and 104 for the trench silicide contact 120 and the metal gate 110 is also possible and contemplated.[0054] Referring to FIG. 2, a generalized block diagram of another top view of a standard cell layout 200 is shown. Layout elements described earlier are numbered identically. Here, the layout 200 is the same as the layout 100, but for ease of illustration, layout 200 only shows the metal gates 110, the trench silicide contacts 120, the Metal0 130, contacts 140 for connecting the metal gate 110 to Metal0 130, and contacts 142 for connecting the trench silicide contact 120 to Metal0 130. [0055] The horizontal groups 102 and 104 of Metal0 130 are shown again. The layout 200 uses group 102 at the top for routing three horizontal signal routes with the horizontal Metal0 130 local interconnect. In addition, the layout 200 uses group 104 at the bottom for routing three horizontal signal routes with the horizontal Metal0 130 local interconnect. 
A spacing 230 exists between the two groups 102 and 104, which can be used for additional signal routing tracks.[0056] The relatively high resolution provided by the selected fabrication technique allows for many locations for contacts to be placed on the trench silicide contact 120 and the metal gate 110. Here, the number of locations is shown as three locations for the three horizontal tracks within each of the groups 102 and 104. However, any other number of multiple tracks, and thus potential locations for contacts, is possible and contemplated. The locations for contacts provide efficient signal and power routing within the standard cell so that it becomes less likely to use another metal layer other than the horizontal Metal0 130 local interconnect.[0057] In some embodiments, the extreme ultraviolet lithography (EUV) technique is used to provide the resolution of each of the width and the pitch of the horizontal Metal0 130 routes in the groups 102 and 104. The EUV technique uses an extreme ultraviolet wavelength to reach resolution below 40 nanometers. The extreme ultraviolet wavelength is approximately 13.5 nanometers. Relatively high temperature and high density plasma is used to provide the EUV beam.[0058] In other embodiments, the resolution of each of the width and the pitch of the horizontal Metal0 130 routes in the groups 102 and 104 is set by the immersion lithography technique. Immersion lithography uses a liquid medium, such as purified water, between the lens of the imaging equipment and the wafer surface. Previously, the gap space was simply air. The resolution achieved by this technique is the resolution of the imaging equipment improved by a factor equal to the refractive index of the liquid medium. In some examples, the achievable resolution falls above 80 nanometers.[0059] In other embodiments, the double patterning technique is used to provide the resolution of each of the width and the pitch of the horizontal Metal0 130 routes in the triplet groups 102 and 104. 
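The resolution ranges quoted above (below 40 nanometers for EUV, above 80 nanometers for single-exposure immersion) can be reproduced with the Rayleigh criterion, R = k1 * wavelength / NA, where immersion raises the effective numerical aperture by the refractive index of the liquid. A sketch follows; the k1 and NA values are illustrative assumptions, not values stated in this description:

```python
# Rayleigh criterion sketch: R = k1 * wavelength / NA.
# The k1 and NA values below are assumptions chosen for illustration.

def rayleigh_resolution(k1, wavelength_nm, na):
    """Smallest printable feature (nm) for a given process factor k1,
    exposure wavelength (nm) and numerical aperture."""
    return k1 * wavelength_nm / na

K1 = 0.6             # assumed process factor for single exposure
DRY_NA = 0.93        # assumed dry ArF numerical aperture
WATER_INDEX = 1.44   # approximate refractive index of purified water at 193 nm

dry = rayleigh_resolution(K1, 193.0, DRY_NA)
# Immersion raises the effective NA by the refractive index of the liquid.
immersion = rayleigh_resolution(K1, 193.0, DRY_NA * WATER_INDEX)
euv = rayleigh_resolution(K1, 13.5, 0.33)  # assumed EUV NA

print(round(dry), round(immersion), round(euv))  # 125 86 25
```

With these assumed constants the immersion figure lands above 80 nanometers and the EUV figure below 40 nanometers, consistent with the ranges in the text; double patterning splits one dense pattern across two exposures to reach the intermediate 40-80 nanometer range.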
The double patterning technique uses immersion lithography systems to define features with resolution between 40 and 80 nanometers. Either the self-aligned double patterning (SADP) technique or the litho-etch-litho-etch (LELE) technique is used. The double patterning technique counteracts the effects of diffraction in optical lithography, which occurs when the minimum dimensions of features on a wafer are less than the 193 nanometer wavelength of the illuminating light source. Other examples of techniques used to counteract the effects of diffraction in optical lithography are phase-shift masks, optical-proximity correction (OPC) techniques, optical equipment improvements and computational lithography.[0060] When selecting between immersion lithography, double patterning, EUV and DSA techniques, and other techniques, cost is considered, as the cost increases from immersion lithography to EUV. However, over time, the costs of these techniques change and additional, newer techniques are developed for providing relatively high resolution for the width and the pitch of the horizontal Metal0 130 routes in the groups 102 and 104. Accordingly, one of a variety of lithography techniques is used to provide relatively high resolution for the width and the pitch. In the upcoming description of FIGs. 3-23, the fabrication steps for a double spacer immersion lithography triple patterning technique are described which provide the resolution of each of the width and the pitch of the horizontal Metal0 130 routes in the groups 102 and 104.[0061] Turning to FIG. 3, a generalized block diagram of a cross-sectional view of semiconductor metal patterns being fabricated is shown. Here, a stack of layers is deposited on an oxide layer 310 of a controlled thickness. In various embodiments, the oxide layer 310 is an inter-level dielectric (ILD). The ILD is used to insulate metal layers which are used for interconnects. In some embodiments, the ILD is silicon dioxide. 
In other embodiments, the ILD is one of a variety of low-k dielectrics containing carbon or fluorine. The low-k dielectrics provide a lower capacitance between the metal layers, and thus reduce performance loss, power consumption and cross talk between interconnect routes.[0062] In the illustrated embodiment, the stack of layers uses a polysilicon layer 320 on top of the oxide layer 310, a nitride layer 330 on top of the polysilicon layer 320, and another polysilicon layer 322 on top of the nitride layer 330. In various embodiments, the nitride layer 330 is silicon nitride (SiN).[0063] Referring to FIG. 4, a generalized block diagram of another cross-sectional view of semiconductor metal patterns being fabricated is shown. For FIGs. 4-23, process materials described earlier are numbered identically. Here in FIG. 4, a photoresist layer 410 is formed on top of the top-most polysilicon layer 322 and etched with a repeating, relatively uniform spacing. In various embodiments, the etching with this repeated spacing forms trenches 420 and 422 in the photoresist 410 that are approximately equally spaced. One of a variety of lithography techniques is used to reduce the pitch (increase the frequency) of the trenches 420 and 422 in the photoresist 410.[0064] The area on the polysilicon layer 322 within these trenches 420 and 422 in the photoresist 410 is the area to be used for creating metal wires by fabricating semiconductor metal patterns. For example, referring briefly again to FIG. 2, each of the groups 102 and 104 is shown with three horizontal signal tracks with the horizontal Metal0 130 local interconnect. In various embodiments, these three horizontal signal tracks are fabricated within the trenches 420 and 422, which will be shown in later steps of the fabrication process. Again, although each of the groups 102 and 104 is shown to use three horizontal tracks, in other embodiments, any other number of multiple horizontal tracks is used. 
As described earlier, the spacing 230 shown in FIG. 2 between the two groups 102 and 104 provides additional signal routing tracks beyond the multiple horizontal tracks used in the groups 102 and 104. In FIG. 4, the width of the remaining photoresist 410 on the polysilicon layer 322 determines the spacing 230 between the groups 102 and 104. Therefore, to increase the spacing 230 between the groups 102 and 104, the width of the remaining photoresist 410 on the polysilicon layer 322 is made wider.[0065] Turning to FIG. 5, a generalized block diagram of another cross-sectional view of semiconductor metal patterns being fabricated is shown. As shown, the semiconductor device fabrication process etches trenches into areas of the top-most polysilicon layer 322 unprotected by the photoresist layer 410. Following, the process etches trenches into areas of the nitride layer 330 unprotected by the photoresist layer 410 resulting in the shown cross-sectional view.[0066] Referring to FIGs. 6-8, generalized block diagrams of other cross-sectional views of semiconductor metal patterns being fabricated are shown. In FIG. 6, the photoresist layer 410 is stripped. In FIG. 7, the semiconductor device fabrication process deposits a conformal spacer oxide layer 710 over the top-most polysilicon layer 322, the nitride layer 330 and the bottom polysilicon layer 320. In FIG. 8, the semiconductor device fabrication process, which is also referred to as the fabrication process, etches the spacer oxide layer 710 leaving sidewalls of spacer oxide 710 on either side of the top-most polysilicon layer 322 and the nitride layer 330.[0067] Turning now to FIGs. 9-10, generalized block diagrams of other cross-sectional views of semiconductor metal patterns being fabricated are shown. As shown in FIG. 9, a conformal nitride layer 910 is deposited over the spacer oxide layer 710 and the polysilicon layer 322. Following, the spacer nitride layer 910 is etched as shown in FIG. 10. 
The spacer oxide layer 710 and the spacer nitride layer 910 together form a double spacer around the mandrel, which includes the polysilicon 322 and the nitride 330.[0068] Referring to FIGs. 11-12, generalized block diagrams of other cross-sectional views of semiconductor metal patterns being fabricated are shown. In FIG. 11, the oxide layer 1110 is deposited over the spacer nitride layer 910 and the mandrels. As shown in a later fabrication step, the areas where the oxide layer 710 and the oxide layer 1110 contact the polysilicon layer 320 define the areas where metal will be deposited for metal wires. In addition, as shown in a later fabrication step, the areas where the nitride layers 330 and 910 contact the polysilicon layer 320 define the areas used for spacing between the metal wires to be deposited. Although the diagram is not drawn to scale, it can be seen that adjusting the widths of the nitride layers 330 and 910, in addition to the widths of the oxide layers 710 and 1110 making contact with the polysilicon layer 320, defines the widths and spacing used for the upcoming metal patterns. The semiconductor structure illustrated in FIG. 11 is used by one of multiple further fabrication steps to create the groups of metal patterns and any extra metal tracks in the spacing between the groups of metal patterns.[0069] In FIG. 12, the fabrication process uses a chemical mechanical planarization (CMP) step to remove multiple layers shown earlier in FIG. 11 until the nitride layer 330 is reached. The multiple layers are the oxide layer 1110, the spacer nitride layer 910, the polysilicon 322, and the spacer oxide layer 710. The polysilicon layer 322 is completely removed in the illustrated embodiment. The CMP step polishes the remaining material corresponding to the layers 322, 710, 910 and 1110. The CMP step achieves a near-perfect flat and smooth surface upon which further layers are built. 
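The alternating regions left after the double-spacer flow and CMP step just described can be sketched with a small one-dimensional model. The segment ordering follows FIGs. 7-12; all dimensions and the helper name are illustrative assumptions, not values from this description:

```python
# Minimal 1-D sketch of the double-spacer flow (mandrel -> spacer oxide ->
# spacer nitride -> oxide fill -> CMP). All dimensions are assumed example
# values, not from the source.

def double_spacer_surface(mandrel_w, gap_w, spacer_ox, spacer_ni):
    """Return the alternating (material, width) segments left on top of the
    bottom polysilicon after CMP, for one mandrel-gap-mandrel unit.

    mandrel_w : width of the nitride/polysilicon mandrel (spacing between groups)
    gap_w     : opening between two adjacent mandrels
    spacer_ox : conformal spacer-oxide thickness (sets metal width)
    spacer_ni : conformal spacer-nitride thickness (sets metal-to-metal space)
    """
    fill_ox = gap_w - 2 * (spacer_ox + spacer_ni)  # oxide fill in the middle
    assert fill_ox > 0, "gap too small for the double spacer"
    return [
        ("nitride", mandrel_w),   # mandrel nitride: spacing between groups
        ("oxide",   spacer_ox),   # spacer oxide: first metal line
        ("nitride", spacer_ni),   # spacer nitride: space inside the group
        ("oxide",   fill_ox),     # oxide fill: middle metal line
        ("nitride", spacer_ni),
        ("oxide",   spacer_ox),
        ("nitride", mandrel_w),
    ]

segments = double_spacer_surface(mandrel_w=60, gap_w=100, spacer_ox=20, spacer_ni=16)
# Oxide segments become metal lines (FIGs. 15-19); nitride segments become spaces.
metal_widths = [w for material, w in segments if material == "oxide"]
print(metal_widths)  # [20, 28, 20]
```

The sketch makes the geometric relationship explicit: the spacer-oxide thickness sets the outer metal widths, the spacer-nitride thickness sets the in-group spacing, and the mandrel width sets the spacing between groups.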
The flat and smooth surface contains alternating oxide and nitride regions on top of the polysilicon layer 320.[0070] Turning now to FIG. 13, one embodiment of a method 1300 for fabricating metal patterns to be used for metal tracks is shown. For purposes of discussion, the steps in this embodiment (as well as in figures 14, 24 and 34) are shown in sequential order. However, in other embodiments some steps occur in a different order than shown, some steps are performed concurrently, some steps are combined with other steps, and some steps are absent.[0071] In various embodiments, an oxide layer is formed on top of a substrate. In some embodiments, a plasma-enhanced chemical vapor deposition (PECVD) process is used to place the oxide layer on the substrate. A first polysilicon layer is deposited on top of the oxide layer (block 1302). Afterward, a nitride layer is formed on top of the first polysilicon layer (block 1304). In various embodiments, the nitride layer is silicon nitride (SiN). Following, a second polysilicon layer is formed on the nitride layer (block 1306). In some embodiments, the second polysilicon layer has a thickness greater than the thickness of the first polysilicon layer.[0072] A photoresist layer is formed on top of the second polysilicon layer (block 1308). A distance for spacing between groups of metal patterns to be formed is determined (block 1310). The determined distance sets the spacing between the groups of metal patterns to be formed later. Briefly referring again to FIG. 2, the spacing 230 can be used for additional signal routing tracks between the groups 102 and 104. The determined distance sets the width of the remaining photoresist on the second polysilicon layer after an etching fabrication step (block 1312). [0073] The etching is done to create particular spacing between the remaining photoresist and to set the width of the remaining photoresist based on the determined distance. 
The spacing between the remaining photoresist sets the area for a group of metal patterns to be formed later. Therefore, to increase the spacing between groups of later metal patterns, the determined distance is increased and the width of the remaining photoresist on the polysilicon layer will be made wider. Similarly, to decrease the spacing between groups of later metal patterns, the determined distance is decreased and the width of the remaining photoresist on the polysilicon layer will be reduced.[0074] Trenches are etched into areas of the second polysilicon layer unprotected by the photoresist layer (block 1314). Following, trenches are etched into areas of the nitride layer unprotected by the photoresist layer (block 1316). Afterward, the photoresist layer is stripped (block 1318). The resulting columns (mandrels) on the first polysilicon layer contain the second polysilicon layer on top of the nitride layer.[0075] Referring to FIG. 14, one embodiment of a method 1400 for fabricating metal patterns to be used for metal tracks is shown. A conformal spacer oxide layer is deposited over a first polysilicon layer and mandrels (columns) on top of the first polysilicon layer (block 1402). In various embodiments, the columns contain a second polysilicon layer on top of a nitride layer. The conformal spacer oxide layer is etched (block 1404) leaving sidewalls of spacer oxide on either side of the mandrels. The thickness of the remaining spacer oxide layer on the sidewalls of the mandrels sets the width of a metal pattern to be formed later.[0076] A conformal spacer nitride layer is deposited over exposed areas of the first polysilicon layer and over the mandrels (columns) on top of the first polysilicon layer (block 1406). The conformal spacer nitride layer is etched (block 1408) leaving sidewalls of spacer nitride on either side of the mandrels. 
The thickness of the remaining spacer nitride layer on the sidewalls of the mandrels sets the width of spacing between metal patterns to be formed later. Accordingly, this width is used to set the pitch between metal patterns to be formed later. The remaining sidewall spacer oxide layer and spacer nitride layer together form a double spacer around the mandrels.[0077] An oxide layer is deposited over the exposed areas of the first polysilicon layer and the double spacer (block 1410). Each of the deposited top-most oxide layer, the double spacer and the mandrels is removed until the nitride layer 330 is reached (block 1412). The multiple layers removed are the top-most deposited oxide layer, a portion of the spacer nitride layer within the double spacer, a portion of the spacer oxide layer within the double spacer, and the entire second polysilicon layer within the mandrels. In various embodiments, a chemical mechanical planarization (CMP) step is used to remove these multiple layers and to polish the remaining material. The CMP step achieves a near-perfect flat and smooth surface upon which further layers are built.[0078] Referring to FIGs. 15-16, generalized block diagrams of other cross-sectional views of semiconductor metal patterns being fabricated are shown. In FIG. 15, each of the oxide layers 710 and 1110 in addition to the polysilicon layer 320 is etched until the oxide layer 310 is reached. This etching further creates the regions for later metallization. In FIG. 16, the nitride layer 330 and the spacer nitride layer 910 are stripped leaving the polysilicon layer 320 exposed.[0079] Turning to FIGs. 17-19, generalized block diagrams of other cross-sectional views of semiconductor metal patterns being fabricated are shown. In these diagrams, further etching is performed in addition to metallization. In FIG. 17, the fabrication process etches trenches into areas of the oxide layer 310 which are unprotected by the polysilicon layer 320. In FIG. 
18, the polysilicon layer 320 is etched away followed by a metallization step shown in FIG. 19. The metallization step deposits the metal layer 1910 in the etched trenches. Referring briefly again to FIG. 10, it can be seen that the width of the metal wires is set by the width of the oxide layer 710 of the double spacer making contact with the polysilicon layer 320 and the width of the oxide layer 1110 making contact with the polysilicon layer 320. The spacing between the metal wires is set by the width of the nitride layer 330 shown in FIG. 10. The spacing between the metal wires is also set by the width of the nitride layer 910 of the double spacer.[0080] In one embodiment, the metal layer 1910 is copper. In another embodiment, the metal layer 1910 is aluminum or a copper and aluminum mix. In some embodiments, the metal layer 1910 is formed by a dual damascene process. In other embodiments, the metal layer 1910 is formed by a single damascene process. Other techniques are possible and contemplated for forming the metal layer 1910. In embodiments with copper used as the metal layer 1910, a liner using a tantalum (Ta) based barrier material is deposited on the inter-level dielectric (ILD), which is the oxide layer 310, before the metal layer 1910 is formed. The liner prevents the copper from diffusing into the oxide layer 310 and acts as an adhesion layer for the copper. Next, a thin copper seed layer is deposited by physical vapor deposition (PVD) followed by electroplating of copper. In other embodiments, cobalt, tungsten, other metals or carbon nanotubes are used in place of copper.[0081] Referring to FIGs. 20-23, generalized block diagrams of other cross-sectional views of semiconductor metal patterns being fabricated are shown. FIGs. 20-23 illustrate alternative steps to use in the fabrication process compared to the steps described above for FIGs. 15-19. Here, FIG. 20 is the same as the earlier FIG. 
15 where each of the oxide layers 710 and 1110 in addition to the polysilicon layer 320 are etched until the oxide layer 310 is reached, thus creating regions for later metallization. FIG. 20 shows the etching steps after the CMP step to remove multiple layers shown earlier in FIG. 11 until the nitride layer 330 is reached. In FIG. 21, the fabrication process etches trenches into areas of the oxide layer 310 unprotected by the nitride layers 330 and 910 as well as the polysilicon layer 320. In FIG. 22, each of the nitride layers 330 and 910 as well as the polysilicon layer 320 is etched away followed by a metallization step shown in FIG. 23.[0082] Turning now to FIG. 24, one embodiment of a method 2400 for fabricating metal patterns to be used for metal tracks is shown. A flat and smooth surface contains alternating oxide and nitride regions on top of a polysilicon layer. An oxide layer is below the polysilicon layer. Therefore, the multiple layers contain the oxide layer at the bottom and a polysilicon layer on top of the oxide layer. On top of the polysilicon layer are the alternating regions of polished oxide and nitride regions. In some embodiments, the widths of the alternating regions of polished oxide and nitride regions are relatively the same. The oxide region of the alternating oxide and nitride regions is etched and removed from the top of the polysilicon layer (block 2402).[0083] The exposed portions of the polysilicon layer in the same regions as the previously removed oxide are removed (etched) until the oxide layer underneath the polysilicon layer is reached (block 2404). In some embodiments, trenches are etched at this time into the oxide layer below the polysilicon layer. In other embodiments, the trenches are created later. 
If the trenches are etched later ("no" branch of the conditional block 2406), then the top alternating nitride regions are removed exposing the alternating polysilicon regions (block 2408). Following, the trenches are etched in the oxide layer below the alternating polysilicon regions where the below oxide layer is unprotected by the alternating polysilicon regions (block 2410). Next, the alternating polysilicon regions are removed (block 2412). Afterward, a metallization step deposits metal in the etched trenches (block 2418). In one embodiment, the metal is copper. In another embodiment, the metal is aluminum or a copper and aluminum mix. In other embodiments, cobalt, tungsten, other metals or carbon nanotubes are used.[0084] However, if the trenches are etched after the exposed portions of the polysilicon layer are removed ("yes" branch of the conditional block 2406), then the trenches are etched in the oxide layer below the alternating nitride and polysilicon mandrels where the below oxide layer is unprotected by the alternating mandrels (block 2414). Following, the top alternating nitride regions in the mandrels are removed exposing the alternating polysilicon regions (block 2416). Afterward, control flow of method 2400 moves to block 2412 where the alternating polysilicon regions are removed. [0085] Referring to FIGs. 25-33, generalized block diagrams of other cross-sectional views of semiconductor metal patterns being fabricated are shown. FIGs. 25-33 illustrate alternative steps to use in the fabrication process compared to the steps described above for FIGs. 15-23. Here, FIG. 25 is the same as the earlier FIG. 12 where the fabrication process uses a chemical mechanical planarization (CMP) step to remove multiple layers shown earlier in FIG. 11 until the nitride layer 330 is reached.[0086] As described earlier, the nitride layer 330 is not used within the double spacer constructed as shown earlier in FIGs. 10-11. 
Instead, the spacer nitride layer 910 is used to construct the double spacer. As shown in FIG. 25, the width of the nitride layer 330 in particular areas on the polysilicon layer 320, such as the far left, the far right and the center areas, is larger than the width of the nitride layer 330 used in other areas. As described earlier regarding FIG. 4, the width of the nitride layer 330 is used to define the width of spacing between metal patterns used for metal wires. The larger widths used for the nitride layer 330 in FIG. 25 are used to define spacing between the metal patterns to be fabricated.[0087] Referring now to FIG. 26, each of the oxide layers 710 and 1110 in addition to the polysilicon layer 320 are etched until the oxide layer 310 is reached. Regions for later metallization are created by this etching. As shown, the widths for spacing between metal patterns alternates between relatively narrow to relatively wide. For example, as shown, the far left, the far right and the middle columns (mandrels) are wider than the other columns. The columns use the nitride layers 330 and 910 on the top along with polysilicon layer 320 on the bottom. In FIG. 27, the nitride layer 330 and the spacer nitride layer 910 are stripped from the tops of the columns leaving the polysilicon layer 320 exposed.[0088] Turning to FIGs. 28-29, generalized block diagrams of other cross-sectional views of semiconductor metal patterns being fabricated are shown. In these diagrams, the fabrication process performs further etching. In FIG. 28, a photoresist layer 410 is formed on top of the polysilicon layer 320. As described earlier and briefly referring again to FIG. 2, the spacing 230 shown in FIG. 2 between the two groups 102 and 104 provides additional signal routing tracks beyond the multiple horizontal tracks used in the groups 102 and 104. In each figure of the previous FIGs. 25-27 and now FIG. 
28, the widths of the remaining far left, far right and center polysilicon 320 on the oxide layer 310 determine the spacing between groups of metal patterns.[0089] As highlighted in FIG. 28, the area 2810 is the width of the center polysilicon 320 on the oxide layer 310. The width of the area 2810 determines the width of the later spacing 230 between the groups 102 and 104, and provides area to later form one or more additional metal tracks between the groups 102 and 104. Therefore, to increase the later spacing 230 between the groups 102 and 104, the width of the remaining polysilicon 320 on the oxide layer 310 within the area 2810 is made wider as shown in each figure of the previous FIGs. 25-27 and now FIG. 28.[0090] In the illustrated embodiment shown in FIG. 28, one extra metal track is being placed in the later spacing 230 to be formed. Therefore, within the area 2810, the photoresist 410 is etched until the polysilicon layer 320 is reached. Although etching for a single extra metal track is shown, in other embodiments, another number of etchings is performed in the photoresist layer 410 for another number of extra metal tracks. The width of the etching in the area 2810 is equivalent to the width of the extra metal patterns to be formed later in the area 2810. Additionally highlighted in FIG. 28 is the area 2802, which is between the relatively wide remaining polysilicon layers 320. The area 2802 provides area to later form metal patterns such as the group 102. Similarly, the area 2804 provides area to later form metal patterns such as the group 104.[0091] In FIG. 28, the widths of the other remaining polysilicon 320 on the oxide layer 310 determine the spacing between the metal patterns formed later within the groups 102 and 104. These widths of the other remaining polysilicon 320 accordingly determine the pitch for the metal patterns later formed within the groups 102 and 104. 
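The relationship between the width of an area such as 2810 and the number of extra metal tracks it can host reduces to a small inequality: n tracks need n track widths plus n+1 minimum spaces. A sketch follows; the helper name and all dimensions are illustrative assumptions, not values from this description:

```python
# How many extra metal tracks fit in the spacing between the two groups?
# Each track needs its own width plus a minimum space on each side.
# All dimensions are assumed example values, not from the source.

def max_extra_tracks(spacing_width, track_width, min_space):
    """Largest n with n*track_width + (n+1)*min_space <= spacing_width."""
    n = 0
    while (n + 1) * track_width + (n + 2) * min_space <= spacing_width:
        n += 1
    return n

print(max_extra_tracks(spacing_width=70, track_width=20, min_space=16))  # 1
```

With these assumed dimensions the spacing hosts exactly one extra track, matching the single extra pattern in the illustrated embodiment; widening the remaining polysilicon in area 2810 raises the count.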
The widths of the photoresist 410 making contact with the oxide layer 310 determine the widths of the later metal patterns to be formed. In FIG. 29, within the area 2810, the polysilicon layer 320 is etched away until the oxide layer 310 is reached. This etching creates area 2910 which provides area for a later extra single metal pattern to be formed.[0092] Turning to FIGs. 30-33, generalized block diagrams of other cross-sectional views of semiconductor metal patterns being fabricated are shown. In these diagrams, further etching is performed in addition to metallization. In FIG. 30, the fabrication process strips away the photoresist layer 410. In FIG. 31, the fabrication process etches trenches into areas of the oxide layer 310 which are unprotected by the polysilicon layer 320. In FIG. 32, the polysilicon layer 320 is etched away followed by a metallization step shown in FIG. 33. The metallization step deposits the metal layer 1910 in the etched trenches. As described earlier, in some embodiments, the metal layer 1910 is copper. In other embodiments, the metal layer 1910 is aluminum or a copper and aluminum mix. In other embodiments, cobalt, tungsten, other metals or carbon nanotubes are used.[0093] As shown, each of the pattern groups 3302 and 3304 uses three metal patterns for three metal tracks. Although each of the groups 3302 and 3304 is shown to use three metal patterns, in other embodiments, any other number of metal patterns is used. In the illustrated embodiment, an extra metal pattern 3310 is located between the groups 3302 and 3304. The extra pattern 3310 provides an additional signal routing track beyond the groups 3302 and 3304. Although a single extra pattern is shown, any other number of extra patterns placed between the groups 3302 and 3304 is possible and contemplated.[0094] Turning now to FIG. 34, one embodiment of a method 3400 for fabricating metal patterns to be used for metal tracks is shown. 
A flat and smooth surface contains alternating oxide and nitride regions on top of a polysilicon layer. An oxide layer is below the polysilicon layer. Therefore, the multiple layers contain the oxide layer at the bottom and a polysilicon layer on top of the oxide layer. On top of the polysilicon layer are the alternating regions of polished oxide and nitride regions. In some embodiments, the widths of some of the polished nitride regions are appreciably wider than the widths of other nitride regions and the polished oxide regions. As described earlier regarding the previous FIG. 4 and FIG. 25, the width of the nitride layer 330 is used to define the width of spacing between metal patterns used for metal wires. The larger widths used for the nitride layer 330 are used to define spacing between the metal patterns to be fabricated.[0095] The oxide region of the alternating oxide and nitride regions is etched and removed from the top of the polysilicon layer (block 3402). The exposed portions of the polysilicon layer in the same regions as the previously removed oxide are removed until the oxide layer underneath the polysilicon layer is reached (block 3404). The top alternating nitride regions are removed exposing the alternating polysilicon regions (block 3406).[0096] In some embodiments, one or more extra metal tracks are created between the top and bottom groups of metal tracks in the standard cell. However, if no extra metal tracks are being created for the standard cell ("no" branch of the conditional block 3408), then trenches are etched in the oxide layer below the alternating polysilicon regions where the below oxide layer is unprotected by the alternating polysilicon regions (block 3410). Next, the alternating polysilicon regions are removed (block 3412). Afterward, a metallization step deposits metal in the etched trenches (block 3414). In one embodiment, the metal is copper.
In another embodiment, the metal is aluminum or a copper and aluminum mix.[0097] If extra metal tracks are being created for the standard cell ("yes" branch of the conditional block 3408), then a photoresist layer is formed on top of the alternating polysilicon regions (block 3416). In regions for the extra metal tracks, each of the photoresist layer and the relatively wide polysilicon region are etched until the oxide layer underneath the polysilicon region is reached (block 3418). The photoresist layer is removed (block 3420). Afterward, control flow of method 3400 moves to block 3410 where trenches are etched, followed by the steps in blocks 3412-3414 for completing metallization for the metal tracks. [0098] The processing steps illustrated above in FIGs. 3-22 provide a partial Immersion Lithography solution and a cost-reduced alternative to full EUV printing of certain limited layers with sub-EUV resolution, and enable more cost-effective Moore's law scaling at 5nm and 3nm technology nodes. Other processing techniques use double-patterned EUV with sidewall image transfer, but these types of processing techniques use 3 EUV masks, or 2 EUV masks + 1 Immersion mask, compared to two immersion masks and 1 EUV mask. One EUV mask = 3-4 Immersion masks in terms of cost. The invention has 5-6 Immersion mask cost equivalents compared to 9-12 Immersion cost equivalents with the EUV-only method. There is also still significant risk with EUV metal mask defect rates. The processing steps described above in FIGs. 3-22 use immersion only for the metal mask and EUV for the CUT mask, which is significantly lower risk and in practice today. Using the processing steps described above in FIGs. 3-22, standard cells route efficiently if they have triplet path groupings for each n-ch and p-ch device, or 6 total tracks to route the gate and source/drain connections.
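The mask-cost comparison above can be written out explicitly; this is only a sketch using the 3-4x EUV-to-immersion cost ratio quoted above:

```latex
\begin{align*}
\text{Proposed flow: } & 2\ \text{immersion} + 1\ \text{EUV}
  \approx 2 + (3\ \text{to}\ 4) = 5\ \text{to}\ 6\ \text{immersion-mask cost equivalents} \\
\text{EUV-only flow: } & 3\ \text{EUV}
  \approx 3 \times (3\ \text{to}\ 4) = 9\ \text{to}\ 12\ \text{immersion-mask cost equivalents}
\end{align*}
```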
Over-scaling these tracks using the above processing steps makes that possible at less cost compared to EUV, and reduces or eliminates added area bloat through CPP slips or added area to complete complex cells. This ultimately will reduce area and power at 5nm and 3nm.[0099] A novel Immersion Lithography process is described as an alternative to EUV that can achieve sub-EUV patterning capability. Sub-EUV patterning is possible but will be very expensive compared to the approach in this disclosure. EUV mask blank defectivity is still very high and makes metal layer masks difficult to print defect free compared to contact, via and cut masks. Ultimately the mask blank defectivity will be solved, but it is a question of when and schedule. The primary motivation is cost reduction for sub-EUV metal mask patterning. Secondary is potential pattern flexibility and better line width roughness control and reduced variability.[00100] It is noted that one or more of the above-described embodiments include software. In such embodiments, the program instructions that implement the methods and/or mechanisms are conveyed or stored on a computer readable medium. Numerous types of media which are configured to store program instructions are available and include hard disks, floppy disks, CD-ROM, DVD, flash memory, Programmable ROMs (PROM), random access memory (RAM), and various other forms of volatile or non-volatile storage. Generally speaking, a computer accessible storage medium includes any storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium includes storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM, or DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-Ray. Storage media further includes volatile or non-volatile memory media such as RAM (e.g. synchronous dynamic RAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.)
SDRAM, low-power DDR (LPDDR2, etc.) SDRAM, Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, Flash memory, non-volatile memory (e.g. Flash memory) accessible via a peripheral interface such as the Universal Serial Bus (USB) interface, etc. Storage media includes microelectromechanical systems (MEMS), as well as storage media accessible via a communication medium such as a network and/or a wireless link.[00101] Additionally, in various embodiments, program instructions include behavioral-level descriptions or register-transfer level (RTL) descriptions of the hardware functionality in a high level programming language such as C, or a hardware design language (HDL) such as Verilog, VHDL, or database format such as GDS II stream format (GDSII). In some cases the description is read by a synthesis tool, which synthesizes the description to produce a netlist including a list of gates from a synthesis library. The netlist includes a set of gates, which also represent the functionality of the hardware including the system. The netlist is then placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks are then used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to the system. Alternatively, the instructions on the computer accessible storage medium are the netlist (with or without the synthesis library) or the data set, as desired. Additionally, the instructions are utilized for purposes of emulation by a hardware-based emulator from such vendors as Cadence®, EVE®, and Mentor Graphics®.[00102] Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
A method and a sensor arrangement provide capacitive sensor detection with at least one capacitive sensor comprising a transmitting electrode and a receiving electrode. A stimulus is generated at the transmitting electrode, a signal is received from the receiving electrode, and data packets are generated, each packet comprising a plurality of samples. The plurality of samples are weighted by providing less gain at a beginning and end of each packet with respect to a center of each packet; and the weighted samples are integrated to generate an output signal for each packet.
CLAIMS1. A method for providing capacitive sensor detection with at least one capacitive sensor comprising a transmitting electrode and a receiving electrode, the method comprising: generating a stimulus at the transmitting electrode,receiving a signal from the receiving electrode and generating data packets, each packet comprising a plurality of samples;weighting the plurality of samples by providing less gain at a beginning and end of each packet with respect to a center of each packet; andintegrating the weighted samples to generate an output signal for each packet.2. The method according to claim 1, wherein a stimulus comprises a sequence of pulses.3. The method according to claim 2, wherein each pulse alternates between ground and a supply voltage.4. The method according to one of the preceding claims, wherein a gain distribution is symmetrical with respect to the center of each packet and a gain distribution curve is selected from a group of gain curves consisting of a Gaussian curve, a Hamming window, a Hanning window, and a Blackman window.5. The method according to one of the preceding claims, wherein weighting is performed by applying gain to the analog signals received from the columns or rows. 6. The method according to one of the preceding claims, wherein weighting is performed by applying gain to the digital signals during post processing of each packet.7. The method according to one of the preceding claims, wherein the capacitive sensor is a touch sensor.8. The method according to claim 7, wherein a plurality of touch sensors are arranged in a matrix comprising columns and rows and packets of samples are sampled in parallel from each column or row. 9. The method according to claim 7, wherein a plurality of touch sensors are arranged in a matrix comprising columns and rows and packets of samples of different columns/rows are sampled sequentially using multiplexing.10. 
The method according to claim 7, wherein a plurality of touch sensors are formed by horizontal and vertical electrodes arranged in a matrix.11. The method according to claim 7, wherein a plurality of touch sensors are arranged in a matrix and wherein horizontal and vertical electrodes of the matrix are arranged in different layers.12. The method according to one of the preceding claims, wherein four receiving electrodes are associated with the transmitting electrode and form a three-dimensional position detection sensor. 13. The method according to claim 12, wherein the four receiving electrodes are arranged in a frame-like fashion.14. The method according to claim 12 or 13, wherein the four receiving electrodes surround a display or a touchpad sensor.15. A sensor arrangement with at least one capacitive sensor comprising:a transmitting electrode configured to receive a stimulus,a receiving electrode capacitively coupled with the transmitting electrode and configured to receive a signal from the transmitting electrode, andan evaluation circuit coupled with the receiving electrode and configured to generate data packets, each packet comprising a plurality of samples, wherein the plurality of samples are weighted by providing less gain at a beginning and end of each packet with respect to a center of each packet, and wherein the evaluating circuit is further configured to integrate the weighted samples to generate an output signal for each packet.16. The sensor arrangement according to claim 15, wherein a packet of the stimulus comprises a sequence of pulses.17. The sensor arrangement according to claim 16, wherein each pulse alternates between ground and a supply voltage.18. 
The sensor arrangement according to one of claims 15 - 17, wherein a gain distribution is symmetrical with respect to the center of each packet and a gain distribution curve is selected from a group of gain curves consisting of a Gaussian curve, a Hamming window, a Hanning window, and a Blackman window.19. The sensor arrangement according to one of claims 15 - 18, wherein gain is applied to the analog signals received from the receiving electrode. 20. The sensor arrangement according to one of claims 15 - 19, wherein gain is applied to the digital signals during post processing of each packet.21. The sensor arrangement according to one of claims 15 - 20, wherein a plurality of touch sensors are arranged in a matrix comprising columns and rows and packets of samples are sampled in parallel from each column or row.22. The sensor arrangement according to one of claims 15 - 21, wherein the capacitive sensor is a touch sensor.23. The sensor arrangement according to claim 22, wherein a plurality of touch sensors are arranged in a matrix comprising columns and rows and packets of samples of different columns/rows are sampled sequentially using multiplexing.24. The sensor arrangement according to claim 22, wherein a plurality of touch sensors are formed by horizontal and vertical electrodes arranged in a matrix.25. The sensor arrangement according to claim 22, comprising a plurality of touch sensors arranged in a matrix, wherein horizontal and vertical electrodes of the matrix are arranged in different layers.26. The sensor arrangement according to one of claims 15 - 25, wherein four receiving electrodes are associated with the transmitting electrode and form a three-dimensional position detection sensor.27. The sensor arrangement according to claim 26, wherein the four receiving electrodes are arranged in a frame-like fashion.28. The sensor arrangement according to claim 26 or claim 27, wherein the four receiving electrodes surround a display or a touchpad sensor.
Capacitance Measurement Device With Reduced Noise

RELATED PATENT APPLICATION
This application claims priority to commonly owned United States Provisional Patent Application No. 62/238,318; filed October 7, 2015; which is hereby incorporated by reference herein for all purposes.

TECHNICAL FIELD
The present disclosure relates to methods and systems for capacitance measurement, in particular capacitance measurement with reduced noise.

BACKGROUND
Projected capacitive sensors are often incorporated in touch screens, touch pads or buttons. Similar sensors are used in non-touching three-dimensional position detection sensor arrangements. These sensors use receiving electrodes and in some embodiments also emitting electrodes. When using two electrodes, one electrode acts as a transmitter and the other electrode as a receiver. A matrix can be formed to allow for a plurality of keys to share transmitting and receiving lines. In practice, the measurement system connected to the receiving electrodes is then often used in a time multiplexing manner. To keep a good responsiveness to user inputs, projected capacitive devices must scan quickly several locations of a mesh of electrodes. For example, the standardized test "IEC61000-4-6 Immunity to Conducted Disturbances" reveals a common problem of projected capacitive sensors: to acquire a weak signal from the receive electrode at a given frequency when a disturbing noise overlaps the signal with a slightly different frequency.
Furthermore, the requirement for short scan time exacerbates this problem of distinguishing signal and noise occupying nearby frequencies.

SUMMARY
According to an embodiment, a method for providing capacitive sensor detection with at least one capacitive sensor comprising a transmitting electrode and a receiving electrode may comprise the steps of: generating a stimulus at the transmitting electrode, receiving a signal from the receiving electrode and generating data packets, each packet comprising a plurality of samples; weighting the plurality of samples by providing less gain at a beginning and end of each packet with respect to a center of each packet; and integrating the weighted samples to generate an output signal for each packet. According to a further embodiment, a stimulus may comprise a sequence of pulses. According to a further embodiment, each pulse may alternate between ground and a supply voltage. According to a further embodiment, a gain distribution can be symmetrical with respect to the center of each packet and a gain distribution curve is selected from a group of gain curves consisting of a Gaussian curve, a Hamming window, a Hanning window, and a Blackman window. According to a further embodiment, weighting can be performed by applying gain to the analog signals received from the columns or rows. According to a further embodiment, weighting can be performed by applying gain to the digital signals during post processing of each packet. According to a further embodiment, the capacitive sensor can be a touch sensor. According to a further embodiment, a plurality of touch sensors can be arranged in a matrix comprising columns and rows and packets of samples are sampled in parallel from each column or row. According to a further embodiment, a plurality of touch sensors can be arranged in a matrix comprising columns and rows and packets of samples of different columns/rows are sampled sequentially using multiplexing.
According to a further embodiment, a plurality of touch sensors can be formed by horizontal and vertical electrodes arranged in a matrix. According to a further embodiment, a plurality of touch sensors can be arranged in a matrix, wherein horizontal and vertical electrodes of the matrix are arranged in different layers. According to a further embodiment, four receiving electrodes can be associated with the transmitting electrode and form a three-dimensional position detection sensor. According to a further embodiment, the four receiving electrodes can be arranged in a frame-like fashion. According to a further embodiment, the four receiving electrodes may surround a display or a touchpad sensor. According to another embodiment, a sensor arrangement with at least one capacitive sensor may comprise a transmitting electrode configured to receive a stimulus, a receiving electrode capacitively coupled with the transmitting electrode and configured to receive a signal from the transmitting electrode, and an evaluation circuit coupled with the receiving electrode and configured to generate data packets, each packet comprising a plurality of samples, wherein the plurality of samples are weighted by providing less gain at a beginning and end of each packet with respect to a center of each packet, and wherein the evaluating circuit is further configured to integrate the weighted samples to generate an output signal for each packet. According to a further embodiment of the sensor arrangement, a packet of the stimulus may comprise a sequence of pulses. According to a further embodiment of the sensor arrangement, each pulse may alternate between ground and a supply voltage. According to a further embodiment of the sensor arrangement, a gain distribution can be symmetrical with respect to the center of each packet and a gain distribution curve is selected from a group of gain curves consisting of a Gaussian curve, a Hamming window, a Hanning window, and a Blackman window.
According to a further embodiment of the sensor arrangement, gain can be applied to the analog signals received from the receiving electrode. According to a further embodiment of the sensor arrangement, gain can be applied to the digital signals during post processing of each packet. According to a further embodiment of the sensor arrangement, a plurality of touch sensors can be arranged in a matrix comprising columns and rows and packets of samples are sampled in parallel from each column or row. According to a further embodiment of the sensor arrangement, the capacitive sensor can be a touch sensor. According to a further embodiment of the sensor arrangement, a plurality of touch sensors can be arranged in a matrix comprising columns and rows and packets of samples of different columns/rows are sampled sequentially using multiplexing. According to a further embodiment of the sensor arrangement, a plurality of touch sensors can be formed by horizontal and vertical electrodes arranged in a matrix. According to a further embodiment of the sensor arrangement, the sensor arrangement may comprise a plurality of touch sensors arranged in a matrix, wherein horizontal and vertical electrodes of the matrix are arranged in different layers. According to a further embodiment of the sensor arrangement, four receiving electrodes can be associated with the transmitting electrode and form a three-dimensional position detection sensor. According to a further embodiment of the sensor arrangement, the four receiving electrodes can be arranged in a frame-like fashion. According to a further embodiment of the sensor arrangement, the four receiving electrodes may surround a display or a touchpad sensor.

BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 shows an electrode matrix of a touch sensor arrangement;
Fig. 2 shows a timing diagram of stimulus and received signals according to a first embodiment;
Fig.
3 shows a timing diagram of stimulus and received signals according to a second embodiment;
Fig. 4 shows a first embodiment of a weighting function applied to the received signals;
Fig. 5 shows an exemplary circuit arrangement of a touch sensor according to an embodiment;
Fig. 6 shows a timing diagram of various signals according to Fig. 5;
Fig. 7 shows demodulation and weighting according to an embodiment;
Fig. 8 shows spectral analysis with and without using weighting according to various embodiments;
Fig. 9 shows an embodiment of a touchless sensor arrangement; and
Fig. 10 shows an embodiment of a combined touchless and touch sensor arrangement.

DETAILED DESCRIPTION
According to various embodiments, a proposed solution is to acquire, for a given selection of active receiving or active emitting electrodes, multiple measurement samples and to integrate these samples with varying gain. One sample is, for example, a voltage sample converted by an A/D circuit, but the concept is not limited to digital; it also applies to analogue discrete-time circuits like switched-capacitor circuits and charge integration circuits. These multiple samples form a packet, and packets are delimited by a change of the selection of active electrodes. According to various embodiments, for example, the following method is proposed: following a change of active electrodes, the system gradually increases the importance of measured samples until the middle of the packet and then gradually reduces their importance before the next change of electrodes. Therefore, samples collected after or before a change contribute less to the total result. When working with an A/D converter, a solution can be implemented with a weighted average of the collected samples, where weight values come from a look-up table. It is surprising and remarkable that frequency separation of noise and signal can be achieved after the measurement is done, as a pure mathematical post-processing operation.
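As a rough numerical illustration of why edge-weighted packets reject off-frequency noise better than a plain average, the following sketch (all function and variable names are illustrative, not from the disclosure) integrates one packet containing only an interferer at a slightly different frequency, once with uniform gain and once with a Hann-type raised-cosine gain distribution:

```python
import math

def weighted_packet_sum(samples, weights):
    """Integrate one packet of samples with a per-sample gain weight."""
    return sum(a * s for a, s in zip(weights, samples))

def hann_weights(n):
    """Raised-cosine gain distribution: less gain at the packet edges,
    full gain at the center of the packet."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * k / (n - 1)) for k in range(n)]

n = 64                  # samples per packet (illustrative packet length)
noise_freq = 10.5 / n   # interferer slightly off the demodulated signal frequency

# Packet containing only the off-frequency interferer (worst case for leakage).
noise = [math.sin(2 * math.pi * noise_freq * k) for k in range(n)]

uniform = [1.0] * n
hann = hann_weights(n)

# Normalize each result by its total gain so the two estimates are comparable.
leak_uniform = abs(weighted_packet_sum(noise, uniform)) / sum(uniform)
leak_hann = abs(weighted_packet_sum(noise, hann)) / sum(hann)
```

With these illustrative numbers, the Hann-weighted integration leaks far less of the interferer into the packet result than the uniform average, which is the effect shown experimentally in Fig. 8.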
The same operation could be carried out in the analogue domain by varying, for each sample, the reference level of the A/D converter; more generally, an amplifier with a variable gain located before the signal integration can also be used to allow for a proper implementation. In the field of projected capacitance sensing, noise and lack of sensitivity are prevalent concerns. A common measure is to average the result over more ADC samples. Since acquiring more ADC samples costs power and time, the intuition is to use the full contribution of each sample, with the hope of getting more total signal. However, against intuition, the various embodiments propose to strongly reduce (but not entirely cancel) the contribution of head and tail samples. Figure 1 shows a typical exemplary projected capacitive device with one or more receiver electrodes (rxi), and one or more optional stimuli nodes (txj). Typically, the transmitting lines txj and the receiving lines rxi are arranged in a matrix such that the nodes where a transmission line crosses a receiving line form a capacitor that serves as the actual sensor. The matrix reduces the number of lines that would be otherwise needed. The example shown in Fig. 1 uses two receiving lines and four transmitting lines. However, any other number of lines may be used depending on the design. A measurement or evaluation circuit RX in the exemplary embodiment of Fig. 1 is connected to two receiver electrodes rx0 and rx1, and stimulus circuits TX, such as, for example, I/O ports of a microcontroller, are connected to four emitting electrodes tx0, tx1, tx2, and tx3 in this example. Figure 2 shows as an example bursts of stimuli pulses applied in a scan sequence to, for example, three emitting electrodes tx0, tx1, tx2, and corresponding changes at times t0, t1, t2, ... of the active emitting electrode tx0, tx1, tx2, which delimit packets of samples. Note that samples do not necessarily synchronize with stimuli pulses.
The fourth transmission line is not used here, merely for a better overview. In figure 2, one assumes each receiving electrode (rx0, rx1) has its own measurement circuit so measurements can be made in parallel. However, a single measurement circuit with multiplexer circuitry may also be possible but would require a repeated stimulus for each line. Figure 3 shows such an alternative to figure 2. The measurement circuit is here multiplexed to different receiving electrodes, and packets of samples p0, p1, p2 are delimited by changes of the active receiving electrode rx0, rx1 as well as the active emitting electrodes tx0, tx1. Figure 4 shows a packet of samples acquired between a start and a stop time ti and ti+1, respectively. Here, each sample is weighted by a gain a (a0, a1, ..., ae). The resulting output is shown as the weighted sum. It is shown how weights applied to samples near the transitions ti and ti+1 get less importance in absolute value (a0, ae) compared to samples in the middle of the packet (am, an). According to some embodiments, a Gaussian weight curve may be applied. Other distribution weight curves may apply, such as Hamming, Hanning, Blackman etc., as long as the first and last measurements receive less gain than a center value. Figure 5 shows an example of a projected capacitive system with a single capacitive sensor 530, 540, for example, when touched by a finger 550 during the acquisition of one packet. In non-touching embodiments, entering the detection space will influence the signals received at one or more electrodes. According to some embodiments, sensor electrodes 530, 540 may be part of a matrix of electrodes. The capacitive sensor 530, 540 is coupled with an evaluation circuit comprising, for example, a multiplexer 505, a sample and hold circuit Ss, 510, an analog-to-digital converter 520 and a processing unit 570. In case of a single sensor, multiplexer 505 is of course not needed unless the ADC 520 is used to sample other analog signals.
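The multiplexed scan of Figs. 2 and 3, where a packet is delimited by each change of the active electrode pair, can be sketched as follows (a minimal illustration; the `measure` callback and all names are hypothetical, not from the disclosure):

```python
def scan_matrix(n_tx, n_rx, samples_per_packet, measure):
    """Acquire one packet per (tx, rx) electrode pair using a single,
    multiplexed measurement circuit. measure(tx, rx) is a hypothetical
    callback returning one ADC sample; each change of the selected
    (tx, rx) pair delimits a new packet of samples."""
    packets = {}
    for tx in range(n_tx):          # select the active emitting electrode
        for rx in range(n_rx):      # multiplex the measurement circuit
            packets[(tx, rx)] = [measure(tx, rx)
                                 for _ in range(samples_per_packet)]
    return packets
```

Each packet collected this way would then be demodulated, weighted, and integrated as described below.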
The transmitting electrode 530 or a selected transmitting electrode from a matrix is connected to a source generating a stimulus tx, and the receive electrode 540 or one of the receiving electrodes from a matrix is selected, from which a signal rx is fed, for example, by an analog multiplexer 505 to a sample and hold circuit with switch Ss and sample capacitor 510. The stimulus can be a series of pulses, wherein, for example, each pulse varies between ground and a supply voltage. A duty cycle of 50% may be used for a sequence of pulses. However, other duty cycles may apply. According to one embodiment, the pulses may be synchronized with the charging/discharging switches Sp, Sn as will be explained in more detail below. The sampled signal is then converted by an analog-to-digital converter 520 into a digital value which is fed to a processing unit 570 for further processing. In this embodiment, a finger 550 touches the cover material 560 above the electrodes 530, 540 and behaves also as a source of noise (Vnoise) which will influence the received voltage (Vrx). However, other arrangements, for example with exposed electrodes, are possible. Applications using the same principles for three-dimensional position detection will be discussed below. According to some embodiments, the receiving electrode 540 can also be momentarily connected to Vdd or to Gnd by switches Sn, Sp to generate a pair of sample values as will be explained below in more detail. Figure 6 shows a timing diagram of various signals of one embodiment which may, for example, use the arrangement shown in Fig. 5. Fig. 6 shows one embodiment of a switching sequence and acquisition process. In each sampling cycle, first, the receiving electrode 540 is momentarily connected to ground by switch Sn while signal Sn is high, and the sample and hold is tracking while signal Ss is high.
When Sn is disconnected after signal Sn returns to low, a positive stimulus tx is applied on the emitting electrode 530, causing Vrx to rise. In addition to the voltage change caused by the stimulus tx, Vrx also changes (as long as the Sp and Sn switches are off) due to variation of the potential of the finger with respect to the ground. The sample and hold blocks the signal when signal Ss goes low, and a first or odd sample is acquired and converted. Then, while tx is still high and after the falling edge of Ss, switch Sp is closed for a short period by a positive pulse of signal Sp. Signal Ss then returns high, placing the track and hold circuit again in tracking mode. Shortly thereafter, the stimulus tx returns to ground and thereafter, with the falling edge of Ss, a second or even sample is acquired. In this example, sample values are comprised between 0 and 4095. An arbitrary pivot value at 2048 is used to reference the amplitude of the samples. Fig. 6 shows that the signal acquired is alternately switched between ground and Vdd and altered from these starting points by the stimulus tx and the noise Vnoise. Thus, by charging the receiving electrode 540 alternately to ground or Vdd, an odd and an even sample is acquired. Depending on whether the noise signal is rising or falling between the falling edge of either Sn or Sp and the falling edge of Ss, its contribution is either added to or subtracted from the voltage signal Vrx as shown in Fig. 6. Figure 7 shows the signals acquired according to the timing diagram of figure 6 after demodulation. The measurement samples are demodulated in this example by replacing the odd samples by new values equal to 2048-value, and the even samples by new values equal to +value-2048. This demodulation operation corrects the fact that the stimulus tx applied on transmitting electrode 530 alternates positive and negative edges.
Finally, this figure illustrates how the samples near the beginning and end of the packet are mathematically multiplied by a smaller weight compared to samples in the middle of the packet, as shown with the result after weighting in the bottom curve of Fig. 7. The demodulation process is specific to the way of applying the stimulus tx. Other sampling schemes may apply. However, it shows that despite a change of the sign of some samples, their importance, or weight, still follows a gradually increasing and then decreasing importance. Figure 8 shows an experimental comparison of the noise level recorded without using the principles of the various embodiments (dashed stroke), and using the principles of the various embodiments (solid stroke). As can be seen, the noise floor is significantly improved. As discussed with respect to Fig. 1 and Fig. 5, the principles of the various embodiments can be applied to various capacitance measurement methods such as self and mutual capacitance measurements as used in many touch sensor applications. Fig. 9 shows an example of a measurement sensor arrangement that can be used in a non-touching sensor application. Here, a substrate 900 may comprise a transmitting electrode 920 and a plurality, here four, receiving electrodes 910a, b, c, d. While Fig. 9 shows a frame-like support structure 900 that can, for example, be arranged around a display, keyboard, or trackpad, other shapes and forms for the substrate may apply. The transmitting electrode 920 may cover the entire backside of the substrate 900 and the receiving electrodes 910a, b, c, d may be arranged on the top side. Such an arrangement can be provided by a double-sided printed circuit board wherein the electrodes are formed by the copper layers. However, a single-sided printed circuit board may also be used, wherein the transmitting electrode may simply surround the receiving electrodes.
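The demodulation around the 2048 pivot described for Fig. 7 can be sketched as below. This is a minimal sketch; whether a packet starts with the "odd" (ground-referenced) or "even" (Vdd-referenced) sample is an assumption here, and the names are illustrative:

```python
PIVOT = 2048  # mid-scale pivot of the 0..4095 sample range described above

def demodulate(samples):
    """Replace odd samples by PIVOT - value and even samples by
    value - PIVOT, undoing the alternating polarity of the stimulus tx.
    Assumption: the packet starts with an odd (ground-referenced) sample."""
    out = []
    for i, value in enumerate(samples):
        if i % 2 == 0:              # odd sample (1st, 3rd, ...)
            out.append(PIVOT - value)
        else:                       # even sample (2nd, 4th, ...)
            out.append(value - PIVOT)
    return out
```

After demodulation, the gain distribution of Fig. 4 (e.g., a Gaussian or Hann-type curve) would be applied to the demodulated values before integrating the packet.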
All electrodes may be coupled with a gesture detection controller 940 which detects predefined gestures and touches and generates commands that are fed to a main processing system 930. Fig. 10 shows another embodiment of a similar system 1000 combined with a touch pad 1020. Here the electrodes A, B, C, and D surround the touchpad 1020, which may be similar to the embodiment shown in Fig. 1. The touchpad 1020 may be coupled with a touch controller 1010, whereas the electrodes A, B, C, D may be coupled with a 3D-gesture controller 1030. A transmission electrode (not shown) may be arranged below the sensor arrangement 1000 and coupled with the 3D-gesture controller 1030. The signals received from the various electrodes 910a, b, c, d of Fig. 9 or electrodes A, B, C, D of Fig. 10 may be received and converted in parallel or using a time-multiplexing scheme within the respective controller. In either case, the same principles for evaluating sequential samples as discussed above also apply to these non-touching capacitive electrode sensor arrangements.
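The time-multiplexing scheme mentioned above can be sketched as a simple round-robin over the receiving electrodes; the function and parameter names here are hypothetical, as the source does not specify an implementation.

```python
# Hypothetical sketch of time-multiplexed acquisition (all names assumed):
# one converter visits each receiving electrode in turn and collects a
# packet of sequential samples per electrode.
def scan_electrodes(sample_channel, electrodes, samples_per_packet):
    """sample_channel(e) returns one converted sample for electrode e."""
    packets = {}
    for e in electrodes:
        packets[e] = [sample_channel(e) for _ in range(samples_per_packet)]
    return packets
```

Each per-electrode packet can then be demodulated and weighted in the same way as in the single-electrode case discussed above.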
An imaging device can receive an image of a portion of an environment. The environment can include an object, such as a hand or a display. The imaging device can identify a data stream from an external device, for instance by detecting the data stream in the image or by receiving the data stream wirelessly from the external device. The imaging device can detect a condition based on the image and/or the data stream, for instance by detecting that the object is missing from the image, by detecting a low resource availability at the imaging device, and/or by detecting visual media content displayed by a display in the image. Upon detecting the condition, the imaging device automatically determines a location of the object (or a portion thereof) using the data stream and/or the image. The imaging device generates and/or outputs content that is based on the location of the object.
CLAIMS

WHAT IS CLAIMED IS:

1. An apparatus for processing image data, the apparatus comprising: a memory; and one or more processors coupled to the memory, the one or more processors configured to: receive an image of a portion of an environment captured by an image sensor, wherein the environment includes an object; identify a data stream from an external device; detect a condition based on at least one of the image, the data stream, and an operational status of the apparatus; in response to detecting the condition, determine a location of the object in the environment based on at least one of the image and the data stream; and generate an output based on the location of the object in the environment.

2. The apparatus of claim 1, wherein, to detect the condition based on the image, the one or more processors are configured to determine that the object is missing from a portion of the environment in the image.

3. The apparatus of claim 2, wherein, to determine that the object is missing from the portion of the environment in the image, the one or more processors are configured to determine that at least a part of the object is occluded in the image.

4. The apparatus of claim 2, wherein the external device includes a second image sensor, wherein the data stream includes a second image of a second portion of the environment, and wherein determining the location of the object in the environment is based at least in part on a depiction of the object in the second image.

5. The apparatus of claim 4, wherein the portion of the environment and the second portion of the environment overlap.

6. The apparatus of claim 1, wherein, to detect the condition based on the operational status of the apparatus, the one or more processors are configured to determine that an availability of a resource is below a threshold.

7.
The apparatus of claim 6, wherein, to determine that the availability of the resource is below the threshold, the one or more processors are configured to determine that a battery level of a battery is below a battery level threshold.

8. The apparatus of claim 6, wherein, to determine that the availability of the resource is below the threshold, the one or more processors are configured to determine that an available bandwidth is below a bandwidth threshold.

9. The apparatus of claim 1, wherein, to detect the condition based on the operational status of the apparatus, the one or more processors are configured to receive user input corresponding to offloading processing to the external device.

10. The apparatus of claim 1, wherein, to generate the output, the one or more processors are configured to generate content.

11. The apparatus of claim 10, wherein the one or more processors are configured to: output the content based on the location of the object in the environment.

12. The apparatus of claim 11, further comprising: a display; wherein, to output the content, the one or more processors are configured to send the content to the display to be displayed.

13. The apparatus of claim 1, wherein the one or more processors are configured to: detect an additional condition based on at least one of an additional image captured by the image sensor, the data stream, and the operational status of the apparatus; and in response to detecting the additional condition, perform a function previously performed by the external device.

14. The apparatus of claim 1, wherein, to generate the output, the one or more processors are configured to: control the apparatus based on a user input.

15. The apparatus of claim 1, wherein, to detect the condition based on the image, the one or more processors are configured to determine one or more lighting conditions in the image.

16.
The apparatus of claim 15, wherein, to determine the one or more lighting conditions in the image, the one or more processors are configured to determine that one or more light values of the image are below a lighting threshold.

17. The apparatus of claim 1, wherein, to determine the location of the object in the environment, the one or more processors are configured to: send a request for the external device to identify the location of the object in the environment; and receive a response from the external device identifying the location of the object in the environment.

18. The apparatus of claim 1, wherein the object is a display of an external display device.

19. The apparatus of claim 18, wherein, to detect the condition based on the image, the one or more processors are configured to identify, in the image, visual media content displayed on the display of the external display device.

20. The apparatus of claim 18, wherein, to generate the output, the one or more processors are configured to generate content, and wherein the content virtually extends the display of the external display device.

21. The apparatus of claim 1, wherein, to generate the output, the one or more processors are configured to: generate content at least in part by overlaying virtual content over a region of the image, wherein the region of the image is based on the location of the object in the environment.

22. The apparatus of claim 21, wherein the object is a display of an external display device, and wherein the region of the image is adjacent to a depiction of the display of the external display device in the image.

23. The apparatus of claim 21, wherein the object is a hand of a user of the apparatus, and wherein the hand is at least partially adjacent to the region of the image.

24.
The apparatus of claim 1, wherein the one or more processors are further configured to: in response to detecting the condition, generate a merged dataset at least by combining data from the data stream with the image captured by the image sensor, wherein determining the location of the object is based at least in part on the merged dataset.

25. The apparatus of claim 1, wherein the apparatus is a head-mounted display (HMD).

26. The apparatus of claim 1, further comprising: an audio output device; wherein, to generate the output, the one or more processors are configured to generate content; and wherein the one or more processors are configured to send the content to the audio output device to be played.

27. A method for processing image data, comprising: receiving an image of a portion of an environment captured by an image sensor, wherein the environment includes an object; identifying, by a device, a data stream from an external device; detecting a condition based on at least one of the image, the data stream, and an operational status of the device; in response to detecting the condition, determining a location of the object in the environment based on at least one of the image and the data stream; and generating an output based on the location of the object in the environment.

28. The method of claim 27, wherein detecting the condition based on the image includes determining that the object is missing from a portion of the environment in the image.

29. The method of claim 28, wherein determining that the object is missing from the portion of the environment in the image includes determining that at least a part of the object is occluded in the image.

30.
The method of claim 28, wherein the external device includes a second image sensor, wherein the data stream includes a second image of a second portion of the environment, and wherein determining the location of the object in the environment is based at least in part on a depiction of the object in the second image.

31. The method of claim 30, wherein the portion of the environment and the second portion of the environment overlap.

32. The method of claim 27, wherein detecting the condition based on the operational status of the device includes determining that an availability of a resource is below a threshold.

33. The method of claim 32, wherein determining that the availability of the resource is below the threshold includes determining that a battery level of a battery is below a battery level threshold.

34. The method of claim 32, wherein determining that the availability of the resource is below the threshold includes determining that an available bandwidth is below a bandwidth threshold.

35. The method of claim 27, wherein detecting the condition based on the operational status of the device includes receiving user input corresponding to offloading processing to the external device.

36. The method of claim 27, wherein generating the output includes generating content.

37. The method of claim 36, further comprising outputting the content based on the location of the object in the environment.

38. The method of claim 37, wherein outputting the content includes sending the content to a display of the device to be displayed.

39. The method of claim 27, further comprising: detecting an additional condition based on at least one of an additional image captured by the image sensor, the data stream, and the operational status of the device; and in response to detecting the additional condition, performing a function previously performed by the external device.

40. The method of claim 27, wherein generating the output includes controlling the device based on a user input.

41.
The method of claim 27, wherein detecting the condition based on the image includes determining one or more lighting conditions in the image.

42. The method of claim 41, wherein determining the one or more lighting conditions in the image includes determining that one or more light values of the image are below a lighting threshold.

43. The method of claim 27, wherein determining the location of the object in the environment includes: sending a request for the external device to identify the location of the object in the environment; and receiving a response from the external device identifying the location of the object in the environment.

44. The method of claim 27, wherein the object is a display of an external display device.

45. The method of claim 44, wherein detecting the condition based on the image includes identifying, in the image, visual media content displayed on the display of the external display device.

46. The method of claim 44, wherein generating the output includes generating content, and wherein the content virtually extends the display of the external display device.

47. The method of claim 27, wherein generating the output includes: generating content at least in part by overlaying virtual content over a region of the image, wherein the region of the image is based on the location of the object in the environment.

48. The method of claim 47, wherein the object is a display of an external display device, and wherein the region of the image is adjacent to a depiction of the display of the external display device in the image.

49. The method of claim 47, wherein the object is a hand of a user of the device, and wherein the hand is at least partially adjacent to the region of the image.

50.
The method of claim 27, further comprising: in response to detecting the condition, generating a merged dataset at least by combining data from the data stream with the image captured by the image sensor, wherein determining the location of the object is based at least in part on the merged dataset.

51. The method of claim 27, wherein generating the output includes generating content, and further comprising sending the content to an audio output device to be played.
COLLABORATIVE TRACKING

FIELD

[0001] The present disclosure generally relates to image processing. For example, aspects of the disclosure relate to systems and techniques for combining data from multiple devices to perform object tracking within an environment and provide output based on the tracking.

BACKGROUND

[0002] An extended reality (XR) device is a device that displays an environment to a user, for example through a head-mounted display (HMD), glasses, a mobile handset, or other device. The environment is at least partially different from the real-world environment in which the user and the device are located, and may for instance include virtual content. The user can generally change their view of the environment interactively, for example by tilting or moving the XR device. Virtual reality (VR), augmented reality (AR), and mixed reality (MR) are examples of XR.

[0003] XR devices can include one or more image sensors, for instance within one or more cameras. For example, cameras in XR devices can be used for capturing image data of a real-world environment in a direction in which a user is looking and from a perspective of the user's location. Image sensors in XR devices can also be used to capture image data for tracking purposes (e.g., hand tracking, head tracking, body tracking, etc.).

[0004] An XR device can display a representation of the user's hands in the environment that the XR device displays to the user, so that the user feels as if they are in that environment. Hand tracking can allow the XR device to accurately represent the user's hands in the environment, and can allow the user to interact with real or virtual objects within the environment. However, hand tracking generally requires the user to keep their hands within the field of view (FOV) of the XR device's image sensors. XR devices can suffer from errors if the user's hands exit the FOV or are occluded.
Hand tracking is generally a computationally expensive process that can draw battery power rapidly.

BRIEF SUMMARY

[0005] In some examples, systems and techniques are described for feature tracking based on data from multiple devices. An imaging device, such as an XR device, can make use of one or more data streams from one or more external devices. For instance, an image may be received from an image sensor of the imaging device. The image can be an image of a portion of an environment. The environment includes an object, such as a user's hand or a display screen, though the object may or may not be present in the portion of the environment depicted in the image. The imaging device can identify a data stream from an external device, for instance based on the image (e.g., by identifying the data stream depicted in the image, such as visual media content displayed on an external display device depicted in the image), based on one or more transmissions of the data stream to the imaging device from the external device (e.g., over a wireless network or wired network), based on user input, and/or based on other factors. The imaging device can detect a condition, such as based on the image, the data stream, an operational status of the imaging device, any combination thereof, and/or based on other factors.
In some examples, the condition can be based on the imaging device losing track of the object, the imaging device being low on computational resources (e.g., low on power and/or based on other operational status of the apparatus), the imaging device detecting visual media content (or a representation thereof) within the image, based on a user input or setting that requests using the external device rather than the imaging device (e.g., XR device) when available for a particular function (e.g., displaying content, tracking an object such as a hand, head, or body of a user), based on a user input or setting indicating a preference that a device (e.g., the external device) be used for a particular function when plugged into the imaging device, based on privacy and/or security being a factor (which could also be based on a user input or setting), based on a user input (e.g., a user input requesting that resources be offloaded to the external device, such as a user input requesting to turn off the imaging device, a user input requesting to turn an external device such as a light on or off through a home automation application running on the imaging device, etc.), based on capabilities of an image sensor of the imaging device (e.g., when an infrared (IR) sensor on one device is useful where ambient lighting is inadequate, or when an object being tracked is moving fast and an image sensor with a higher frame rate is more appropriate, etc.), or any combination thereof.

[0006] In some cases, the imaging device can merge the data from the data stream with the image captured by the image sensor, resulting in a merged dataset. Based on detecting the condition, the imaging device can determine a location of at least a part of the object in the environment based on the data stream, the image, and/or the merged dataset. The imaging device can generate an output (e.g., content, a command to control the imaging device, a command to control the external device, etc.).
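A few of the condition checks listed above can be sketched as a small decision helper. All names and thresholds here are illustrative assumptions; the disclosure does not prescribe any particular values or code structure.

```python
# Hedged sketch of condition detection (names and thresholds assumed):
# offload to the external device when the object is lost, a resource is
# low, or the user explicitly requested offloading.
def should_offload(object_in_view, battery_level, available_bandwidth,
                   user_requested_offload,
                   battery_threshold=0.2, bandwidth_threshold=1.0):
    if not object_in_view:                       # object missing/occluded
        return True
    if battery_level < battery_threshold:        # availability of a resource below threshold
        return True
    if available_bandwidth < bandwidth_threshold:
        return True
    return user_requested_offload                # explicit user input
```

In practice any combination of these signals, including privacy/security preferences or sensor-capability comparisons, could feed the same decision.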
The imaging device can output content that is based on the location of at least the part of the object in the environment. In one example, if the object is the user's hand, the content generated and/or output by the imaging device can position a virtual object held by the user's hand accurately based on the location of the user's hand (determined based on the data stream, the image, and/or the merged dataset), even if the user's hand is not depicted in the image. If the object is a display screen and/or visual content displayed on the display screen, the content generated and/or output by the imaging device can position virtual content adjacent to the position of the display screen.

[0007] In one example, an apparatus for image processing is provided. The apparatus includes a memory and one or more processors (e.g., implemented in circuitry) coupled to the memory. The one or more processors are configured to and can: receive an image of a portion of an environment captured by an image sensor, wherein the environment includes an object; identify a data stream from an external device; detect a condition based on at least one of the image, the data stream, and an operational status of the apparatus; in response to detecting the condition, determine a location of the object in the environment based on at least one of the image and the data stream; and generate an output based on the location of the object in the environment.

[0008] In another example, a method of image processing is provided.
The method includes: receiving, by a device, an image of a portion of an environment captured by an image sensor, wherein the environment includes an object; identifying a data stream from an external device; detecting a condition based on at least one of the image, the data stream, and an operational status of the device; in response to detecting the condition, determining a location of the object in the environment based on at least one of the image and the data stream; and generating an output based on the location of the object in the environment. [0009] In another example, a non-transitory computer-readable medium of a device is provided that has stored thereon instructions that, when executed by one or more processors, cause the one or more processors to: receive an image of a portion of an environment captured by an image sensor, wherein the environment includes an object; identify a data stream from an external device; detect a condition based on at least one of the image, the data stream, and an operational status of the device; in response to detecting the condition, determine a location of the object in the environment based on at least one of the image and the data stream; and generate an output based on the location of the object in the environment.[0010] In another example, an apparatus for image processing is provided. 
The apparatus includes: means for receiving an image of a portion of an environment captured by an image sensor, wherein the environment includes an object; means for identifying a data stream from an external device; means for detecting a condition based on at least one of the image, the data stream, and an operational status of the apparatus; means for determining, in response to detecting the condition, a location of the object in the environment based on at least one of the image and the data stream; and means for generating an output based on the location of the object in the environment.

[0011] In some aspects, to detect the condition based on the image, the methods, apparatuses, and computer-readable medium described above further include determining that the object is missing from a portion of the environment in the image.

[0012] In some aspects, to determine that the object is missing from the portion of the environment in the image, the methods, apparatuses, and computer-readable medium described above include determining that at least a part of the object is occluded in the image.

[0013] In some aspects, the external device includes a second image sensor. In some cases, the data stream includes a second image of a second portion of the environment. In such cases, determining the location of the object in the environment can be based at least in part on a depiction of the object in the second image. In some aspects, the portion of the environment and the second portion of the environment overlap.

[0014] In some aspects, to detect the condition based on the operational status of the apparatus, the methods, apparatuses, and computer-readable medium described above include determining that an availability of a resource is below a threshold.
In some aspects, to determine that the availability of the resource is below the threshold, the methods, apparatuses, and computer-readable medium described above include determining that a battery level of a battery is below a battery level threshold.

[0015] In some aspects, to determine that the availability of the resource is below the threshold, the methods, apparatuses, and computer-readable medium described above include determining that an available bandwidth is below a bandwidth threshold.

[0016] In some aspects, to detect the condition based on the operational status of the apparatus, the methods, apparatuses, and computer-readable medium described above include receiving user input corresponding to offloading processing to the external device.

[0017] In some aspects, to generate the output, the methods, apparatuses, and computer-readable medium described above include generating content. In some cases, the methods, apparatuses, and computer-readable medium described above include outputting the content based on the location of the object in the environment.

[0018] In some aspects, to output the content, the methods, apparatuses, and computer-readable medium described above include sending the content to a display (e.g., of the apparatus or the device) to be displayed.

[0019] In some aspects, the methods, apparatuses, and computer-readable medium described above include: detecting an additional condition based on at least one of an additional image captured by the image sensor, the data stream, and the operational status of the apparatus; and in response to detecting the additional condition, performing a function previously performed by the external device.

[0020] In some aspects, to generate the output, the methods, apparatuses, and computer-readable medium described above include controlling the apparatus based on a user input.
[0021] In some aspects, to detect the condition based on the image, the methods, apparatuses, and computer-readable medium described above include determining one or more lighting conditions in the image.

[0022] In some aspects, to determine the one or more lighting conditions in the image, the methods, apparatuses, and computer-readable medium described above include determining that one or more light values of the image are below a lighting threshold.

[0023] In some aspects, to determine the location of the object in the environment, the methods, apparatuses, and computer-readable medium described above include: sending a request for the external device to identify the location of the object in the environment; and receiving a response from the external device identifying the location of the object in the environment.

[0024] In some aspects, the object is a display of an external display device.

[0025] In some aspects, to detect the condition based on the image, the methods, apparatuses, and computer-readable medium described above include identifying, in the image, visual media content displayed on the display of the external display device.

[0026] In some aspects, to generate the output, the methods, apparatuses, and computer-readable medium described above include generating content. In some cases, the content virtually extends the display of the external display device.

[0027] In some aspects, to generate the output, the methods, apparatuses, and computer-readable medium described above include generating content at least in part by overlaying virtual content over a region of the image. In some cases, the region of the image is based on the location of the object in the environment.

[0028] In some aspects, the object is a display of an external display device. In some cases, the region of the image is adjacent to a depiction of the display of the external display device in the image.

[0029] In some aspects, the object is a hand of a user of the apparatus.
In some cases, the hand is at least partially adjacent to the region of the image.

[0030] In some aspects, the methods, apparatuses, and computer-readable medium described above include, in response to detecting the condition, generating a merged dataset at least by combining data from the data stream with the image captured by the image sensor. In some cases, determining the location of the object is based at least in part on the merged dataset.

[0031] In some aspects, to generate the output, the methods, apparatuses, and computer-readable medium described above include generating content. In some cases, the methods, apparatuses, and computer-readable medium described above include transmitting or sending the content to an audio output device (e.g., of the apparatus or the device) to be played.

[0032] In some aspects, each of the apparatuses or devices described above is, can be part of, or can include an extended reality (XR) device (e.g., a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality (MR) device), a smart device or assistant, a vehicle, a mobile device (e.g., a mobile telephone or so-called "smart phone" or other mobile device), a wearable device, a personal computer, a laptop computer, a tablet computer, a server computer, or other device. In some aspects, the apparatus or device includes an image sensor (e.g., a camera) or multiple image sensors (e.g., multiple cameras) for capturing one or more images. In some aspects, the apparatus or device includes one or more displays for displaying one or more images, notifications, and/or other displayable data. In some aspects, the apparatus or device includes one or more speakers, one or more light-emitting devices, and/or one or more microphones. In some aspects, the apparatuses or devices described above can include one or more sensors.
In some cases, the one or more sensors can be used for determining a location of the apparatuses, a state of the apparatuses (e.g., a tracking state, an operating state, a temperature, a humidity level, and/or other state), and/or for other purposes.

[0033] This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to appropriate portions of the entire specification of this patent, any or all drawings, and each claim.

[0034] The foregoing, together with other features and embodiments, will become more apparent upon referring to the following specification, claims, and accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0035] Illustrative embodiments of the present application are described in detail below with reference to the following drawing figures:

[0036] FIG. 1 is a block diagram illustrating an example architecture of an image capture and processing system, in accordance with some examples;

[0037] FIG. 2 is a block diagram illustrating an example architecture of an extended reality (XR) system, in accordance with some examples;

[0038] FIG. 3A is a perspective diagram illustrating a head-mounted display (HMD) that is used as an XR system, in accordance with some examples;

[0039] FIG. 3B is a perspective diagram illustrating the head-mounted display (HMD) of FIG. 3A being worn by a user, in accordance with some examples;

[0040] FIG. 4A is a perspective diagram illustrating a front surface of a mobile handset that includes front-facing cameras and is used as an XR system, in accordance with some examples;

[0041] FIG. 4B is a perspective diagram illustrating a rear surface of a mobile handset that includes rear-facing cameras and is used as an XR system, in accordance with some examples;

[0042] FIG.
5 is a perspective diagram illustrating a user wearing a head-mounted display (HMD) that is used as an XR system and performs hand tracking to determine a gesture-based input based on the hand being in the field of view (FOV) of the HMD, in accordance with some examples;

[0043] FIG. 6A is a perspective diagram illustrating a user wearing a head-mounted display (HMD) that is used as an XR system and that performs hand tracking to determine a gesture-based input based on a position of the hand of the user even though the hand is out of the field of view (FOV) of the HMD, based on the hand being in the FOV of an external camera, in accordance with some examples;

[0044] FIG. 6B is a perspective diagram illustrating a user wearing a head-mounted display (HMD) that is used as an XR system and that performs hand tracking to determine a gesture-based input based on a position of the hand of the user even though an occlusion occludes the hand from the field of view (FOV) of the HMD, based on the hand being in the FOV of an external camera, in accordance with some examples;

[0045] FIG. 7 is a perspective diagram illustrating an external head-mounted display (HMD) device providing assistance with hand-tracking a hand of a user of an HMD that is used as an XR system due to a low battery condition at the HMD, in accordance with some examples;

[0046] FIG. 8A is a perspective diagram illustrating a user wearing a head-mounted display (HMD) that is used as an XR system and that positions virtual content based on the position of a display and/or visual content displayed on the display in the FOV of the HMD;

[0047] FIG.
8B is a perspective diagram illustrating a user wearing a head-mounted display (HMD) that is used as an XR system and that positions a virtual representation of visual content displayed on a display based on a position of the display and/or the visual content even though the display and/or the visual content are out of the field of view (FOV) of the HMD, in accordance with some examples;[0048] FIG. 9 is a flow diagram illustrating operations for processing image data, in accordance with some examples; and[0049] FIG. 10 is a diagram illustrating an example of a computing system for implementing certain aspects described herein.DETAILED DESCRIPTION[0050] Certain aspects and embodiments of this disclosure are provided below. Some of these aspects and embodiments may be applied independently and some of them may be applied in combination as would be apparent to those of skill in the art. In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of embodiments of the application. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive.[0051] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the scope of the application as set forth in the appended claims.[0052] A camera is a device that receives light and captures image frames, such as still images or video frames, using an image sensor. The terms “image,” “image frame,” and “frame” are used interchangeably herein. 
Cameras can be configured with a variety of image capture and image processing settings. The different settings result in images with different appearances. Some camera settings are determined and applied before or during capture of one or more image frames, such as ISO, exposure time, aperture size, f/stop, shutter speed, focus, and gain. For example, settings or parameters can be applied to an image sensor for capturing the one or more image frames. Other camera settings can configure post-processing of one or more image frames, such as alterations to contrast, brightness, saturation, sharpness, levels, curves, or colors. For example, settings or parameters can be applied to a processor (e.g., an image signal processor or ISP) for processing the one or more image frames captured by the image sensor. [0053] An extended reality (XR) device is a device that displays an environment to a user, for example through a head-mounted display (HMD), glasses, a mobile handset, or other device. The displayed environment is at least partially different from the real-world environment in which the user and the device are located, and may for instance include virtual content. In some cases, the environment that the XR device displays to the user can be at least partially virtual. The user can generally change their view of the environment that the XR device displays to the user interactively, for example by tilting the XR device and/or moving the XR device translationally or laterally. Tilting the XR device can include tilts or rotations along the pitch axis, the yaw axis, the roll axis, or a combination thereof. Translational/lateral movements of the XR device can include movements along paths charted within a 3-dimensional volume having 3 perpendicular axes, such as an X axis, a Y axis, and a Z axis. XR devices that only track rotational movement of the XR device can be referred to as XR devices with three degrees of freedom (3DoF).
XR devices that track both rotational and translational movement of the XR device can be referred to as XR devices having six degrees of freedom (6DoF) tracking capabilities.[0054] An XR device can include sensors, such as image sensors, accelerometers, gyroscopes, inertial measurement units (IMUs), or combinations thereof. The XR device can use data captured by these sensors to detect movement of the XR device within the real-world environment, for instance so that the XR device can update the user’s view of the environment interactively based on rotational and/or translational movement of the XR device. Some XR devices can also use data captured by these sensors to detect and/or track features of one or more objects, such as a user’s hands. Even XR devices that display otherwise fully-virtual VR environments to users can still display representations of the user’s own hands in the environment. Displaying representations of the user’s hands in the environment can increase immersion in the environment for users of the XR device, helping the users feel that they are truly inside that environment. Displaying representations of the user’s hands in the environment can also allow the user’s hands to interact with virtual objects and/or interfaces (e.g., menus) in the environment displayed by the XR device.[0055] An XR device can perform object tracking, which can be useful to allow a user to interact with virtual objects and/or interfaces displayed by an XR device using their hands. For instance, an XR device can track one or more hands of a user of the XR device to determine a pose (e.g., position and orientation) of the one or more hands. Hand tracking can be useful to ensure that the pose of representations of the user’s hands used by the XR device (e.g., to determine a gesture- based input, for displaying the representation of the one or more hands, etc.) are accurately synchronized with the real-world positions of the user’s hands. 
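By way of a non-limiting illustration, the distinction drawn above between 3DoF (rotation-only) and 6DoF (rotation plus translation) tracking can be sketched as follows. The class and function names here are illustrative assumptions only; real XR runtimes typically represent orientation with quaternions and use sensor-fusion filters rather than raw Euler-angle integration.

```python
from dataclasses import dataclass

# Hypothetical sketch: 3DoF tracks rotation only; 6DoF adds translation.

@dataclass
class Pose3DoF:
    """Rotation-only tracking: pitch, yaw, roll in radians."""
    pitch: float = 0.0
    yaw: float = 0.0
    roll: float = 0.0

@dataclass
class Pose6DoF(Pose3DoF):
    """Adds translation along the X, Y, and Z axes."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def apply_imu_delta(pose, d_rot=(0.0, 0.0, 0.0), d_pos=(0.0, 0.0, 0.0)):
    """Integrate one sensor step. A 3DoF pose ignores translation."""
    pose.pitch += d_rot[0]
    pose.yaw += d_rot[1]
    pose.roll += d_rot[2]
    if isinstance(pose, Pose6DoF):
        pose.x += d_pos[0]
        pose.y += d_pos[1]
        pose.z += d_pos[2]
    return pose
```

In this sketch a 3DoF pose simply has no translation fields to update, mirroring how a 3DoF device cannot respond to lateral movement.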
Other types of tracking can also be performed, including head tracking, body tracking, torso tracking, tracking of a controller used to interact with an XR device, and/or tracking of other objects. In one example, hand tracking can be useful to allow the XR device to accurately render occlusion of the environment by the user’s hands, occlusion of the hands by one or more real objects in the environment or virtual objects displayed by the XR device, occlusion of any real or virtual objects by the hand(s) based on the user holding the real or virtual objects in their hands, etc. In some cases, hand tracking can stop working properly if the user’s hands exit the field of view of an XR device’s sensors, for instance as illustrated in FIG. 6A discussed below. In other cases, hand tracking can stop working properly if the user’s hands are occluded from view of the XR device’s sensors, for instance as illustrated in FIG. 6B.[0056] Object tracking (e.g., hand tracking, head tracking, body tracking, etc.) is a computationally expensive process that can quickly drain a battery of an XR device. Thus, it may be useful to offload certain hand tracking tasks based on an operational status of the XR device, such as when an XR device is low on battery power or other computational resources (e.g., as illustrated in FIG. 7). In some XR systems, it may also be useful to track other types of objects. For instance, in some XR systems, it may be useful to track a display screen, for instance as illustrated in FIG. 8A-8B.[0057] Techniques are described herein for an imaging device (e.g., an XR device) to make use of one or more data streams from one or more external devices. For instance, an image may be received from an image sensor of the imaging device. The image can be an image of a portion of an environment that includes an object. The object may or may not be present in the portion of the environment depicted in the image.
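By way of a non-limiting illustration, the two hand-tracking failure modes described above (the hand leaving the field of view, or being occluded) can be sketched as a simple check. A pinhole camera model with the hand position expressed in camera coordinates (+Z forward) is assumed; the function names and FOV values are illustrative.

```python
import math

def hand_in_fov(hand_xyz, h_fov_deg=90.0, v_fov_deg=70.0):
    """True if a point in camera coordinates lies inside the camera frustum."""
    x, y, z = hand_xyz
    if z <= 0:  # behind the camera
        return False
    h_lim = math.tan(math.radians(h_fov_deg / 2))
    v_lim = math.tan(math.radians(v_fov_deg / 2))
    return abs(x / z) <= h_lim and abs(y / z) <= v_lim

def tracking_lost(hand_xyz, occluded):
    """Tracking is lost when the hand is occluded or outside the FOV."""
    return occluded or not hand_in_fov(hand_xyz)
```

Either failure mode could then trigger the fallback to an external camera feed discussed below.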
The object can be, for example, a hand of a user of the imaging device, a head of the user, a body of the user, another body part of the user of the imaging device, a display screen, image media content displayed on the display screen, video media content displayed on the display screen, a person, an animal, a vehicle, a plant, another XR device (in addition to the imaging device, which may be an XR device), another object, or a combination thereof.[0058] The imaging device can identify a data stream from an external device. For instance, the imaging device can identify the data stream from the external device based on the image received from the image sensor (e.g., by identifying the data stream depicted in the image, such as media content being displayed on an external display device that is depicted in the image), based on one or more transmissions of the data stream to the imaging device from the external device (e.g., over a wireless network or wired connection), based on user input, and/or based on other factors. The imaging device can detect a condition, such as based on the image, the data stream, an operational status of the imaging device, any combination thereof, and/or based on other factors. 
In some examples, the condition can be based on the imaging device losing track of the object (e.g., because the tracked object has moved out of an FOV of the imaging device, is occluded from the view of the imaging device by a real-world or virtual object, etc.), the imaging device being low on computational resources (e.g., low on power and/or based on other operational status of the apparatus), the imaging device detecting visual media content (or a representation thereof) within the image, based on a user input or setting that requests using the external device rather than the imaging device (e.g., XR device) when available for a particular function (e.g., displaying content, tracking an object such as a hand, head, or body of a user), based on a user input or setting indicating a preference that a device (e.g., the external device) be used for a particular function when plugged into the imaging device, that privacy and/or security is a factor (which could also be based on a user input or setting), based on a user input (e.g., a user input requesting that resources be offloaded to the external device, such as a user input requesting to turn off the imaging device, a user input requesting to turn an external device such as a light on or off through a home automation application running on the imaging device, etc.), based on capabilities of an image sensor of the imaging device (e.g., when an infrared (IR) sensor on one device is useful where ambient lighting is inadequate, when an object being tracked is moving fast and an image sensor with a higher frame rate is more appropriate, etc.), or any combination thereof.[0059] In response to detecting the condition, the imaging device can generate an output. For instance, based on detecting the condition, the imaging device can generate a merged dataset by merging or combining data from the data stream with the image captured by the image sensor.
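By way of a non-limiting illustration, the kinds of conditions enumerated above can be sketched as a small detector that inspects the device's status and reports every condition that currently holds. The condition names and thresholds here are assumptions for illustration, not values specified by this disclosure.

```python
from enum import Enum, auto

class Condition(Enum):
    OBJECT_LOST = auto()      # tracked object out of FOV or occluded
    LOW_RESOURCES = auto()    # low battery / computational resources
    MEDIA_DETECTED = auto()   # visual media content seen in the image
    USER_PREFERENCE = auto()  # user input/setting requests the external device

def detect_conditions(status):
    """Return every condition that currently holds for the device status."""
    found = set()
    if status.get("object_tracked") is False:
        found.add(Condition.OBJECT_LOST)
    if status.get("battery_pct", 100) < 20:   # illustrative threshold
        found.add(Condition.LOW_RESOURCES)
    if status.get("media_in_frame"):
        found.add(Condition.MEDIA_DETECTED)
    if status.get("prefer_external"):
        found.add(Condition.USER_PREFERENCE)
    return found
```

Returning a set reflects the "any combination thereof" language: several conditions may be true at once, and any of them may trigger the merged-dataset output.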
In some cases, in response to detecting the condition, the imaging device can determine a location of at least a part of the object in the environment based on the data stream, the image, the merged dataset, or any combination thereof. The imaging device can generate and output content that is based on the location of at least the part of the object in the environment. For instance, if the object is the user’s hand, the content generated and/or output by the imaging device can position a virtual object held by the user’s hand accurately based on the location of the user’s hand, even if the user’s hand is not depicted in the image. If the object is a display screen and/or visual content displayed on the display screen, the content generated and/or output by the imaging device can position virtual content adjacent to, or with some other predetermined positioning relative to, the position of the display screen and/or the visual content displayed on the display screen. The content output by the imaging device can include at least a portion of the merged dataset. The imaging device and the external device may perform a privacy negotiation. For instance, the external device can identify to the imaging device what the imaging device can and cannot use the data stream from the external device for, and vice versa.[0060] In a first illustrative example, the external device includes an external camera, and the data stream from the external device includes a camera feed (e.g., one or more images) from the external camera. The external camera can be a camera from another imaging device (e.g., another XR device) or from another camera. The external camera can be in the same environment as the imaging device, and/or can have the same environment in its FOV as the imaging device has in its FOV. The condition may include, for example, that the imaging device has lost track of the user’s hand(s) and cannot properly perform hand tracking.
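By way of a non-limiting illustration, the fallback and anchoring behavior described above can be sketched as two small steps: prefer the device's own detection of the object's location, fall back to the external data stream when the local detection is unavailable, then place virtual content at a predetermined offset from the resolved location. The function names and the offset value are illustrative assumptions.

```python
def resolve_location(own_detection, external_detection):
    """Prefer the local detection; fall back to the external data stream."""
    return own_detection if own_detection is not None else external_detection

def anchor_content(object_xyz, offset=(0.0, 0.1, 0.0)):
    """Place virtual content at a predetermined offset from the object."""
    if object_xyz is None:
        return None  # nothing to anchor to
    return tuple(o + d for o, d in zip(object_xyz, offset))
```

This mirrors the case where the user's hand is not depicted in the device's own image but is visible in the external camera feed: the external detection supplies the location, and the virtual content is still positioned accurately.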
For example, the user may have moved their hand(s) out of the field of view of the imaging device (e.g., as in FIG. 6A) and/or an occlusion may have occluded the user’s hand(s) from the perspective of the camera(s) of the imaging device (e.g., as in FIG. 6B). The user’s hand(s) may be depicted in the camera feed from the external camera, however. The imaging device can use the camera feed from the external camera to help identify where the user’s hands are relative to content depicted in the image captured by the image sensor of the imaging device. In some cases, the external device can include a processor that can perform preliminary processing, for instance by performing hand detection and/or hand tracking using images from the camera feed from the external camera. The external device can send image(s) from the camera feed and/or the data corresponding to the preliminary processing to the imaging device. The content generated and/or output by the imaging device can include modifications to the image based on the hand tracking, such as incorporation of virtual content into the image. The virtual content can be positioned on (or relative to) the display of the imaging device based on the position(s) of the user’s hand(s). [0061] In a second illustrative example, the external device includes an external camera, and the data stream from the external device includes a camera feed (e.g., one or more images) from the external camera. The external camera can be a camera from another imaging device (e.g., another XR device) or from another camera. The external camera can be in the same environment as the imaging device, and/or can have the same environment in its FOV as the imaging device has in its FOV. In such an example, the condition can be based on an operational status of the XR device.
For example, the condition can be based on detecting that the imaging device is low on battery power, data bandwidth, processing bandwidth, another computational resource, or a combination thereof. The imaging device can use the camera feed from the external camera to help perform hand tracking or other function(s) that might be battery-intensive, bandwidth-intensive, processing intensive, otherwise use a large amount of computational resources, or a combination thereof. As in the first illustrative example, the external device in the second illustrative example can perform preliminary processing (e.g., by performing hand detection and/or tracking on images from the camera feed from the external camera). The external device can send (pre-processed) image(s) from the camera feed and/or the data corresponding to the preliminary processing to the imaging device. The content generated and/or output by the imaging device can include modifications to the image based on the hand tracking, such as incorporation of virtual content into the image based on hand position(s).[0062] In a third illustrative example, the external device includes a display screen. The external device, in this example, can be a television, a laptop computer, a desktop computer monitor, a smart home device or assistant, a video game console monitor, a mobile handset with a display screen, a wearable device with a display screen, a television display screen, another device with a display screen, a display screen on its own, or a combination thereof. The data stream from the external device can include the visual media content displayed on the display screen. The image captured by the imaging device can include a representation of the display screen of the external device, and thus can include a representation of the visual media content displayed on the display screen of the external device. 
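By way of a non-limiting illustration, the offload decision of the second illustrative example (offloading tracking to an external camera when the device is low on battery or other computational resources) can be sketched as follows. The thresholds and parameter names are illustrative assumptions.

```python
def should_offload(battery_pct, cpu_load_pct, external_available,
                   battery_floor=20, load_ceiling=85):
    """Offload only when an external feed exists AND the device is strained."""
    if not external_available:
        return False  # nothing to offload to
    return battery_pct < battery_floor or cpu_load_pct > load_ceiling
```

Gating on external availability first reflects that the external device's data stream is a precondition for offloading at all; the resource checks then capture the low-battery and high-load conditions described above.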
The condition may include detection of the representation of the display screen, and/or of the representation of the visual media content displayed on the display screen, within the image captured by the image sensor of the imaging device. For example, a user of the imaging device can see, through the user’s imaging device, the external device displaying the visual media content on its display screen. For example, the visual media content may be a television show, a movie, a video game, a slide show, another type of image, another type of video, or some combination thereof. Merging the data from the data stream (the visual media content) with the image can include adding information to the representation of the visual media content in the image. The added information can, for example, include information about actors in a scene of a television show or movie, information about deleted scenes, information about video game statistics such as health, and/or other information. To the user of the imaging device, the added information can appear adjacent the representation of the visual media content, or overlaid over the representation of the visual media content, or otherwise positioned relative to the representation of the visual media content.[0063] FIG. l is a block diagram illustrating an architecture of an image capture and processing system 100. The image capture and processing system 100 includes various components that are used to capture and process images of scenes (e.g., an image of a scene 110). The image capture and processing system 100 can capture standalone images (or photographs) and/or can capture videos that include multiple images (or video frames) in a particular sequence. A lens 115 of the system 100 faces a scene 110, such as a portion of a real-world environment, and receives light from the scene 110. The lens 115 bends the light toward the image sensor 130. 
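By way of a non-limiting illustration, the relative placement of added information in the third illustrative example (information appearing adjacent to the detected representation of the visual media content) can be sketched as follows. Pixel coordinates and the left/right placement policy are illustrative assumptions.

```python
def place_info_panel(screen_box, panel_w, image_w, gap=10):
    """screen_box = (x, y, w, h) of the detected display screen in the image.

    Returns the top-left (x, y) for an information panel placed to the
    right of the screen, or to the left when the right side has no room.
    """
    x, y, w, h = screen_box
    right_x = x + w + gap
    if right_x + panel_w <= image_w:  # room on the right?
        return (right_x, y)
    return (x - gap - panel_w, y)      # otherwise place on the left
```

The same idea extends to overlaying the information on top of the detected content, or to any other predetermined positioning relative to it.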
The light received by the lens 115 passes through an aperture controlled by one or more control mechanisms 120 and is received by an image sensor 130.[0064] The one or more control mechanisms 120 may control exposure, focus, and/or zoom based on information from the image sensor 130 and/or based on information from the image processor 150. The one or more control mechanisms 120 may include multiple mechanisms and components; for instance, the control mechanisms 120 may include one or more exposure control mechanisms 125A, one or more focus control mechanisms 125B, and/or one or more zoom control mechanisms 125C. The one or more control mechanisms 120 may also include additional control mechanisms besides those that are illustrated, such as control mechanisms controlling analog gain, flash, high dynamic range (HDR), depth of field, and/or other image capture properties. [0065] The focus control mechanism 125B of the control mechanisms 120 can obtain a focus setting. In some examples, focus control mechanism 125B stores the focus setting in a memory register. Based on the focus setting, the focus control mechanism 125B can adjust the position of the lens 115 relative to the position of the image sensor 130. For example, based on the focus setting, the focus control mechanism 125B can move the lens 115 closer to the image sensor 130 or farther from the image sensor 130 by actuating a motor or servo, thereby adjusting focus. In some cases, additional lenses may be included in the system 100, such as one or more microlenses over each photodiode of the image sensor 130, which each bend the light received from the lens 115 toward the corresponding photodiode before the light reaches the photodiode. The focus setting may be determined via contrast detection autofocus (CDAF), phase detection autofocus (PDAF), or some combination thereof. The focus setting may be determined using the control mechanism 120, the image sensor 130, and/or the image processor 150. 
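By way of a non-limiting illustration, the contrast detection autofocus (CDAF) mentioned above can be sketched as a sweep over candidate lens positions that keeps the position whose captured frame scores highest on a focus metric. The variance-of-intensities metric and the capture callback are simplified stand-ins for real hardware behavior.

```python
def contrast_score(pixels):
    """Simple focus metric: variance of pixel intensities (sharper = higher)."""
    n = len(pixels)
    mean = sum(pixels) / n
    return sum((p - mean) ** 2 for p in pixels) / n

def cdaf_sweep(lens_positions, capture_at):
    """capture_at(pos) -> pixel list; return the sharpest lens position."""
    return max(lens_positions, key=lambda pos: contrast_score(capture_at(pos)))
```

In a real focus control mechanism the selected position would then be written to the focus setting register and actuated by the motor or servo moving the lens 115; PDAF, by contrast, estimates the focus error directly from phase differences rather than sweeping.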
The focus setting may be referred to as an image capture setting and/or an image processing setting.[0066] The exposure control mechanism 125A of the control mechanisms 120 can obtain an exposure setting. In some cases, the exposure control mechanism 125A stores the exposure setting in a memory register. Based on this exposure setting, the exposure control mechanism 125A can control a size of the aperture (e.g., aperture size or f/stop), a duration of time for which the aperture is open (e.g., exposure time or shutter speed), a sensitivity of the image sensor 130 (e.g., ISO speed or film speed), analog gain applied by the image sensor 130, or any combination thereof. The exposure setting may be referred to as an image capture setting and/or an image processing setting.[0067] The zoom control mechanism 125C of the control mechanisms 120 can obtain a zoom setting. In some examples, the zoom control mechanism 125C stores the zoom setting in a memory register. Based on the zoom setting, the zoom control mechanism 125C can control a focal length of an assembly of lens elements (lens assembly) that includes the lens 115 and one or more additional lenses. For example, the zoom control mechanism 125C can control the focal length of the lens assembly by actuating one or more motors or servos to move one or more of the lenses relative to one another. The zoom setting may be referred to as an image capture setting and/or an image processing setting. In some examples, the lens assembly may include a parfocal zoom lens or a varifocal zoom lens. In some examples, the lens assembly may include a focusing lens (which can be lens 115 in some cases) that receives the light from the scene 110 first, with the light then passing through an afocal zoom system between the focusing lens (e.g., lens 115) and the image sensor 130 before the light reaches the image sensor 130. 
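By way of a non-limiting illustration, the exposure parameters named above (aperture f-number and exposure time) are conventionally related by the exposure value, EV = log2(N² / t), where N is the f-number and t the exposure time in seconds; two settings with the same EV admit the same amount of light. This standard relationship, not specific to this disclosure, can be sketched as:

```python
import math

def exposure_value(f_number, exposure_time_s):
    """Standard exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / exposure_time_s)
```

For example, f/4 at 1/60 s and f/2 at 1/240 s yield the same EV: halving the f-number quadruples the light admitted per unit time, which a four-times-shorter exposure exactly offsets.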
The afocal zoom system may, in some cases, include two positive (e.g., converging, convex) lenses of equal or similar focal length (e.g., within a threshold difference) with a negative (e.g., diverging, concave) lens between them. In some cases, the zoom control mechanism 125C moves one or more of the lenses in the afocal zoom system, such as the negative lens and one or both of the positive lenses.[0068] The image sensor 130 includes one or more arrays of photodiodes or other photosensitive elements. Each photodiode measures an amount of light that eventually corresponds to a particular pixel in the image produced by the image sensor 130. In some cases, different photodiodes may be covered by different color filters, and may thus measure light matching the color of the filter covering the photodiode. For instance, Bayer color filters include red color filters, blue color filters, and green color filters, with each pixel of the image generated based on red light data from at least one photodiode covered in a red color filter, blue light data from at least one photodiode covered in a blue color filter, and green light data from at least one photodiode covered in a green color filter. Other types of color filters may use yellow, magenta, and/or cyan (also referred to as “emerald”) color filters instead of or in addition to red, blue, and/or green color filters. Some image sensors may lack color filters altogether, and may instead use different photodiodes throughout the pixel array (in some cases vertically stacked). The different photodiodes throughout the pixel array can have different spectral sensitivity curves, therefore responding to different wavelengths of light. 
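By way of a non-limiting illustration, the Bayer color filter arrangement described above can be sketched as a mapping from photodiode position to filter color. The RGGB layout shown is one common Bayer variant (an assumption for illustration); the key property is that each photodiode measures only the one channel its filter passes, with green sites twice as numerous as red or blue.

```python
# Common RGGB Bayer tile: (row % 2, col % 2) -> filter color.
BAYER_RGGB = {(0, 0): "R", (0, 1): "G",
              (1, 0): "G", (1, 1): "B"}

def filter_color(row, col):
    """Color filter covering the photodiode at (row, col)."""
    return BAYER_RGGB[(row % 2, col % 2)]

def channel_counts(height, width):
    """Count filter sites per channel across the pixel array."""
    counts = {"R": 0, "G": 0, "B": 0}
    for r in range(height):
        for c in range(width):
            counts[filter_color(r, c)] += 1
    return counts
```

Demosaicing (the "de-mosaicing" task performed by the image processor 150, discussed below) reconstructs the two missing channels at each site by interpolating from neighboring sites of the other colors.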
Monochrome image sensors may also lack color filters and therefore lack color depth.[0069] In some cases, the image sensor 130 may alternately or additionally include opaque and/or reflective masks that block light from reaching certain photodiodes, or portions of certain photodiodes, at certain times and/or from certain angles, which may be used for phase detection autofocus (PDAF). The image sensor 130 may also include an analog gain amplifier to amplify the analog signals output by the photodiodes and/or an analog to digital converter (ADC) to convert the analog signals output by the photodiodes (and/or amplified by the analog gain amplifier) into digital signals. In some cases, certain components or functions discussed with respect to one or more of the control mechanisms 120 may be included instead or additionally in the image sensor 130. The image sensor 130 may be a charge-coupled device (CCD) sensor, an electron-multiplying CCD (EMCCD) sensor, an active-pixel sensor (APS), a complementary metal-oxide semiconductor (CMOS), an N-type metal-oxide semiconductor (NMOS), a hybrid CCD/CMOS sensor (e.g., sCMOS), or some other combination thereof.[0070] The image processor 150 may include one or more processors, such as one or more image signal processors (ISPs) (including ISP 154), one or more host processors (including host processor 152), and/or one or more of any other type of processor 5010 discussed with respect to the computing device 5000. The host processor 152 can be a digital signal processor (DSP) and/or other type of processor. In some implementations, the image processor 150 is a single integrated circuit or chip (e.g., referred to as a system-on-chip or SoC) that includes the host processor 152 and the ISP 154.
In some cases, the chip can also include one or more input/output ports (e.g., input/output (I/O) ports 156), central processing units (CPUs), graphics processing units (GPUs), broadband modems (e.g., 3G, 4G or LTE, 5G, etc.), memory, connectivity components (e.g., Bluetooth™, Global Positioning System (GPS), etc.), any combination thereof, and/or other components. The I/O ports 156 can include any suitable input/output ports or interfaces according to one or more protocols or specifications, such as an Inter-Integrated Circuit 2 (I2C) interface, an Inter-Integrated Circuit 3 (I3C) interface, a Serial Peripheral Interface (SPI) interface, a serial General Purpose Input/Output (GPIO) interface, a Mobile Industry Processor Interface (MIPI) (such as a MIPI CSI-2 physical (PHY) layer port or interface), an Advanced High-performance Bus (AHB) bus, any combination thereof, and/or other input/output ports. In one illustrative example, the host processor 152 can communicate with the image sensor 130 using an I2C port, and the ISP 154 can communicate with the image sensor 130 using an MIPI port.[0071] The image processor 150 may perform a number of tasks, such as de-mosaicing, color space conversion, image frame downsampling, pixel interpolation, automatic exposure (AE) control, automatic gain control (AGC), CDAF, PDAF, automatic white balance, merging of image frames to form an HDR image, image recognition, object recognition, feature recognition, object detection, object tracking, descriptor generation, receipt of inputs, managing outputs, managing memory, or some combination thereof. The image processor 150 may store image frames and/or processed images in random access memory (RAM) 140/5020, read-only memory (ROM) 145/5025, a cache, a memory unit, another storage device, or some combination thereof.[0072] Various input/output (I/O) devices 160 may be connected to the image processor 150.
The I/O devices 160 can include a display screen, a keyboard, a keypad, a touchscreen, a trackpad, a touch-sensitive surface, a printer, any other output devices 5035, any other input devices 5045, or some combination thereof. In some cases, a caption may be input into the image processing device 105B through a physical keyboard or keypad of the I/O devices 160, or through a virtual keyboard or keypad of a touchscreen of the I/O devices 160. The I/O 160 may include one or more ports, jacks, or other connectors that enable a wired connection between the system 100 and one or more peripheral devices, over which the system 100 may receive data from the one or more peripheral devices and/or transmit data to the one or more peripheral devices. The I/O 160 may include one or more wireless transceivers that enable a wireless connection between the system 100 and one or more peripheral devices, over which the system 100 may receive data from the one or more peripheral devices and/or transmit data to the one or more peripheral devices. The peripheral devices may include any of the previously-discussed types of I/O devices 160 and may themselves be considered I/O devices 160 once they are coupled to the ports, jacks, wireless transceivers, or other wired and/or wireless connectors.[0073] In some cases, the image capture and processing system 100 may be a single device. In some cases, the image capture and processing system 100 may be two or more separate devices, including an image capture device 105A (e.g., a camera) and an image processing device 105B (e.g., a computing device coupled to the camera). In some implementations, the image capture device 105A and the image processing device 105B may be coupled together, for example via one or more wires, cables, or other electrical connectors, and/or wirelessly via one or more wireless transceivers.
In some implementations, the image capture device 105A and the image processing device 105B may be disconnected from one another.[0074] As shown in FIG. 1, a vertical dashed line divides the image capture and processing system 100 of FIG. 1 into two portions that represent the image capture device 105A and the image processing device 105B, respectively. The image capture device 105A includes the lens 115, control mechanisms 120, and the image sensor 130. The image processing device 105B includes the image processor 150 (including the ISP 154 and the host processor 152), the RAM 140, the ROM 145, and the I/O 160. In some cases, certain components illustrated in the image processing device 105B, such as the ISP 154 and/or the host processor 152, may be included in the image capture device 105A.[0075] The image capture and processing system 100 can include an electronic device, such as a mobile or stationary telephone handset (e.g., smartphone, cellular telephone, or the like), a desktop computer, a laptop or notebook computer, a tablet computer, a set-top box, a television, a smart home device or assistant, a camera, a display device, a digital media player, a video gaming console, a video streaming device, an Internet Protocol (IP) camera, or any other suitable electronic device. In some examples, the image capture and processing system 100 can include one or more wireless transceivers for wireless communications, such as cellular network communications, 802.11 Wi-Fi communications, wireless local area network (WLAN) communications, or some combination thereof. In some implementations, the image capture device 105A and the image processing device 105B can be different devices.
For instance, the image capture device 105A can include a camera device and the image processing device 105B can include a computing device, such as a mobile handset, a desktop computer, or other computing device.[0076] While the image capture and processing system 100 is shown to include certain components, one of ordinary skill will appreciate that the image capture and processing system 100 can include more components than those shown in FIG. 1. The components of the image capture and processing system 100 can include software, hardware, or one or more combinations of software and hardware. For example, in some implementations, the components of the image capture and processing system 100 can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, GPUs, DSPs, CPUs, and/or other suitable electronic circuits), and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein. The software and/or firmware can include one or more instructions stored on a computer-readable storage medium and executable by one or more processors of the electronic device implementing the image capture and processing system 100. [0077] Systems, apparatuses, processes, and computer-readable media are described herein for identifying and tracking locations of objects within one or more images. Each of the images may be captured using an image sensor 130 of an image capture device 105A, an image capture and processing system 100, or a combination thereof. Each of the images may be processed using an image processing device 105B, an image capture and processing system 100, or a combination thereof. The image capture and processing system 100 may be a part of an XR system or XR device, such as the XR system 210 of FIG. 2.
The image capture and processing system 100 may be a sensor of an XR system or XR device, such as the sensors 215 of the XR system 210 of FIG. 2. The image capture and processing system 100 may be a part of an external device, such as the external device 220 of FIG. 2. The image capture and processing system 100 may be a sensor of an external device, such as the sensors 225 of the external device 220 of FIG. 2.[0078] FIG. 2 is a block diagram 200 illustrating an example architecture of an extended reality (XR) system 210. The XR system 210 of FIG. 2 includes one or more sensors 215, a processing engine 205, an output content generation engine 280, and an output device 290.[0079] The processing engine 205 of the XR system 210 can receive sensor data from one or more sensors 215 of the XR system 210. The one or more sensors 215 of the XR system 210 can include, for example, one or more image sensors 130, one or more accelerometers, one or more gyroscopes, one or more inertial measurement units (IMUs), one or more light detection and ranging (LIDAR) sensors, one or more radio detection and ranging (RADAR) sensors, one or more sound detection and ranging (SODAR) sensors, one or more sound navigation and ranging (SONAR) sensors, one or more time-of-flight (ToF) sensors, one or more structured light sensors, one or more microphones, one or more other sensors described herein, or combinations thereof. In some examples, the one or more sensors 215 can be coupled to the processing engine 205 through one or more wired and/or wireless sensor connectors. In some examples, the sensor data can include one or more images. The one or more images can include still images, video frames of one or more videos, or combinations thereof. The one or more images can be referred to as still images, image frames, video frames, frames, or a combination thereof. 
A box with a dashed line is illustrated around the one or more sensors 215 of the XR system 210 to indicate that the one or more sensors 215 may be considered a part of the XR system 210 and/or of the processing engine 205.[0080] The processing engine 205 of the XR system 210 can receive sensor data from one or more sensors 225 of an external device 220. The one or more sensors 225 of the external device 220 can include, for example, one or more image sensors 130, one or more accelerometers, one or more gyroscopes, one or more IMUs, one or more LIDAR sensors, one or more RADAR sensors, one or more SODAR sensors, one or more SONAR sensors, one or more ToF sensors, one or more structured light sensors, one or more microphones, one or more other sensors described herein, or combinations thereof. In some examples, the external device 220 and/or one or more sensors 225 can be coupled to the processing engine 205 through one or more wired and/or wireless connections. The one or more images can be referred to as still images, image frames, video frames, frames, or a combination thereof.[0081] The processing engine 205 of the XR system 210 includes an inter-device negotiation engine 230 that can negotiate with the external device 220. The inter-device negotiation engine 230 can include a communication transceiver 235. The communication transceiver 235 can include one or more wired communication transceivers, one or more wireless communication transceivers, or combinations thereof. The inter-device negotiation engine 230 of the XR system 210 can use the communication transceiver 235 to receive the sensor data from the sensors 225 of the external device 220. 
The inter-device negotiation engine 230 of the XR system 210 can also use the communication transceiver 235 to send negotiation data to the external device 220 and/or receive negotiation data from the external device 220 as part of one or more negotiations, such as a synchronization negotiation, a security negotiation, a privacy negotiation, or a combination thereof.[0082] The inter-device negotiation engine 230 of the XR system 210 can include a synchronization negotiation engine 240 that synchronizes sensor data received from the one or more sensors 225 of the external device 220 with sensor data received from the one or more sensors 215 of the XR system 210. For instance, the sensor data received from the one or more sensors 225 of the external device 220 can be tagged with timestamps at which individual elements (e.g., individual images) of the sensor data were captured by the one or more sensors 225 of the external device 220. Likewise, the sensor data received from the one or more sensors 215 of the XR system 210 can be tagged with timestamps at which individual elements (e.g., individual images) of the sensor data were captured by the one or more sensors 215 of the XR system 210. The synchronization negotiation engine 240 can match an element of the sensor data from the one or more sensors 225 of the external device 220 with a corresponding element of the sensor data from the one or more sensors 215 of the XR system 210 based on the corresponding timestamps matching as closely as possible. In an illustrative example, the one or more sensors 215 of the XR system 210 can capture an image with a timestamp of 4:30.3247, and the one or more sensors 225 of the external device 220 can capture images with timestamps of 4:29.7930, 4:30.0139, 4:30.3923, and 4:30.8394. 
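The closest-timestamp matching in this illustrative example can be sketched as follows (a hypothetical Python illustration; the function name and data layout are assumptions for illustration only, not part of the disclosed system):

```python
def closest_match(target_ts, candidate_ts):
    """Return the candidate timestamp nearest to the target timestamp."""
    return min(candidate_ts, key=lambda ts: abs(ts - target_ts))

# Timestamps expressed as seconds within the same minute, per the example above.
xr_image_ts = 30.3247                               # image from sensors 215 of the XR system 210
external_ts = [29.7930, 30.0139, 30.3923, 30.8394]  # images from sensors 225 of the external device 220

matched = closest_match(xr_image_ts, external_ts)   # 30.3923 differs from 30.3247 by only 0.0676
```

The same nearest-neighbor comparison generalizes to the frame-rate case: at 90 fps versus 30 fps, every third XR-system image is the nearest match to an external-device image.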
The synchronization negotiation engine 240 can identify that the 4:30.3923 timestamp from the sensor data of the one or more sensors 225 of the external device 220 matches most closely to the 4:30.3247 timestamp from the sensor data of the one or more sensors 215 of the XR system 210. Thus, the synchronization negotiation engine 240 can synchronize the image corresponding to the 4:30.3923 timestamp from the sensor data of the one or more sensors 225 with the image corresponding to the 4:30.3247 timestamp from the sensor data of the one or more sensors 215 of the XR system 210. In some examples, the synchronization negotiation engine 240 can send a request to the external device 220 for sensor data most closely matching a timestamp of sensor data from the one or more sensors 215 of the XR system 210. The synchronization performed by the synchronization negotiation engine 240 can be based on sensor capabilities. For example, if the sensors 215 of the XR system 210 capture images at 90 frames per second (fps), while the sensors 225 of the external device 220 capture images at 30 fps, then the synchronization negotiation engine 240 can synchronize every third image captured by the sensors 215 of the XR system 210 with an image captured by the sensors 225 of the external device 220.[0083] The inter-device negotiation engine 230 of the XR system 210 can include a security negotiation engine 245. The security negotiation engine 245 can perform a security handshake between the XR system 210 and the external device 220. The security handshake can include, for example, a transport layer security (TLS) handshake, a secure sockets layer (SSL) handshake, or a combination thereof. 
The security handshake can identify a version of an encryption protocol to be used between the XR system 210 and the external device 220, decide on a cipher suite to be used between the XR system 210 and the external device 220, and authenticate the identities of the XR system 210 and/or the external device 220 using one or more digital signatures (and/or one or more certificate authorities). The security handshake can generate session keys in order to use symmetric encryption after the handshake is complete. The security handshake can generate or retrieve an asymmetric keypair for each of the XR system 210 and the external device 220, and can transfer public keys from each keypair from the device on which they are generated or retrieved to the other device. The XR system 210 and the external device 220 can then communicate via encrypted communication, using asymmetric and/or symmetric encryption, following the security handshake.[0084] The inter-device negotiation engine 230 of the XR system 210 can include a privacy negotiation engine 247. The privacy negotiation engine 247 can request sensor data from the sensors 225 of the external device 220 for use for an identified purpose, for instance for hand tracking as in FIGs. 6A, 6B, or 7. The external device 220 can grant or deny the XR system 210 access to the sensor data from the sensors 225 of the external device 220 for the identified purpose. In some examples, the external device 220 can include a whitelist of purposes for which the external device 220 can permit sharing of sensor data from the sensors 225 of the external device 220. In some examples, the external device 220 can include a blacklist of purposes for which the external device 220 cannot permit (and instead must deny) sharing of sensor data from the sensors 225 of the external device 220. 
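The whitelist/blacklist filtering of requested purposes can be sketched as follows (a hypothetical Python illustration; the function name and purpose labels are assumptions, not part of the disclosed system):

```python
def permitted_purposes(requested, whitelist=None, blacklist=None):
    """Filter requested data-sharing purposes against an optional whitelist
    and blacklist. Blacklisted purposes are always denied; when a whitelist
    is present, only whitelisted purposes are granted."""
    granted = set(requested)
    if whitelist is not None:
        granted &= set(whitelist)
    if blacklist is not None:
        granted -= set(blacklist)
    return granted

# The external device permits hand tracking but not face recognition.
grants = permitted_purposes(
    requested={"hand_tracking", "face_recognition"},
    whitelist={"hand_tracking", "plane_detection"},
)
```

The granted set may thus be only a subset of the purposes originally requested.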
In some examples, the privacy negotiation engine 247 can request sensor data from the sensors 225 of the external device 220 for use for multiple purposes, but the external device 220 can respond indicating that the external device 220 only permits sharing the sensor data from the sensors 225 of the external device 220 for a subset of the multiple purposes. The privacy negotiation engine 247 can respect any limitations that the external device 220 identifies on purposes for which the sensor data from the sensors 225 of the external device 220 can be used.[0085] In some examples, the external device 220 can make certain requests or demands of the XR system 210 if the XR system 210 is to be sent the sensor data from the sensors 225 of the external device 220, which the privacy negotiation engine 247 can agree to and execute actions corresponding to. For instance, in some examples, the external device 220 can request that the XR system 210 delete the sensor data from the sensors 225 of the external device 220 immediately after use, or a predetermined time period after use. The privacy negotiation engine 247 can agree to this requirement, and can ensure that the XR system 210 delete the sensor data from the sensors 225 of the external device 220 immediately after use, or the predetermined time period after use. In some examples, the external device 220 can request that the XR system 210 not use, discard, or replace certain portions or aspects of the sensor data from the sensors 225 of the external device 220. For instance, the external device 220 can request that the XR system 210 not use, or anonymize, names, faces, or other sensitive information in the sensor data from the sensors 225 of the external device 220. 
The privacy negotiation engine 247 can agree to this requirement, and can ensure that the XR system 210 not use, discard, or replace certain portions or aspects of the sensor data from the sensors 225 of the external device 220.[0086] The processing engine 205 of the XR system 210 includes a feature management engine 250. The feature management engine 250 receives the sensor data from the one or more sensors 215 of the XR system 210. The feature management engine 250 receives the sensor data from the one or more sensors 225 of the external device 220. The inter-device negotiation engine 230 may synchronize the sensor data from the one or more sensors 215 of the XR system 210 with the sensor data from the one or more sensors 225 of the external device 220 prior to or contemporaneously with receipt of the sensor data by the feature management engine 250. The inter-device negotiation engine 230 may identify any security and/or privacy limitations, restrictions, and/or requirements prior to or contemporaneously with receipt of the sensor data by the feature management engine 250.[0087] The feature management engine 250 includes a feature extraction engine 255. The feature extraction engine 255 can detect and/or extract features from the sensor data from the one or more sensors 215 of the XR system 210. In some cases, the feature extraction engine 255 can detect and/or extract features from the sensor data from the one or more sensors 225 of the external device 220. For instance, if the sensor data include images, the feature extraction engine 255 can detect and/or extract visual features. Visual features can include distinctive, unique, and/or identifiable parts of an image, such as a part of an image depicting a corner, an edge, a gradient, and/or a blob. A blob may be defined as an area in which one or more image properties (e.g., brightness, color, tone, hue, saturation, or a combination thereof) are constant or approximately constant. 
To detect features and/or extract features in the image, the feature extraction engine 255 can perform a scale-space search, for which the feature extraction engine 255 can use a frame buffer. To detect features in the image, the feature extraction engine 255 can use edge detection, corner detection, blob detection, ridge detection, affine invariant feature detection, or a combination thereof. Edge detection can include, for example, Canny, Deriche, Differential, Sobel, Prewitt, and/or Roberts cross edge detection. Corner detection can include, for example, Harris operator, Shi and Tomasi, level curve curvature, Hessian feature strength measures, smallest univalue segment assimilating nucleus (SUSAN), and/or features from accelerated segment test (FAST) corner detection. Blob detection can include, for example, Laplacian of Gaussian (LoG), Difference of Gaussians (DoG), Determinant of Hessian (DoH), Maximally stable extremal regions, and/or Principal curvature-based region detector (PCBR) blob detection. Affine invariant feature detection can include Affine shape adaptation, Harris affine, and/or Hessian affine feature detection.[0088] To extract features, the feature extraction engine 255 can generate descriptors for the features. A descriptor for a feature may be generated based on extraction of a local image patch around the feature, and description of the feature as depicted in the local image patch. The feature descriptor may, for example, describe the feature as a collection of one or more feature vectors. 
Features may be extracted using any suitable technique, such as Scale Invariant Feature Transform (SIFT), Learned Invariant Feature Transform (LIFT), Speed Up Robust Features (SURF), Gradient Location-Orientation histogram (GLOH), Histogram of Oriented Gradients (HOG), Oriented Fast and Rotated Brief (ORB), Binary Robust Invariant Scalable Keypoints (BRISK), Fast Retina Keypoint (FREAK), KAZE, Accelerated KAZE (AKAZE), Normalized Cross Correlation (NCC), descriptor matching, another suitable technique, or a combination thereof. In some examples, feature detection and/or feature extraction using the feature extraction engine 255 can include identifying a location of the feature within the image, identifying a location of the feature within a 3D environment, or both.[0089] The feature management engine 250 includes a feature tracking engine 260. The feature tracking engine 260 can track features detected and/or extracted by the feature extraction engine 255 from one image to another image. Feature tracking, as performed by the feature tracking engine 260, can include frame-to-frame tracking, box tracking, Kanade-Lucas-Tomasi (KLT) feature tracking, mean-shift feature tracking, or combinations thereof. Some features represent portions of an object within the environment, such as a hand or a display screen. The feature tracking engine 260 can track movement of the object within the environment by tracking features of the object within the environment relative to the features of the environment.[0090] The feature management engine 250 includes a data fusion engine 265. In some examples, the data fusion engine 265 can match features detected and/or extracted by the feature extraction engine 255 from the sensor data received from the one or more sensors 215 of the XR system 210 with features detected and/or extracted by the feature extraction engine 255 from the sensor data received from the one or more sensors 225 of the external device 220. 
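One way to score such candidate matches, using the normalized cross correlation (NCC) technique listed above, can be sketched as follows (a hypothetical pure-Python illustration; a real system would typically operate on 2-D image patches):

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross correlation of two equal-length descriptor vectors.
    Returns a score in [-1, 1]; values near +1 indicate a strong match."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(patch_a, patch_b))
    den = math.sqrt(sum((a - mean_a) ** 2 for a in patch_a)
                    * sum((b - mean_b) ** 2 for b in patch_b))
    return num / den if den else 0.0

# Patches that differ only by a brightness offset still correlate perfectly,
# since NCC subtracts each patch's mean before comparing.
score = ncc([10, 40, 80, 40, 10], [12, 42, 82, 42, 12])
```

This mean subtraction and normalization is what makes NCC robust to the overall brightness and contrast differences that commonly arise between sensors on different devices.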
In some cases, the one or more sensors 215 of the XR system 210 and the one or more sensors 225 of the external device 220 may be arranged such that at least some overlap exists between scenes of the real-world environment captured (in the case of image sensors) and/or sensed (in the case of non-imaging sensors) by the respective sensors. In some examples, the data fusion engine 265 can match features tracked by the feature tracking engine 260 from the sensor data from the one or more sensors 215 of the XR system 210 with features tracked by the feature tracking engine 260 from the sensor data from the one or more sensors 225 of the external device 220. For instance, the data fusion engine 265 can identify a single three-dimensional point (with a three-dimensional set of coordinates) of a particular feature detected, extracted, and/or tracked in both the sensor data from the one or more sensors 215 of the XR system 210 and the sensor data from the one or more sensors 225 of the external device 220. By matching a few features in common in both sets of sensor data, the data fusion engine 265 can also map features that are in one set of sensor data but not the other relative to the features that are in both sets of sensor data. Thus, the data fusion engine 265 can locate features in the sensor data from the one or more sensors 225 of the external device 220 that are not present in the sensor data from the one or more sensors 215 of the XR system 210 relative to features that are present in the sensor data from the one or more sensors 215 of the XR system 210. Likewise, the data fusion engine 265 can locate features in the sensor data from the one or more sensors 215 of the XR system 210 that are not present in the sensor data from the one or more sensors 225 of the external device 220 relative to features that are present in the sensor data from the one or more sensors 225 of the external device 220. 
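Assuming, for simplicity, that the two devices' coordinate frames differ only by a translation, this relative mapping can be sketched as follows (a hypothetical Python illustration; a real system would solve for a full rigid transform, including rotation):

```python
def estimate_offset(shared_xr, shared_ext):
    """Average translation from the external device's frame to the XR frame,
    estimated from 3-D points of features located in both sets of sensor data."""
    n = len(shared_xr)
    return tuple(
        sum(xr[i] - ext[i] for xr, ext in zip(shared_xr, shared_ext)) / n
        for i in range(3)
    )

def map_to_xr_frame(point_ext, offset):
    """Locate an external-only feature in the XR system's coordinate frame."""
    return tuple(p + o for p, o in zip(point_ext, offset))

# Two features seen by both devices establish the offset between frames...
offset = estimate_offset(
    shared_xr=[(1.0, 2.0, 3.0), (4.0, 6.0, 8.0)],
    shared_ext=[(0.0, 1.0, 1.0), (3.0, 5.0, 6.0)],
)
# ...which then places a feature seen only by the external device.
mapped = map_to_xr_frame((2.0, 2.0, 2.0), offset)
```
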
In some examples, certain operations discussed herein as performed by the data fusion engine 265, such as feature mapping, can be performed regardless of whether or not the processing engine 205 of the XR system 210 receives the sensor data from the one or more sensors 225 of the external device 220. In some examples, certain operations discussed herein as performed by the data fusion engine 265, such as feature mapping, can be performed by the feature extraction engine 255, the feature tracking engine 260, or another part of the feature management engine 250.[0091] In some examples, the feature management engine 250 can perform pose estimation of the pose of the XR system 210 (and/or of each of the sensors 215 of the XR system 210) within the real-world environment that the XR system 210 is in. Pose can include location in 3-dimensional space, such as a set of 3-dimensional translational coordinates (e.g., in a horizontal (x) direction, vertical (y) direction, and depth (z) direction). Additionally or alternatively, pose can include orientation (e.g., pitch, yaw, and/or roll). The feature management engine 250 can estimate the pose based on features that have been detected and/or extracted by the feature extraction engine 255, based on features that have been tracked by the feature tracking engine 260, based on features that have been fused and/or mapped by the data fusion engine 265, or a combination thereof. In some aspects, the feature management engine 250 can perform stereo matching for features, for instance where the sensors 215 and/or the sensors 225 include groups (e.g., pairs) of image sensors representing multiscopic views of the same scene. In some aspects, the feature management engine 250 can perform mapping, such as map densification, key frame addition, key frame removal, bundle adjustment, loop closure detection, relocalization, and/or one or more other simultaneous localization and mapping (SLAM) operations. 
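The pose described above, combining a 3-dimensional translation with an orientation, can be represented as in this hypothetical sketch (the class and field names are assumptions for illustration only):

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """6-degree-of-freedom pose: translation coordinates plus orientation angles."""
    x: float      # horizontal translation
    y: float      # vertical translation
    z: float      # depth translation
    pitch: float  # rotation about the horizontal axis
    yaw: float    # rotation about the vertical axis
    roll: float   # rotation about the depth axis

# Example: a head-worn device 1.6 units above the origin, turned 90 degrees.
hmd_pose = Pose(x=0.0, y=1.6, z=0.0, pitch=0.0, yaw=90.0, roll=0.0)
```
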
In some examples, the pose of the XR system 210 (and/or each of the sensors 215 and/or sensors 225) can be determined independently of feature detection and/or extraction. For instance, a pose may be determined using a positioning procedure, such as using positioning reference signals (PRS), beacon signals, ToF measurements, or the like. For stationary sensors or external devices, a pose may be retrieved from a memory of the sensor or external device or a separate server where it may have been previously stored (e.g., during a calibration process, during setup of a device based on user input indicating a location of a sensor or external device, etc.).[0092] The feature management engine 250 can output feature information 270 based on features detected, extracted, and/or tracked from the sensor data from the one or more sensors 215 of the XR system 210 using the feature extraction engine 255 and/or the feature tracking engine 260. The feature management engine 250 can output enhanced feature information 275 based on features detected, extracted, tracked, and/or merged (combined) from both the sensor data from the one or more sensors 215 of the XR system 210 and the sensor data from the one or more sensors 225 of the external device 220 using the feature extraction engine 255 and/or the feature tracking engine 260, or using the feature extraction engine 255, the feature tracking engine 260, and/or the data fusion engine 265. In some cases, the enhanced feature information 275 can identify additional features not included in the feature information 270, and can thus represent a more complete feature mapping of an environment represented within the sensor data from the one or more sensors 215 of the XR system 210 and/or the sensor data from the one or more sensors 225 of the external device 220. 
The enhanced feature information 275 can identify more accurate positions for the features than the feature information 270, and can thus represent a more accurate feature mapping of an environment represented within the sensor data from the one or more sensors 215 of the XR system 210 and/or the sensor data from the one or more sensors 225 of the external device 220.[0093] The XR system 210 can include an output content generation engine 280. The output content generation engine 280 can generate output content 285 based on the sensor data from the one or more sensors 215 of the XR system 210, the sensor data from the one or more sensors 225 of the external device 220, and/or virtual content. In some examples, the output content 285 can include an output image that is a modified version of an input image from the sensor data from the one or more sensors 215 of the XR system 210 that is modified in order to add virtual content positioned based on the enhanced feature information 275 (which includes feature information extracted from the sensor data from the one or more sensors 225 of the external device 220). For example, features corresponding to a certain object - such as a hand, or a display screen - in the environment could be in the enhanced feature information 275 but not in the feature information 270 if the object is in the field of view of the one or more sensors 225 of the external device 220 but not in the field of view of the one or more sensors 215 of the XR system 210.[0094] The XR system 210 can output the output content 285 to an output device 290 of the XR system 210. The output device 290 can include, for example, a display, an audio output device, any of the output devices 1035 of FIG. 10, or a connector that can couple the XR system 210 to one of the previously-listed types of output devices. In some examples, the output content 285 can include one or more images and/or one or more videos, which the XR system 210 can display using the display of the output device 290. 
The display can include a display screen, such as a liquid crystal display (LCD) display, a plasma display, a light emitting diode (LED) display, an organic LED (OLED) display, an electronic paper display, an electronic ink display, or a combination thereof. The display can include a projector and/or a projection surface onto which the projector projects an image. The projection surface can be opaque, transparent, or translucent. The display can be a display of a head-mounted display (HMD) 310, a display of XR glasses (e.g., AR glasses), a display 440 of a mobile handset 410, and/or another device. In some examples, the output content 285 can include one or more images of a video, which the XR system 210 can display using the display of the output device 290. In some examples, the output content 285 can include one or more audio clips, which the XR system 210 can play using the audio output device of the output device 290. The audio output device can include, for example, a speaker, a headphone, or a combination thereof.[0095] In some examples, the XR system 210 receives the sensor data of the sensors 225 of the external device 220 directly from the external device 220. In some examples, the XR system 210 receives the sensor data of the sensors 225 of the external device 220 indirectly, from an intermediate device. Examples of an intermediate device can include, for example, a server and/or cloud service that the external device 220 uploads its sensor data to. The negotiations discussed herein as performed between the inter-device negotiation engine 230 of the XR system 210 and the external device 220 can, in some cases, be performed instead between the inter-device negotiation engine 230 of the XR system 210 and the intermediate device.[0096] FIG. 3A is a perspective diagram 300 illustrating a head-mounted display (HMD) 310 that is used as an extended reality (XR) system 210. 
The HMD 310 may be, for example, an augmented reality (AR) headset (e.g., AR glasses or smart glasses), a virtual reality (VR) headset, a mixed reality (MR) headset, another type of XR headset, or some combination thereof. The HMD 310 may be an example of an XR system 210 or be part of an XR system 210. The HMD 310 includes a first camera 330A and a second camera 330B along a front portion of the HMD 310. The first camera 330A and the second camera 330B may be examples of the sensors 215 of the XR system 210. In some examples, the HMD 310 may only have a single camera. In some examples, the HMD 310 may include one or more additional cameras in addition to the first camera 330A and the second camera 330B, which may also be examples of the sensors 215 of the XR system 210. In some examples, the HMD 310 may include one or more additional sensors in addition to the first camera 330A and the second camera 330B, which may also be examples of the sensors 215 of the XR system 210.[0097] The HMD 310 may include one or more displays 340 that are visible to a user 320 wearing the HMD 310 on the user 320’s head. The one or more displays 340 of the HMD 310 can be examples of the output device 290 of the XR system 210. In some examples, the HMD 310 may include one display 340 and two viewfinders. The two viewfinders can include a left viewfinder for the user 320’s left eye and a right viewfinder for the user 320’s right eye. The left viewfinder can be oriented so that the left eye of the user 320 sees a left side of the display. The right viewfinder can be oriented so that the right eye of the user 320 sees a right side of the display. In some examples, the HMD 310 may include two displays 340, including a left display that displays content to the user 320’s left eye and a right display that displays content to a user 320’s right eye.[0098] FIG. 3B is a perspective diagram 350 illustrating the head-mounted display (HMD) of FIG. 3A being worn by a user 320. 
The user 320 wears the HMD 310 on the user 320’s head over the user 320’s eyes. The HMD 310 can capture images with the first camera 330A and the second camera 330B. In some examples, the HMD 310 displays one or more output images toward the user 320’s eyes. The output images may be examples of the output content 285. The output images can be based on the images captured by the first camera 330A and the second camera 330B. The output images may provide a stereoscopic view of the environment, in some cases with information overlaid and/or with other modifications. For example, the HMD 310 can display a first display image to the user 320’s right eye, the first display image based on an image captured by the first camera 330A. The HMD 310 can display a second display image to the user 320’s left eye, the second display image based on an image captured by the second camera 330B. For instance, the HMD 310 may provide overlaid information in the display images overlaid over the images captured by the first camera 330A and the second camera 330B.[0099] FIG. 4A is a perspective diagram 400 illustrating a front surface of a mobile handset 410 that includes front-facing cameras and is used as an extended reality (XR) system 210. The mobile handset 410 may be an example of an XR system 210. The mobile handset 410 may be, for example, a cellular telephone, a satellite phone, a portable gaming console, a music player, a health tracking device, a wearable device, a wireless communication device, a laptop, a mobile device, any other type of computing device or computing system 1100 discussed herein, or a combination thereof. The front surface 420 of the mobile handset 410 includes a display 440. The front surface 420 of the mobile handset 410 may include a first camera 430A and a second camera 430B. The first camera 430A and the second camera 430B may be examples of the sensors 215 of the XR system 210. 
The first camera 430A and the second camera 430B are illustrated in a bezel around the display 440 on the front surface 420 of the mobile handset 410. In some examples, the first camera 430A and the second camera 430B can be positioned in a notch or cutout that is cut out from the display 440 on the front surface 420 of the mobile handset 410. In some examples, the first camera 430A and the second camera 430B can be under-display cameras that are positioned between the display 440 and the rest of the mobile handset 410, so that light passes through a portion of the display 440 before reaching the first camera 430A and the second camera 430B. The first camera 430 A and the second camera 430B of the perspective diagram 400 are front-facing cameras. The first camera 430A and the second camera 430B face a direction perpendicular to a planar surface of the front surface 420 of the mobile handset 410. The first camera 430A and the second camera 430B may be two of one or more cameras of the mobile handset 410. In some examples, the front surface 420 of the mobile handset 410 may only have a single camera. In some examples, the mobile handset 410 may include one or more additional cameras in addition to the first camera 430A and the second camera 430B, which may also be examples of the sensors 215 of the XR system 210. In some examples, the mobile handset 410 may include one or more additional sensors in addition to the first camera 430 A and the second camera 430B, which may also be examples of the sensors 215 of the XR system 210. The front surface 420 of the mobile handset 410 also includes a display 440. In some cases, the front surface 420 of the mobile handset 410 includes more than one display 440. The one or more displays 440 of the front surface 420 of the mobile handset 410 can be examples of the output device 290 of the XR system 210.[0100] FIG. 
4B is a perspective diagram 450 illustrating a rear surface of a mobile handset that includes rear-facing cameras and is used as an extended reality (XR) system 210. The mobile handset 410 includes a third camera 430C and a fourth camera 430D on the rear surface 460 of the mobile handset 410. The third camera 430C and the fourth camera 430D of the perspective diagram 450 are rear-facing. The third camera 430C and the fourth camera 430D may be examples of the sensors 215 of the XR system 210. The third camera 430C and the fourth camera 430D face a direction perpendicular to a planar surface of the rear surface 460 of the mobile handset 410. While the rear surface 460 of the mobile handset 410 does not have a display 440 as illustrated in the perspective diagram 450, in some examples, the rear surface 460 of the mobile handset 410 may include one or more rear displays. In examples where the rear surface 460 of the mobile handset 410 includes one or more rear displays, the one or more rear displays can be examples of the output device 290 of the XR system 210. If the rear surface 460 of the mobile handset 410 includes one or more rear displays, any positioning layouts of the third camera 430C and the fourth camera 430D relative to the one or more rear displays may be used as discussed with respect to the first camera 430A and the second camera 430B relative to the display 440 of the front surface 420 of the mobile handset 410. The third camera 430C and the fourth camera 430D may be two of one or more cameras of the mobile handset 410. In some examples, the rear surface 460 of the mobile handset 410 may only have a single camera. In some examples, the mobile handset 410 may include one or more additional cameras in addition to the first camera 430A, the second camera 430B, the third camera 430C, and the fourth camera 430D, which may also be examples of the sensors 215 of the XR system 210. 
In some examples, the mobile handset 410 may include one or more additional sensors in addition to the first camera 430A, the second camera 430B, the third camera 430C, and the fourth camera 430D, which may also be examples of the sensors 215 of the XR system 210.[0101] FIG. 5 is a perspective diagram illustrating a user wearing a head-mounted display (HMD) 310 that is used as an extended reality (XR) system 210 and that performs hand tracking to determine a gesture-based input based on a position of the hand 525 of the user 320 being in the field of view (FOV) 520 of the HMD 310. In other examples, the HMD 310 can be used to position a virtual object based on the position of the hand 525 being in the FOV 520 of the HMD 310. The first camera 330A and/or the second camera 330B of the HMD 310 are used as the sensors 215 of the XR system 210. The FOV 520 of the HMD 310 represents the FOV of the first camera 330A and/or the second camera 330B. The FOV 520 of the HMD 310 is illustrated using dashed lines. The hand 525 of the user 320 is in the FOV 520 of the sensors 215 of the HMD 310. Thus, the XR system 210 of the HMD 310 detects, extracts, and/or tracks features of the hand 525 of the user 320 relative to other features of the real-world environment that the user 320 and HMD 310 are located within to identify a pose of the hand 525 of the user 320 relative to the real-world environment that the user 320 and HMD 310 are located within. The pose of the hand 525 can include the location of the hand and/or the orientation (e.g., pitch, yaw, and/or roll) of the hand 525. Based on the pose of the hand 525, the HMD 310 can determine a gesture-based input, such as for controlling a user interface (UI) of the HMD 310.[0102] As noted above, in some cases, the HMD 310 can determine where to display a virtual object relative to the hand 525 based on the determined pose of the hand 525.
The virtual object represents a virtual object that the HMD 310 displays to the user 320 using the displays 340, but that does not exist in the real-world environment in which the user 320 and the HMD 310 are located. In one illustrative example, the virtual object is a sword, and can be displayed by the HMD 310 as if it is being held by the hand 525 of the user 320. The pose (the location and orientation) of the virtual object depends on the pose of the hand 525. The output content generation engine 280 of the XR system 210 of the HMD 310 can add the virtual object 540 to the output content 285 before the output content 285 is displayed on the display(s) 340 (output on the output devices 290).[0103] FIG. 6A is a perspective diagram 600 illustrating a user 320 wearing a head-mounted display (HMD) 310 that is used as an extended reality (XR) system 210 and that performs hand tracking to determine a gesture-based input based on a position of the hand 525 of the user 320 even though the hand 525 is out of the field of view (FOV) 620 of the HMD 310. The HMD 310 can perform the hand tracking even when the hand 525 is out of the FOV 620 based on the hand 525 being in the FOV 615 of an external camera 610. The FOV 620 of the HMD 310 represents the FOV of one or more cameras and/or other sensors of the HMD 310. The FOV 620 of the HMD 310 is illustrated using dashed lines. The hand 525 of the user 320 is not in the FOV 620 of the HMD 310 because the user 320 has moved the hand 525 too far away from the FOV 620 of the HMD 310. Thus, using its own cameras and/or other sensors, the HMD 310 would be unable to identify and/or track the location of the hand 525 of the user in its position in FIG. 6A.
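The dependence of the virtual object's pose on the tracked hand pose, described in paragraph [0102] above, can be sketched as follows. This is a minimal illustration only, not the described system's implementation: the `Pose` class, the single yaw angle standing in for the full pitch/yaw/roll orientation, and the grip offset are all assumptions.

```python
import math

# Illustrative sketch only: a hand pose reduced to a 3-D position plus a yaw
# angle (the text's full pose also carries pitch and roll). The Pose class
# and grip_offset are hypothetical names, not part of the described system.
class Pose:
    def __init__(self, x, y, z, yaw):
        self.x, self.y, self.z, self.yaw = x, y, z, yaw

def virtual_object_pose(hand_pose, grip_offset=(0.0, 0.05, 0.0)):
    """Derive a virtual object's pose (e.g. the sword of the example) from
    the tracked hand pose by rotating a fixed grip offset by the hand's yaw
    and translating it to the hand's position."""
    dx, dy, dz = grip_offset
    cos_y, sin_y = math.cos(hand_pose.yaw), math.sin(hand_pose.yaw)
    return Pose(
        hand_pose.x + dx * cos_y - dz * sin_y,
        hand_pose.y + dy,
        hand_pose.z + dx * sin_y + dz * cos_y,
        hand_pose.yaw,  # the object inherits the hand's orientation
    )
```

Recomputing `virtual_object_pose` on every hand-pose update keeps the rendered object attached to the hand as it moves.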
Even though the hand 525 of the user 320 is not in the FOV 620 of the HMD 310, the hand 525 can still be tracked to determine any gesture-based inputs, to determine where to display a virtual object relative to the hand 525 when at least part of the virtual object is still to be displayed in the FOV 620 of the HMD 310 (depending on the illustrated pose of the hand 525 of the user 320), and/or to perform some other function based on a tracked pose of the hand 525.[0104] The XR system 210 of the HMD 310 losing track of the hand 525 (or another object being tracked by the XR system 210) can be a condition that the XR system 210 detects and uses to determine when to perform one or more other functions. The XR system 210 of the HMD 310 can detect this condition in the situation illustrated in FIG. 6A due to the hand 525 exiting the FOV 620 of the HMD 310 or due to no longer detecting the hand 525 in the FOV 620. The XR system 210 of the HMD 310 can send a request for assistance with hand tracking 640 to an external camera 610. The external camera 610 can be an example of the external device 220 of FIG. 2. For instance, the external camera 610 can be part of an external device, such as a laptop computer, a desktop computer, a television, a smart home device or assistant, a mobile device (e.g., a smartphone), a tablet computer, or other external device. One or more image sensors and/or other sensors of the external camera 610 can be examples of the sensors 225 of the external device 220. The XR system 210 of the HMD 310 can perform an inter-device negotiation with the external camera 610 as discussed with respect to the inter-device negotiation engine 230. In response, the external camera 610 can send hand-tracking data 645 as part of a data stream to the XR system 210 of the HMD 310. The hand-tracking data 645 can include sensor data captured by one or more sensors of the external camera 610, such as one or more image sensors. 
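The lost-tracking condition and assistance request of paragraph [0104] can be sketched as a simple fallback, with a stub standing in for the external camera 610. All class, method, and variable names here are illustrative stand-ins, not the actual inter-device negotiation or feature management interfaces.

```python
# Hedged sketch of the lost-tracking fallback: if the HMD's own sensors do
# not yield a hand pose, request assistance and use the external camera's
# data stream instead. Names are illustrative assumptions.
def track_hand(local_detection, external_camera):
    """Return (pose, source): the local pose when available, otherwise the
    pose recovered from the external camera's hand-tracking data."""
    if local_detection is not None:
        return local_detection, "hmd"
    external_camera.request_assistance()                      # cf. request 640
    return external_camera.hand_tracking_data(), "external"   # cf. data 645

class StubExternalCamera:
    """Test stand-in for the external camera 610."""
    def __init__(self, pose):
        self._pose = pose
        self.assistance_requested = False
    def request_assistance(self):
        self.assistance_requested = True
    def hand_tracking_data(self):
        return self._pose
```

The same fallback shape applies whether the hand left the FOV (FIG. 6A) or is occluded within it (FIG. 6B); only the detection of the condition differs.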
A FOV 615 of the external camera 610 is illustrated using lines with a series of dots and dashes. The FOV 615 of the external camera 610 includes the hand 525 of the user 320. In some examples, the hand-tracking data 645 can be at least partially processed by the external camera 610, for example to detect features, extract features, track features, and/or perform one or more other operations of the feature management engine 250 before the external camera 610 sends the hand-tracking data 645 to the XR system 210 of the HMD 310, which can reduce computational resources (e.g., battery consumption on the HMD 310, amount of processing resources being used, etc.). The XR system 210 of the HMD 310 can use the hand-tracking data 645 to identify the pose of the hand 525 of the user 320 despite the hand 525 not being in the FOV 620 of the HMD 310. Despite the hand 525 not being in the FOV 620 of the HMD 310, the XR system 210 of the HMD 310 can use the hand pose determined based on the hand-tracking data 645 to determine one or more gesture-based inputs being performed by the user (e.g., to control a UI of the HMD 310, such as an application running on the HMD 310), to determine where to display a virtual object in the FOV 620 of the HMD 310 with an accurate pose based on the pose of the hand 525 of the user 320, and/or to perform one or more other functions.[0105] FIG. 6B is a perspective diagram 650 illustrating a user 320 wearing a head-mounted display (HMD) 310 that is used as an extended reality (XR) system 210 and that performs hand tracking to determine a gesture-based input based on a position of the hand 525 of the user 320 when an occlusion 660 (e.g., a real-world object) occludes the hand 525 within the field of view (FOV) 670 of the HMD 310. The HMD 310 can perform the hand tracking even when the hand 525 is occluded based on the hand 525 being in the FOV 615 of an external camera 610.
The FOV 670 of the HMD 310 represents the FOV of one or more cameras and/or other sensors of the HMD 310. The FOV 670 of the HMD 310 is illustrated using dashed lines. The hand 525 of the user 320 is in the FOV 670 of the HMD 310 but occluded from the view of the HMD 310 because the FOV 670 is partially occluded by the occlusion 660. The occlusion 660 occludes the hand 525 within the FOV 670 of the HMD 310. Thus, on its own, the HMD 310 would be unable to identify and/or track the location of the hand 525 of the user in its position in FIG. 6B. Even though the hand 525 of the user 320 is occluded in the FOV 670 of the HMD 310, the hand 525 can still be tracked to determine any gesture-based inputs, to determine where to display a virtual object relative to the hand 525 when at least part of the virtual object is still to be displayed in the FOV 670 of the HMD 310 (depending on the illustrated pose of the hand 525 of the user 320), and/or to perform some other function based on a tracked pose of the hand 525.[0106] The XR system 210 of the HMD 310 losing track of the hand 525 (or another object being tracked by the XR system 210) can be a condition that the XR system 210 detects and uses to determine when to perform one or more other functions. The XR system 210 of the HMD 310 can detect this condition in the situation illustrated in FIG. 6B due to the occlusion 660 occluding the hand 525 in the FOV 670 of the HMD 310. As in FIG. 6A, the XR system 210 of the HMD 310 of FIG. 6B can send a request for assistance with hand tracking 640 to an external camera 610. The XR system 210 of the HMD 310 can perform an inter-device negotiation with the external camera 610 as discussed with respect to the inter-device negotiation engine 230. In response, the external camera 610 can send hand-tracking data 645 as part of a data stream to the XR system 210 of the HMD 310. 
The hand-tracking data 645 can include sensor data captured by one or more sensors of the external camera 610, such as one or more image sensors. The FOV 615 of the external camera 610 is illustrated using lines with a series of dots and dashes. The FOV 615 of the external camera 610 includes the hand 525 of the user 320. In some examples, the hand-tracking data 645 can be at least partially processed by the external camera 610, for example to detect features, extract features, track features, and/or perform one or more other operations of the feature management engine 250 before the external camera 610 sends the hand-tracking data 645 to the XR system 210 of the HMD 310, which can reduce computational resources (e.g., battery consumption on the HMD 310, amount of processing resources being used, etc.). Despite the hand 525 being occluded in the FOV 670 of the HMD 310, the XR system 210 of the HMD 310 can use the hand-tracking data 645 to identify the pose of the hand 525 of the user 320. The determined hand pose can be used to determine one or more gesture-based inputs being performed by the user (e.g., to control a UI of the HMD 310, such as an application running on the HMD 310), to determine where to display a virtual object in the FOV 670 of the HMD 310 with an accurate pose based on the pose of the hand 525 of the user 320, and/or to perform one or more other functions.[0107] In some examples, the external camera 610 can be a standalone camera device, such as a security camera, as illustrated in FIGs. 6A and 6B. In some examples, the external camera 610 of FIGs. 6A and 6B can be one or more cameras of another HMD 710 (as in FIG. 7), of mobile handset 410, of a laptop computer, of a desktop computer, or of any other type of external device 220.[0108] FIG.
7 is a perspective diagram 700 illustrating an external head-mounted display (HMD) 710 device providing assistance with hand-tracking a hand 525 of a user 320 of an HMD 310 that is used as an extended reality (XR) system 210 due to a low battery condition 735 (as an example of an operational status of the XR device) at the HMD 310. The FOV (not illustrated) of the HMD 310 can be a FOV of one or more cameras and/or one or more sensors of the HMD 310. The FOV (not illustrated) of the HMD 310 may include the hand 525, or may be missing the hand 525. The FOV (not illustrated) of the external HMD 710 can be a FOV of one or more cameras and/or one or more sensors of the external HMD 710. The FOV (not illustrated) of the external HMD 710 may include the hand 525, or may be missing the hand 525.[0109] The XR system 210 of the HMD 310 can detect a condition at the HMD 310 corresponding to a level of a computing resource of the HMD 310 meeting, or being less than, a threshold level. The XR system 210 of the HMD 310 can detect a condition at the HMD 310 corresponding to a level of usage of a computing resource of the HMD 310 meeting, or exceeding, a threshold level. For example, FIG. 7 illustrates the HMD 310 detecting a low battery condition 735 indicating that a battery level of one or more batteries of the HMD 310 meets, or is less than, a threshold battery level (e.g., 50% of full battery level, 40% of full battery level, or other level). An alert 730 is illustrated based on the HMD 310 detecting the low battery condition 735. The XR system 210 of the HMD 310 can send a request for assistance with hand tracking 740 to the external HMD 710. The external HMD 710 can be an example of the external device 220 of FIG. 2. One or more image sensors and/or other sensors of the external HMD 710 can be examples of the sensors 225 of the external device 220. 
The XR system 210 of the HMD 310 can perform an inter-device negotiation with the external HMD 710 as discussed with respect to the inter-device negotiation engine 230. In response, the external HMD 710 can send hand-tracking data 745 as part of a data stream to the XR system 210 of the HMD 310. The hand-tracking data 745 can include sensor data captured by one or more sensors of the external HMD 710, such as one or more image sensors. In some examples, the hand-tracking data 745 can be at least partially processed by the external HMD 710, for example to detect features, extract features, track features, and/or perform one or more other operations of the feature management engine 250 to reduce computational resources (e.g., reduce battery consumption on the HMD 310, reduce an amount of processing resources being used, etc.), before the external HMD 710 sends the hand-tracking data 745 to the XR system 210 of the HMD 310. The XR system 210 of the HMD 310 can use the hand-tracking data 745 to identify the pose of the hand 525 of the user 320 and/or whether or not the hand 525 is in the FOV (not pictured) of the HMD 310.[0110] Because the HMD 310 is able to offload at least some of its hand tracking tasks to the external HMD 710, the HMD 310 can reduce its battery load and drain its battery less quickly, and thus can last longer despite its low battery condition 735. In some examples, the HMD 310 can turn off or otherwise disable its cameras and/or other sensors. In some examples, the HMD 310 can reduce the capture quality or rate of the sensor data from its sensors, for example reducing from 90 fps image capture to 30 fps capture. In some examples, the HMD 310 can rely, partially or entirely, on the cameras and/or other sensors of the external HMD 710.
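The low-battery handoff of FIG. 7 can be sketched as a small decision routine. The 40% threshold and the 90 fps to 30 fps reduction come from the examples in the text; the function name and return shape are illustrative assumptions.

```python
# Hedged sketch of the low-battery handoff of FIG. 7. Threshold and reduced
# frame rate follow the text's examples; everything else is illustrative.
LOW_BATTERY_THRESHOLD = 0.4  # e.g. 40% of full battery level

def on_battery_update(battery_fraction, capture_fps):
    """Return (offload, new_fps): whether to request external hand-tracking
    assistance, and a possibly reduced sensor capture rate."""
    if battery_fraction <= LOW_BATTERY_THRESHOLD:
        return True, min(capture_fps, 30)  # offload and drop capture rate
    return False, capture_fps
```

Note that the condition fires when the battery level *meets or falls below* the threshold, matching the "meeting, or being less than" phrasing used throughout this section.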
In some examples, the HMD 310 can at least partially turn off or otherwise disable at least some of the functions of the feature management engine 250, such as the feature extraction engine 255, the feature tracking engine 260, and/or the data fusion engine 265. In some examples, the HMD 310 can rely, partially or entirely, on the external HMD 710 to perform at least some of the functions of the feature management engine 250, such as the feature extraction engine 255, the feature tracking engine 260, and/or the data fusion engine 265. In some examples, the HMD 310 can turn off or otherwise disable the displays 340 of the HMD 310. In some examples, the HMD 310 can send its output content 285 to another display device, such as a smartwatch, a laptop, or another display device. These adjustments to the operation of the XR system 210 of the HMD 310 can allow the HMD 310 to reduce its battery load and drain its battery less quickly, and thus last longer despite its low battery condition 735.[0111] In some examples, the XR system 210 of the HMD 310 can detect conditions other than the low battery condition 735 of FIG. 7. For instance, detection of the condition can include detection of levels of other computing resources of the HMD 310 meeting, or being less than, a threshold level. Detection of the condition can include detection of levels of usage of a computing resource of the HMD 310 meeting, or exceeding, a threshold level. For example, the condition can be the available memory (e.g., memory 1015, ROM 1020, and/or RAM 1025) of the HMD 310 meeting, or being less than, a threshold memory level. The condition can be the available storage space (e.g., on storage device 1030) of the HMD 310 meeting, or being less than, a threshold level. The condition can be the available network bandwidth of the HMD 310 meeting, or being less than, a threshold network bandwidth level.
The condition can be the available processor bandwidth of the HMD 310 meeting, or being less than, a threshold processor bandwidth level. The condition can be the processor usage of the HMD 310 meeting, or exceeding, a threshold processor usage level.[0112] In some examples, the external HMD 710 of FIG. 7 can be an HMD as illustrated in FIG. 7. In some examples, the external HMD 710 can instead be a standalone camera device (e.g., a security camera, as in the external camera 610 of FIGs. 6A and 6B), a mobile handset 410, or any other type of external device 220.[0113] FIG. 8A is a perspective diagram 800 illustrating a user 320 wearing a head-mounted display (HMD) 310 that is used as an extended reality (XR) system 210 and that positions virtual content 815 in an image displayed by the display(s) 340 of the HMD 310 based on the position of an external display 810 (external relative to the HMD 310) and/or visual (media) content 812 displayed on the external display 810 in the FOV 835 of the HMD 310. As shown in FIG. 8A, the user 320 wearing the HMD 310 is facing the external display 810, which is displaying visual (media) content 812. The external display 810 includes a camera 814. The FOV 835 of the HMD 310 represents the FOV of one or more cameras and/or other sensors of the HMD 310. The FOV 835 of the HMD 310 is illustrated using dashed lines. The external display 810, and the visual (media) content 812 displayed on the display 810, are both in the FOV 835 of the HMD 310.[0114] The XR system 210 of the HMD 310 can detect the external display 810 and/or can detect the visual (media) content 812 displayed on the external display 810 (e.g., in one or more images captured by the one or more cameras and/or other sensors of the HMD 310).
Detection of the external display 810 and/or detection of the visual (media) content 812 displayed on the external display 810 can be a condition that the XR system 210 of the HMD 310 detects and uses to determine when to perform one or more other functions (e.g., determining a location of the external display 810 and/or other object in the environment surrounding the HMD 310, performing a function based on the location, etc.). The XR system 210 of the HMD 310 can detect this condition in the situation illustrated in FIG. 8A due to the display 810 and the visual (media) content 812 being in the FOV 835 of the HMD 310.[0115] In some examples, in response to detecting the condition, the XR system 210 of the HMD 310 can send a request 840 for additional (media) content 845 to one or more servers 847. In some examples, the request 840 can be based on the specific visual (media) content 812 detected by the XR system 210 of the HMD 310, for example based on a media recognition system of the XR system 210 of the HMD 310. The request 840 can identify the visual (media) content 812 detected by the XR system 210 of the HMD 310. The one or more servers 847 can provide the additional (media) content 845 to the XR system 210 of the HMD 310. The additional (media) content 845 can be specific to the visual (media) content 812. In some cases, the request 840 can include a representation of the visual (media) content 812 captured by the sensors of the HMD 310, and the one or more servers 847 can recognize the specific visual (media) content 812 based on a media recognition system of the one or more servers 847. The XR system 210 of the HMD 310 can generate virtual content 815 using the additional (media) content 845.
The XR system 210 of the HMD 310 can determine the pose (e.g., location and/or orientation) of the virtual content 815 within the FOV 835 of the HMD 310 within the output content 285 based on the pose (e.g., location and/or orientation) of the display 810 and/or visual (media) content 812 within the FOV 835 of the HMD 310. The virtual content 815 may include a title 820 of the visual (media) content 812, identified as “Speedy Pursuit” in FIG. 8A. The title 820 can be displayed adjacent to and above the display 810 and the visual (media) content 812. In one example, the virtual content 815 may include a display extension 825 that extends the display 810 adjacent to and to the right of the display 810 and the visual (media) content 812, for example based on additional widescreen video data in the additional (media) content 845. The virtual content 815 may include metadata 830 about the visual (media) content 812 adjacent to and to the left of the display 810 and the visual (media) content 812. The metadata 830 may identify a release date (1998) of the visual (media) content 812 and identify that the visual (media) content 812 stars a famous actor. In some examples, the virtual content 815 can include additional information or content related to the visual (media) content 812, such as deleted scenes. In some examples, at least some of the virtual content 815 can be overlaid over the display 810 and/or the visual (media) content 812. For example, the virtual content 815 can be used to highlight or circle a particular actor or object in the visual (media) content 812. For example, if the visual (media) content 812 is a sports game, the virtual content 815 can highlight or circle a hard-to-see but important object, such as a ball or a hockey puck.[0116] In the context of FIG. 2, the external display 810 can act as the external device 220, and the visual (media) content 812 can act as a data stream from the external device 220 akin to the sensor data from the sensors 225.
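The arrangement of the title above, the display extension to the right, and the metadata to the left of the detected display can be sketched as a small layout helper. The bounding-box representation, box sizes, and margin are illustrative choices, not details given in the text.

```python
# Hypothetical layout helper for the arrangement in FIG. 8A: given the
# display's bounding box in the HMD's view as (x, y, width, height) with y
# increasing upward, place the title above, the display extension to the
# right, and the metadata panel to the left of the display.
def layout_virtual_content(display_box, margin=0.05):
    x, y, w, h = display_box
    return {
        "title":     (x, y + h + margin, w, margin * 2),  # adjacent, above
        "extension": (x + w + margin, y, w / 2, h),       # adjacent, right
        "metadata":  (x - margin - w / 2, y, w / 2, h),   # adjacent, left
    }
```

Recomputing the layout whenever the display's pose in the FOV changes keeps the virtual panels anchored to the physical screen.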
In some cases, the display 810 can transmit the visual (media) content 812 to the XR system 210 of the HMD 310 instead of or in addition to displaying the visual content 812, so that the XR system 210 of the HMD 310 can more easily detect and/or recognize the visual (media) content 812 in images and/or other sensor data captured by the image sensors and/or other sensors of the HMD 310. In some examples, the one or more servers 847 may act as the external device 220, and the additional (media) content 845 can act as a data stream from the external device 220 akin to the sensor data from the sensors 225.[0117] In another example, the user wearing the HMD 310 can be facing the external display 810 such that the external display 810 is within the FOV of one or more cameras and/or other image sensors. The one or more cameras (and/or other image sensors) of the HMD 310 and the camera 814 (and/or other image sensor) of the external display 810 can be used for object tracking. Similar to that discussed with respect to FIG. 6A and FIG. 6B, based on detecting a condition as noted above, the HMD 310 can determine whether to use the camera/image sensor(s) of the HMD 310, to use the camera/image sensor(s) of the external display 810, or to use the camera/image sensor(s) of both the HMD 310 and the external display 810 for tracking purposes.[0118] FIG. 8B is a perspective diagram 850 illustrating a user 320 wearing a head-mounted display (HMD) 310 that is used as an extended reality (XR) system 210 and that positions, in an image displayed by the display(s) 340 of the HMD 310, a virtual representation 860 of visual (media) content 812 displayed on a display 810 based on a position of the display 810 and/or the visual (media) content 812 even though the display 810 and/or the visual (media) content 812 are out of the field of view (FOV) 890 of the HMD 310. The user 320 wearing the HMD 310 no longer faces the display 810 that is displaying visual (media) content 812.
The FOV 890 of the HMD 310 represents the FOV of one or more cameras and/or other sensors of the HMD 310. The FOV 890 of the HMD 310 is illustrated using dashed lines. The display 810, and the visual (media) content 812 displayed on the display 810, are not within (and are thus missing from) the FOV 890 of the HMD 310.[0119] In one example, the XR system 210 of the HMD 310 can detect the presence of display 810 in the proximity of the HMD 310 (e.g., in wireless communication range of the HMD 310 or detected within the FOV of the HMD 310 at an earlier time), which can be a condition that the XR system 210 of the HMD 310 detects and uses to determine when to perform one or more other functions. In one example, the XR system 210 of the HMD 310 can determine that it has lost track of the display 810 and/or the visual (media) content 812 (e.g., based on determining that the display 810 and/or visual content 812 is no longer within the FOV 890 of the HMD 310), which can be a condition that the XR system 210 of the HMD 310 detects and uses to determine when to perform one or more other functions. The XR system 210 of the HMD 310 can detect such conditions in the situation illustrated in FIG. 8B due to the display 810 and the visual (media) content 812 no longer being in the FOV 890 of the HMD 310, for example because the user 320 has turned his or her head and/or body to the right. In response to detecting the condition, the XR system 210 of the HMD 310 can automatically send a request 880 for the visual (media) content 812 to the display 810 and/or to one or more computing devices associated with the display 810 (e.g., an entertainment device, media center device, or computing system 1000 connected to the display 810). The display 810, and/or the one or more computing devices associated with the display 810, can respond to the request 880 by providing the visual (media) content 812 as part of a data stream. 
The XR system 210 of the HMD 310 can generate a virtual representation 860 of the visual (media) content 812 as virtual content 815 within the FOV 890 of the HMD 310. In some cases, the XR system 210 of the HMD 310 can generate a directional indicator 870 as virtual content 815 within the FOV 890 of the HMD 310. The directional indicator 870 points toward the position of the display 810 that is displaying the visual (media) content 812. The virtual representation 860 of the visual content 812 can allow the user 320 of the HMD 310 to continue watching the visual (media) content 812 even if the user 320 turns away from the display 810. The user 320 thus does not have to miss any of the visual (media) content 812 even if the user 320 needs to briefly turn away. The directional indicator 870, which points to the left, can let the user 320 know to turn left to face the display 810 that displays the visual (media) content 812 again. Additional virtual content 815 based on the additional (media) content 845 from the one or more servers 847 can also be displayed in the FOV 890 of the HMD 310, such as the title 820 of the visual (media) content 812.[0120] In the context of FIG. 2, the display 810 can act as the external device 220, and the visual (media) content 812 can act as a data stream from the external device 220 akin to the sensor data from the sensors 225. In some cases, the display 810 can transmit the visual (media) content 812 to the XR system 210 of the HMD 310 instead of or in addition to displaying the visual (media) content 812, so that the XR system 210 of the HMD 310 can more easily detect and/or recognize the visual (media) content 812 in images and/or other sensor data captured by the image sensors and/or other sensors of the HMD 310.
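The behavior of the directional indicator 870 can be sketched by comparing the HMD's facing direction with the direction toward the remembered display position. The yaw-only representation, the counterclockwise-positive sign convention, and the 10-degree "ahead" band are all assumptions of this sketch.

```python
import math

# Sketch of the directional indicator 870: compare the HMD's facing yaw with
# the yaw toward the remembered display position (both in radians, world
# frame, counterclockwise-positive) and report which way the user should
# turn to face the display again.
def direction_to_display(hmd_yaw, display_yaw):
    # Signed smallest angle from the HMD's facing direction to the display;
    # atan2 of (sin, cos) wraps the difference into (-pi, pi].
    delta = math.atan2(math.sin(display_yaw - hmd_yaw),
                       math.cos(display_yaw - hmd_yaw))
    if abs(delta) < math.radians(10):
        return "ahead"
    return "left" if delta > 0 else "right"
```

In the FIG. 8B scenario, where the user has turned to the right of the display, this sketch would report "left", matching the left-pointing indicator described above.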
In some examples, the one or more servers 847 may act as the external device 220, and the additional (media) content 845 can act as a data stream from the external device 220 akin to the sensor data from the sensors 225.[0121] Other examples of conditions that can cause the HMD 310 to perform one or more functions (e.g., determine a location of an object, request that resources be offloaded to the external device, request assistance from an external device with hand tracking, etc.) can include a user input or setting that requests using the external device rather than the imaging device (e.g., XR device) when available for a particular function (e.g., displaying content, tracking an object such as a hand, head, or body of a user), a user input or setting indicating a preference that a device (e.g., the external device) be used for a particular function when plugged into the imaging device, a privacy and/or security consideration (which could also be based on a user input or setting), a user input (e.g., a user input requesting that resources be offloaded to the external device, such as a user input requesting to turn off the imaging device, or a user input requesting to turn an external device such as a light on or off through a home automation application running on the imaging device), capabilities of an image sensor of the imaging device (e.g., when an infrared (IR) sensor on one device is useful where ambient lighting is inadequate, or when an object being tracked is moving fast and an image sensor with a higher frame rate is more appropriate), or any combination thereof.[0122] For instance, the HMD 310 or an application running on the HMD 310 can be programmed with a setting (e.g., based on a user input provided to the HMD 310 and/or application, set by default, etc.)
indicating a preference to use an external device for a particular function when the external device is available (e.g., physically or wirelessly connected to the HMD 310) and/or when the external device is capable of performing the function. In one example, based on such a setting being selected or otherwise enabled by a user (or set by default in some cases), an external display (e.g., a television, laptop computer, smart home device or assistant, tablet computer, desktop computer, external XR device, etc.) connected to the HMD 310 can be used to display content for the HMD 310. In another example, based on such a setting being selected or otherwise enabled by a user (or set by default in some cases), one or more cameras and/or other sensors of an external device connected to the HMD 310 can be used to track an object (e.g., a hand, head, or body of a user, an additional external device other than the external device performing the tracking).[0123] In some examples, the HMD 310 or an application running on the HMD 310 can be programmed with a privacy or security setting (e.g., based on a user input provided to the HMD 310 and/or application, set by default, etc.) indicating a preference to use an external device when security and/or privacy may be compromised by using the HMD 310. For instance, based on the privacy or security setting being selected or otherwise enabled by a user (or set by default in some cases), the HMD 310 can determine that content displayed on the HMD 310 is viewable by other people and/or cameras and is thus not private or secure. In response to determining that the content is not private/secure, the HMD 310 can send a command to an external device requesting that the external device display the content.[0124] In some cases, the HMD 310 can request assistance from an external device based on the capabilities and/or components of the external device. 
For instance, the external device may include an image sensor that is not present on the HMD 310. In one example, the image sensor may include an IR sensor that can perform object tracking (e.g., hand tracking, head tracking, body tracking, etc.) when ambient lighting is inadequate (e.g., in low light conditions). In such an example, the HMD 310 can detect when a low light condition is present (e.g., based on analyzing an image captured by a camera of the HMD 310), such as when one or more light values of the image are below a lighting threshold (e.g., below a particular luminance, lux, or other lighting value, such as 3 lux or less). In response to detecting the low-light condition, the HMD 310 can send a command to the external device requesting that the external device capture images using the IR sensor and/or any other sensors and either perform object tracking using the images (in which case the external device can send the pose information to the HMD 310) or send the images to the HMD 310 to perform tracking. In another example, the image sensor may include a camera that can capture images at a high frame rate, which can be used to track an object that is moving fast. In such an example, the HMD 310 can detect the object is moving fast and can send a command to the external device requesting that the external device capture images using the high frame rate camera and/or any other sensors and either perform object tracking using the images or send the images to the HMD 310 to perform tracking.[0125] In some examples, a user can provide user input (e.g., a gesture input, pressing a virtual or physical button, etc.) to control whether the HMD 310 or an external device performs a particular function.
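The low-light handoff described above can be sketched as a simple sensor-selection check. The 3 lux value comes from the text's example; the function name, return values, and the assumption that ambient light is summarized as a single lux reading are illustrative.

```python
# Hedged sketch of the low-light condition: if the measured ambient light
# falls at or below the lighting threshold (e.g. 3 lux, per the example) and
# the external device has an IR sensor, offload capture to that IR sensor.
LOW_LIGHT_LUX_THRESHOLD = 3.0

def choose_tracking_sensor(ambient_lux, external_has_ir):
    if ambient_lux <= LOW_LIGHT_LUX_THRESHOLD and external_has_ir:
        return "external_ir"   # request IR capture from the external device
    return "hmd_camera"        # ambient light is adequate for local tracking
```

The same selection shape applies to the high-frame-rate example: substitute an object-speed check for the lux check and a high-fps camera for the IR sensor.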
In one example, even if the HMD 310 battery is above a threshold and the hands are within a FOV of the HMD 310, the user may provide user input to the HMD 310 requesting that the HMD 310 offload object tracking functionality (e.g., hand tracking, head tracking, body tracking, etc.) to an external device (e.g., a television, laptop computer, smart home device or assistant, tablet computer, desktop computer, external XR device, etc.). For instance, the user may plan on using the HMD 310 for an extended period of time (e.g., play a game for a long period of time), which would at some point require a battery-based handoff to the external device. In another example, a user may prefer to use the HMD 310 for a function even when the function will drain the battery, where performance of the function may be better by the HMD 310 rather than an external device (e.g., based on one or more capabilities or components of the HMD 310). In such an example, a user can provide user input to the HMD 310 to override handoff of a function to an external device.[0126] In some cases, the HMD 310 can detect a condition indicating that an external device will be needed to perform a function or that the HMD 310 is needed to perform a function. In one illustrative example, while performing hand tracking of the hands of a user of the HMD 310, the HMD 310 can determine that the hands are moving toward the edge of the FOV of the HMD 310 and thus (e.g., based on past usage or the nature of the task) that the user will continue moving the hands beyond the FOV of the HMD 310. Before or as the hands move past the FOV, the HMD 310 can send a command to an external device to turn on one or more cameras and begin capturing images or video of the hands. The HMD 310 can request that the external device perform the object tracking and send the pose information of the hands to the HMD 310 or that the external device send the images/video to the HMD 310 so that the HMD 310 can perform the tracking.
In such an example, the HMD 310 can resume performing the tracking once the hands return into a known FOV of one or more cameras of the HMD 310. In another illustrative example, the HMD 310 can determine that the user is moving away (or will move away) from a FOV of one or more sensors (e.g., cameras or other sensors) that are fixed in place (e.g., a camera on a laptop) and that are being used for object tracking. Based on determining the user will exit the FOV of the one or more sensors, the HMD 310 can transition to performing tracking using its own cameras or other sensors (in which case the HMD 310 can send a command to the external device to stop performing tracking using its sensors). In some cases, once the HMD 310 and/or external device determines not to use one or more sensors (e.g., cameras) for tracking, the HMD 310 and/or external device can turn off the sensors, which can conserve power, improve privacy/security, etc.[0127] In some examples, the HMD 310 can detect an additional condition that can trigger the HMD 310 to perform a function or resume performance of a function that was previously offloaded to an external device. For instance, as described with respect to the example of FIG. 7, the HMD 310 can offload one or more object tracking tasks (e.g., hand tracking, head tracking, body tracking, etc.) to an external device based on an operational status of the HMD 310 (e.g., when the HMD 310 battery is low on power or other computational resources, such as below a threshold battery level). The HMD 310 can subsequently be charged so that a battery level of the HMD 310 battery is greater than the threshold battery level. Based on detecting that the battery level has exceeded the threshold battery level, the HMD 310 can send a command to the external device requesting that the one or more object tracking tasks be performed, at least in part, by the HMD 310.
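The battery-based offload and resume behavior described in paragraph [0127] can be sketched as a small state machine. This is a hedged illustration: the class name, the command strings, and the 20% threshold value are assumptions made for the example; the disclosure only specifies that tracking is offloaded when the battery falls below a threshold level and resumed once the level again exceeds it.

```python
# Illustrative sketch of the battery-based handoff: the HMD offloads
# tracking when its battery falls below a threshold and resumes local
# tracking once it is recharged above that threshold. The class name,
# command strings, and 0.20 threshold are assumptions.

class TrackingHandoff:
    def __init__(self, battery_threshold=0.20):
        self.battery_threshold = battery_threshold
        self.offloaded = False  # True while an external device tracks

    def update(self, battery_level):
        """Return the command to issue for the current battery level."""
        if not self.offloaded and battery_level < self.battery_threshold:
            self.offloaded = True
            return "request_external_tracking"
        if self.offloaded and battery_level > self.battery_threshold:
            self.offloaded = False
            return "resume_local_tracking"
        return "no_change"

handoff = TrackingHandoff()
print(handoff.update(0.15))  # request_external_tracking
print(handoff.update(0.80))  # resume_local_tracking
```

Keeping the offloaded/local state explicit avoids re-sending the same command on every battery reading; a production version might also add hysteresis (distinct offload and resume thresholds) to prevent rapid back-and-forth handoffs near the threshold.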
In response to the command, the external device can stop performing the object tracking task(s) and the HMD 310 can begin or resume performance of the object tracking task(s).[0128] FIG. 9 is a flow diagram illustrating a process 900 for processing image data. The process 900 may be performed by an imaging system. In some examples, the imaging system can be the XR system 210 of FIG. 2. In some examples, the imaging system can include, for example, the image capture and processing system 100, the image capture device 105A, the image processing device 105B, the image processor 150, the ISP 154, the host processor 152, the XR system 210, the processing engine 205, the inter-device negotiation engine 230, the feature management engine 250, the output content generation engine 280, the output device 290, a head-mounted display (HMD) device (e.g., HMD 310), the mobile handset 410, the external HMD device 710, the one or more servers 847, the computing system 1000, or a combination thereof.[0129] At operation 905, the process 900 includes receiving, by a device (e.g., the imaging system), an image of a portion of an environment captured by an image sensor (e.g., an image sensor of the device). The environment includes an object. At operation 910, the process 900 includes identifying a data stream from an external device. Examples of the external device can include the external device 220, the sensors 225 of the external device 220, the HMD 310 of FIG. 3, the mobile handset 410, the external camera 610, the external HMD 710, the display 810, the one or more servers 847, a computing system 1000, or a combination thereof.[0130] At operation 915, the process 900 includes detecting a condition based on the image, the data stream, an operational status of the apparatus, or any combination thereof. In some cases, detecting the condition based on the image includes determining that the object is missing from a portion of the environment in the image.
In one example, determining that the object is missing from the portion of the environment in the image includes determining that at least a part of the object is occluded in the image (e.g., as shown in FIG. 6B). In some cases, detecting the condition based on the operational status of the device includes determining that an availability of a resource is below a threshold. In one example, determining that the availability of the resource is below the threshold includes determining that a battery level of a battery is below a battery level threshold. In another example, determining that the availability of the resource is below the threshold includes determining that an available bandwidth is below a bandwidth threshold. In some cases, detecting the condition based on the operational status of the device includes receiving user input corresponding to offloading processing to the external device. For example, as described above, a user can provide user input (e.g., a gesture input, pressing a virtual or physical button, etc.) to control whether the HMD 310 or an external device performs a particular function.[0131] In some examples, detecting the condition based on the image includes determining one or more lighting conditions in the image (e.g., a low-light condition). In some cases, determining the one or more lighting conditions in the image can include determining that one or more light values of the image are below a lighting threshold (e.g., a lighting threshold of 3 lux).[0132] In some examples, the object is a display of an external display device. In some cases, the process 900 includes detecting the condition based on the image at least in part by identifying, in the image, visual media content displayed on the display of the external display device.[0133] At operation 920, the process 900 includes determining, in response to detecting the condition, a location of the object in the environment based on at least one of the image and the data stream. 
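The condition checks of operation 915 described above can be sketched as a single dispatcher over the condition types. All field names and threshold values here are illustrative assumptions; only the categories of conditions (object missing, low light, low battery, low bandwidth, user-requested offload) come from the text.

```python
# Illustrative sketch of operation 915's condition detection: a
# condition can be raised from the image (object missing, low light),
# from the device's operational status (battery, bandwidth), or from
# user input. Dict keys and default thresholds are assumptions.

def detect_conditions(status, object_visible, lux,
                      battery_threshold=0.20, bandwidth_threshold=1.0,
                      lux_threshold=3.0):
    """Return the list of detected conditions, possibly empty."""
    conditions = []
    if not object_visible:
        conditions.append("object_missing")
    if lux < lux_threshold:
        conditions.append("low_light")
    if status.get("battery", 1.0) < battery_threshold:
        conditions.append("battery_low")
    if status.get("bandwidth_mbps", 100.0) < bandwidth_threshold:
        conditions.append("bandwidth_low")
    if status.get("user_requested_offload", False):
        conditions.append("user_offload")
    return conditions

print(detect_conditions({"battery": 0.10}, object_visible=True, lux=50.0))
# ['battery_low']
```

Returning a list rather than a single flag reflects that operation 915 allows any combination of the condition sources to hold at once; downstream logic (operation 920) can then react to whichever conditions are present.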
In some cases, the external device includes a second image sensor. In some cases, the data stream includes a second image of a second portion of the environment, and determining the location of the object in the environment is based at least in part on a depiction of the object in the second image. In some examples, the portion of the environment in the image and the second portion of the environment overlap.[0134] In some examples, determining the location of the object in the environment includes sending a request for the external device to identify the location of the object in the environment. In some examples, the process 900 can include receiving a response from the external device identifying the location of the object in the environment.[0135] In some examples, in response to detecting the condition, the process 900 can include generating a merged dataset at least by combining data from the data stream with the image captured by the image sensor. In such examples, determining the location of the object can be based at least in part on the merged dataset.[0136] At operation 925, the process 900 includes generating an output based on the location of the object in the environment. In some examples, generating the output includes generating content. In some cases, the process 900 includes outputting the content based on the location of the object in the environment. For instance, outputting the content can include transmitting or sending the content to a display of the device to be displayed. In some examples, the content virtually extends the display of the external display device. In some cases, the process 900 can include sending the content to an audio output device to be played.[0137] In some examples, generating the output includes controlling the device based on a user input.
For instance, the HMD 310 can receive a user input to control the device or the HMD 310 (e.g., a user input requesting to turn an external device such as a light on or off through a home automation application running on the imaging device, a user input requesting the HMD 310 turn off, etc.).[0138] In some examples, generating the output includes generating content at least in part by overlaying virtual content over a region of the image. In such examples, the region of the image is based on the location of the object in the environment. In cases where the object is a display of the external display device, the region of the image is adjacent to a depiction of the display of the external display device in the image. In some examples, the object is a hand of a user of the device, where the hand is at least partially adjacent to the region of the image.[0139] In some examples, the process 900 can include detecting an additional condition based on at least one of an additional image captured by the image sensor, the data stream, and the operational status of the device. In response to detecting the additional condition, the process 900 can include performing a function previously performed by the external device. For instance, the HMD 310 described above can detect an additional condition that can trigger the HMD 310 to perform a function or resume performance of a function that was previously offloaded to an external device (e.g., hand tracking, head tracking, body tracking, etc.).[0140] In some examples, the processes described herein (e.g., process 900 and/or other processes described herein) may be performed by a computing device or apparatus. In one example, the process 900 can be performed by the XR system 210 of FIG. 2. In another example, the process 900 can be performed by a computing device with the computing system 1000 shown in FIG. 10. For instance, a computing device with the computing system 1000 shown in FIG.
10 can include the components of the image processing engine 205 of the XR system 210 and can implement the operations of FIG. 9.[0141] The computing device can include any suitable device, such as a mobile device (e.g., a mobile phone), a desktop computing device, a tablet computing device, a wearable device (e.g., a VR headset, an AR headset, AR glasses, a network-connected watch or smartwatch, or other wearable device), a server computer, an autonomous vehicle or computing device of an autonomous vehicle, a robotic device, a television, and/or any other computing device with the resource capabilities to perform the processes described herein, including the process 900. In some cases, the computing device or apparatus may include various components, such as one or more input devices, one or more output devices, one or more processors, one or more microprocessors, one or more microcomputers, one or more cameras, one or more sensors, and/or other component(s) that are configured to carry out the steps of processes described herein. In some examples, the computing device may include a display, a network interface configured to communicate and/or receive the data, any combination thereof, and/or other component(s). The network interface may be configured to communicate and/or receive Internet Protocol (IP) based data or other type of data.[0142] The components of the computing device can be implemented in circuitry.
For example, the components can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, graphics processing units (GPUs), digital signal processors (DSPs), central processing units (CPUs), and/or other suitable electronic circuits), and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein.[0143] The process 900 is illustrated as a logical flow diagram, the operations of which represent a sequence of operations that can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.[0144] Additionally, the process 900 and/or other processes described herein may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. As noted above, the code may be stored on a computer-readable or machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors.
The computer-readable or machine-readable storage medium may be non-transitory.[0145] FIG. 10 is a diagram illustrating an example of a system for implementing certain aspects of the present technology. In particular, FIG. 10 illustrates an example of computing system 1000, which can be, for example, any computing device making up an internal computing system, a remote computing system, a camera, or any component thereof in which the components of the system are in communication with each other using connection 1005. Connection 1005 can be a physical connection using a bus, or a direct connection into processor 1010, such as in a chipset architecture. Connection 1005 can also be a virtual connection, networked connection, or logical connection.[0146] In some embodiments, computing system 1000 is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple data centers, a peer network, etc. In some embodiments, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some embodiments, the components can be physical or virtual devices.[0147] Example system 1000 includes at least one processing unit (CPU or processor) 1010 and connection 1005 that couples various system components including system memory 1015, such as read-only memory (ROM) 1020 and random access memory (RAM) 1025 to processor 1010. Computing system 1000 can include a cache 1012 of high-speed memory connected directly with, in close proximity to, or integrated as part of processor 1010.[0148] Processor 1010 can include any general purpose processor and a hardware service or software service, such as services 1032, 1034, and 1036 stored in storage device 1030, configured to control processor 1010 as well as a special-purpose processor where software instructions are incorporated into the actual processor design.
Processor 1010 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.[0149] To enable user interaction, computing system 1000 includes an input device 1045, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system 1000 can also include output device 1035, which can be one or more of a number of output mechanisms. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system 1000. Computing system 1000 can include communications interface 1040, which can generally govern and manage the user input and system output. The communication interface may perform or facilitate receipt and/or transmission of wired or wireless communications using wired and/or wireless transceivers, including those making use of an audio jack/plug, a microphone jack/plug, a universal serial bus (USB) port/plug, an Apple® Lightning® port/plug, an Ethernet port/plug, a fiber optic port/plug, a proprietary wired port/plug, a BLUETOOTH® wireless signal transfer, a BLUETOOTH® low energy (BLE) wireless signal transfer, an IBEACON® wireless signal transfer, a radio-frequency identification (RFID) wireless signal transfer, near-field communications (NFC) wireless signal transfer, dedicated short range communication (DSRC) wireless signal transfer, 802.11 Wi-Fi wireless signal transfer, wireless local area network (WLAN) signal transfer, Visible Light Communication (VLC), Worldwide Interoperability for Microwave Access (WiMAX), Infrared (IR) communication wireless signal transfer, Public Switched Telephone Network (PSTN) signal transfer, Integrated Services Digital Network (ISDN) signal transfer, 3G/4G/5G/LTE cellular data network wireless signal
transfer, ad-hoc network signal transfer, radio wave signal transfer, microwave signal transfer, infrared signal transfer, visible light signal transfer, ultraviolet light signal transfer, wireless signal transfer along the electromagnetic spectrum, or some combination thereof. The communications interface 1040 may also include one or more Global Navigation Satellite System (GNSS) receivers or transceivers that are used to determine a location of the computing system 1000 based on receipt of one or more signals from one or more satellites associated with one or more GNSS systems. GNSS systems include, but are not limited to, the US-based Global Positioning System (GPS), the Russia-based Global Navigation Satellite System (GLONASS), the China-based BeiDou Navigation Satellite System (BDS), and the Europe-based Galileo GNSS. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.[0150] Storage device 1030 can be a non-volatile and/or non-transitory and/or computer-readable memory device and can be a hard disk or other types of computer-readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, a floppy disk, a flexible disk, a hard disk, magnetic tape, a magnetic strip/stripe, any other magnetic storage medium, flash memory, memristor memory, any other solid-state memory, a compact disc read only memory (CD-ROM) optical disc, a rewritable compact disc (CD) optical disc, digital video disk (DVD) optical disc, a blu-ray disc (BDD) optical disc, a holographic optical disk, another optical medium, a secure digital (SD) card, a micro secure digital (microSD) card, a Memory Stick® card, a smartcard chip, an EMV chip, a subscriber identity module (SIM) card, a mini/micro/nano/pico SIM card, another
integrated circuit (IC) chip/card, random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash EPROM (FLASHEPROM), cache memory (L1/L2/L3/L4/L5/L#), resistive random-access memory (RRAM/ReRAM), phase change memory (PCM), spin transfer torque RAM (STT-RAM), another memory chip or cartridge, and/or a combination thereof.[0151] The storage device 1030 can include software services, servers, services, etc., that when the code that defines such software is executed by the processor 1010, it causes the system to perform a function. In some embodiments, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 1010, connection 1005, output device 1035, etc., to carry out the function.[0152] As used herein, the term “computer-readable medium” includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction(s) and/or data. A computer-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD), flash memory, memory or memory devices. 
A computer-readable medium may have stored thereon code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted using any suitable means including memory sharing, message passing, token passing, network transmission, or the like.[0153] In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.[0154] Specific details are provided in the description above to provide a thorough understanding of the embodiments and examples provided herein. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. Additional components may be used other than those shown in the figures and/or described herein. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. 
In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.[0155] Individual embodiments may be described above as a process or method which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.[0156] Processes and methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can include, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or a processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, source code, etc.
Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.[0157] Devices implementing processes and methods according to these disclosures can include hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof, and can take any of a variety of form factors. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a computer-readable or machine-readable medium. A processor(s) may perform the necessary tasks. Typical examples of form factors include laptops, smart phones, mobile phones, tablet devices or other small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.[0158] The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are example means for providing the functions described in the disclosure.[0159] In the foregoing description, aspects of the application are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the application is not limited thereto.
Thus, while illustrative embodiments of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art. Various features and aspects of the above-described application may be used individually or jointly. Further, embodiments can be utilized in any number of environments and applications beyond those described herein without departing from the scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. For the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate embodiments, the methods may be performed in a different order than that described.[0160] One of ordinary skill will appreciate that the less than (“<”) and greater than (“>”) symbols or terminology used herein can be replaced with less than or equal to (“≤”) and greater than or equal to (“≥”) symbols, respectively, without departing from the scope of this description.[0161] Where components are described as being “configured to” perform certain operations, such configuration can be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof.[0162] The phrase “coupled to” refers to any component that is physically connected to another component either directly or indirectly, and/or any component that is in communication with another component (e.g., connected to the other component over a wired or wireless connection, and/or other suitable communication interface) either directly or indirectly.
[0163] Claim language or other language reciting “at least one of” a set and/or “one or more” of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim. For example, claim language reciting “at least one of A and B” means A, B, or A and B. In another example, claim language reciting “at least one of A, B, and C” means A, B, C, or A and B, or A and C, or B and C, or A and B and C. The language “at least one of” a set and/or “one or more” of a set does not limit the set to the items listed in the set. For example, claim language reciting “at least one of A and B” can mean A, B, or A and B, and can additionally include items not listed in the set of A and B.[0164] The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, firmware, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.[0165] The techniques described herein may also be implemented in electronic hardware, computer software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices such as general purpose computers, wireless communication device handsets, or integrated circuit devices having multiple uses including application in wireless communication device handsets and other devices.
Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium comprising program code including instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise memory or data storage media, such as random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer, such as propagated signals or waves.[0166] The program code may be executed by a processor, which may include one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general purpose processor may be a microprocessor; but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or apparatus suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured for encoding and decoding, or incorporated in a combined video encoder-decoder (CODEC).

[0167] Illustrative aspects of the disclosure include:

[0168] Aspect 1: An apparatus for processing image data, the apparatus comprising at least one memory and one or more processors coupled to the memory. The one or more processors are configured to: receive an image of a portion of an environment captured by an image sensor, wherein the environment includes an object; identify a data stream from an external device; detect a condition based on at least one of the image, the data stream, and an operational status of the apparatus; in response to detecting the condition, determine a location of the object in the environment based on at least one of the image and the data stream; and generate an output based on the location of the object in the environment.

[0169] Aspect 2: The apparatus of Aspect 1, wherein, to detect the condition based on the image, the one or more processors are configured to determine that the object is missing from a portion of the environment in the image.

[0170] Aspect 3: The apparatus of Aspect 2, wherein, to determine that the object is missing from the portion of the environment in the image, the one or more processors are configured to determine that at least a part of the object is occluded in the image.

[0171] Aspect 4: The
apparatus of any of Aspects 2 or 3, wherein the external device includes a second image sensor, wherein the data stream includes a second image of a second portion of the environment, and wherein determining the location of the object in the environment is based at least in part on a depiction of the object in the second image.

[0172] Aspect 5: The apparatus of Aspect 4, wherein the portion of the environment and the second portion of the environment overlap.

[0173] Aspect 6: The apparatus of any of Aspects 1 to 5, wherein, to detect the condition based on the operational status of the apparatus, the one or more processors are configured to determine that an availability of a resource is below a threshold.

[0174] Aspect 7: The apparatus of Aspect 6, wherein, to determine that the availability of the resource is below the threshold, the one or more processors are configured to determine that a battery level of a battery is below a battery level threshold.

[0175] Aspect 8: The apparatus of any of Aspects 6 or 7, wherein, to determine that the availability of the resource is below the threshold, the one or more processors are configured to determine that an available bandwidth is below a bandwidth threshold.

[0176] Aspect 9: The apparatus of any of Aspects 1 to 8, wherein, to detect the condition based on the operational status of the apparatus, the one or more processors are configured to receive user input corresponding to offloading processing to the external device.
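Aspects 6 through 9 above describe detecting the condition from the apparatus's operational status: resource availability below a threshold (battery level, available bandwidth) or explicit user input requesting offload. A minimal sketch of such a check follows; all names and threshold values are hypothetical illustrations, not the patented implementation.

```python
# Hypothetical sketch of the condition check in Aspects 6-9: an offload
# condition is detected when a resource availability (battery level or
# available bandwidth) falls below its threshold, or when the user
# explicitly requests offloading. Names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class OperationalStatus:
    battery_level: float         # 0.0 .. 1.0
    available_bandwidth: float   # Mbit/s
    user_requested_offload: bool = False

def detect_offload_condition(status: OperationalStatus,
                             battery_threshold: float = 0.2,
                             bandwidth_threshold: float = 5.0) -> bool:
    """Return True when processing should be offloaded to the external device."""
    if status.user_requested_offload:                     # Aspect 9
        return True
    if status.battery_level < battery_threshold:          # Aspect 7
        return True
    if status.available_bandwidth < bandwidth_threshold:  # Aspect 8
        return True
    return False

print(detect_offload_condition(OperationalStatus(0.15, 20.0)))  # True (low battery)
print(detect_offload_condition(OperationalStatus(0.80, 20.0)))  # False
```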
[0177] Aspect 10: The apparatus of any of Aspects 1 to 9, wherein, to generate the output, the one or more processors are configured to generate content.

[0178] Aspect 11: The apparatus of Aspect 10, wherein the one or more processors are configured to: output the content based on the location of the object in the environment.

[0179] Aspect 12: The apparatus of Aspect 11, further comprising: a display; wherein, to output the content, the one or more processors are configured to send the content to the display to be displayed.

[0180] Aspect 13: The apparatus of any of Aspects 1 to 12, wherein the one or more processors are configured to: detect an additional condition based on at least one of an additional image captured by the image sensor, the data stream, and the operational status of the apparatus; and in response to detecting the additional condition, perform a function previously performed by the external device.

[0181] Aspect 14: The apparatus of any of Aspects 1 to 13, wherein, to generate the output, the one or more processors are configured to: control the apparatus based on a user input.

[0182] Aspect 15: The apparatus of any of Aspects 1 to 14, wherein, to detect the condition based on the image, the one or more processors are configured to determine one or more lighting conditions in the image.

[0183] Aspect 16: The apparatus of Aspect 15, wherein, to determine the one or more lighting conditions in the image, the one or more processors are configured to determine that one or more light values of the image are below a lighting threshold.

[0184] Aspect 17: The apparatus of any of Aspects 1 to 16, wherein, to determine the location of the object in the environment, the one or more processors are configured to: send a request for the external device to identify the location of the object in the environment; and receive a response from the external device identifying the location of the object in the environment.

[0185] Aspect 18: The apparatus of any of Aspects 1
to 17, wherein the object is a display of an external display device.

[0186] Aspect 19: The apparatus of Aspect 18, wherein, to detect the condition based on the image, the one or more processors are configured to identify, in the image, visual media content displayed on the display of the external display device.

[0187] Aspect 20: The apparatus of any of Aspects 18 or 19, wherein, to generate the output, the one or more processors are configured to generate content, and wherein the content virtually extends the display of the external display device.

[0188] Aspect 21: The apparatus of any of Aspects 1 to 20, wherein, to generate the output, the one or more processors are configured to: generate content at least in part by overlaying virtual content over a region of the image, wherein the region of the image is based on the location of the object in the environment.

[0189] Aspect 22: The apparatus of Aspect 21, wherein the object is a display of an external display device, and wherein the region of the image is adjacent to a depiction of the display of the external display device in the image.

[0190] Aspect 23: The apparatus of Aspect 21, wherein the object is a hand of a user of the apparatus, and wherein the hand is at least partially adjacent to the region of the image.

[0191] Aspect 24: The apparatus of any of Aspects 1 to 21, wherein the object is visual content displayed on the display.

[0192] Aspect 25: The apparatus of any of Aspects 1 to 21, wherein the object is a head of a user of the apparatus.
[0193] Aspect 26: The apparatus of any of Aspects 1 to 21, wherein the object is a body of a user of the apparatus.

[0194] Aspect 27: The apparatus of any of Aspects 1 to 26, wherein the one or more processors are further configured to: in response to detecting the condition, generate a merged dataset at least by combining data from the data stream with the image captured by the image sensor, wherein determining the location of the object is based at least in part on the merged dataset.

[0195] Aspect 28: The apparatus of any of Aspects 1 to 27, wherein the apparatus is a head-mounted display (HMD).

[0196] Aspect 29: The apparatus of any of Aspects 1 to 28, further comprising: an audio output device; wherein, to generate the output, the one or more processors are configured to generate content; and wherein the one or more processors are configured to send the content to the audio output device to be played.

[0197] Aspect 30: A method for processing image data, comprising: receiving an image of a portion of an environment captured by an image sensor, wherein the environment includes an object; identifying, by a device, a data stream from an external device; detecting a condition based on at least one of the image, the data stream, and an operational status of the device; in response to detecting the condition, determining a location of the object in the environment based on at least one of the image and the data stream; and generating an output based on the location of the object in the environment.

[0198] Aspect 31: The method of Aspect 30, wherein detecting the condition based on the image includes determining that the object is missing from a portion of the environment in the image.
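Aspect 27 describes generating a merged dataset by combining data from the external data stream with the locally captured image, and resolving the object's location from the merged result. A toy sketch of one way such a merge could work, under the assumption that each source reports per-object (location, confidence) observations; all names are illustrative, not the patented implementation.

```python
# Illustrative sketch of Aspect 27: detections carried by the external
# data stream are merged with detections from the locally captured image,
# and the object location is resolved from the merged dataset, keeping
# the more confident observation for each object. All names hypothetical.

def merge_datasets(local_detections, stream_detections):
    """Combine per-object observations: {object_id: (location, confidence)}."""
    merged = dict(local_detections)
    for obj_id, (loc, conf) in stream_detections.items():
        if obj_id not in merged or conf > merged[obj_id][1]:
            merged[obj_id] = (loc, conf)
    return merged

local = {"hand": ((0.4, 0.7), 0.9)}            # visible to the local image sensor
stream = {"display": ((0.1, 0.2), 0.8),        # occluded locally, seen externally
          "hand": ((0.5, 0.7), 0.3)}           # weaker external observation
merged = merge_datasets(local, stream)
print(merged["display"][0])  # located via the data stream: (0.1, 0.2)
print(merged["hand"][1])     # higher-confidence local observation kept: 0.9
```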
[0199] Aspect 32: The method of Aspect 31, wherein determining that the object is missing from the portion of the environment in the image includes determining that at least a part of the object is occluded in the image.

[0200] Aspect 33: The method of any of Aspects 31 or 32, wherein the external device includes a second image sensor, wherein the data stream includes a second image of a second portion of the environment, and wherein determining the location of the object in the environment is based at least in part on a depiction of the object in the second image.

[0201] Aspect 34: The method of Aspect 33, wherein the portion of the environment and the second portion of the environment overlap.

[0202] Aspect 35: The method of any of Aspects 30 to 34, wherein detecting the condition based on the operational status of the device includes determining that an availability of a resource is below a threshold.

[0203] Aspect 36: The method of Aspect 35, wherein determining that the availability of the resource is below the threshold includes determining that a battery level of a battery is below a battery level threshold.

[0204] Aspect 37: The method of any of Aspects 35 or 36, wherein determining that the availability of the resource is below the threshold includes determining that an available bandwidth is below a bandwidth threshold.

[0205] Aspect 38: The method of any of Aspects 30 to 37, wherein detecting the condition based on the operational status of the device includes receiving user input corresponding to offloading processing to the external device.

[0206] Aspect 39: The method of any of Aspects 30 to 38, wherein generating the output includes generating content.

[0207] Aspect 40: The method of Aspect 39, further comprising outputting the content based on the location of the object in the environment.
[0208] Aspect 41: The method of Aspect 40, wherein outputting the content includes sending the content to a display of the device to be displayed.

[0209] Aspect 42: The method of any of Aspects 30 to 41, further comprising: detecting an additional condition based on at least one of an additional image captured by the image sensor, the data stream, and the operational status of the device; and in response to detecting the additional condition, performing a function previously performed by the external device.

[0210] Aspect 43: The method of any of Aspects 30 to 42, wherein generating the output includes controlling the device based on a user input.

[0211] Aspect 44: The method of any of Aspects 30 to 43, wherein detecting the condition based on the image includes determining one or more lighting conditions in the image.

[0212] Aspect 45: The method of Aspect 44, wherein determining the one or more lighting conditions in the image includes determining that one or more light values of the image are below a lighting threshold.

[0213] Aspect 46: The method of any of Aspects 30 to 45, wherein determining the location of the object in the environment includes: sending a request for the external device to identify the location of the object in the environment; and receiving a response from the external device identifying the location of the object in the environment.

[0214] Aspect 47: The method of any of Aspects 30 to 46, wherein the object is a display of an external display device.

[0215] Aspect 48: The method of Aspect 47, wherein detecting the condition based on the image includes identifying, in the image, visual media content displayed on the display of the external display device.
[0216] Aspect 49: The method of any of Aspects 47 or 48, wherein generating the output includes generating content, and wherein the content virtually extends the display of the external display device.

[0217] Aspect 50: The method of any of Aspects 30 to 49, wherein generating the output includes: generating content at least in part by overlaying virtual content over a region of the image, wherein the region of the image is based on the location of the object in the environment.

[0218] Aspect 51: The method of Aspect 50, wherein the object is a display of an external display device, and wherein the region of the image is adjacent to a depiction of the display of the external display device in the image.

[0219] Aspect 52: The method of Aspect 50, wherein the object is a hand of a user of the device, and wherein the hand is at least partially adjacent to the region of the image.

[0220] Aspect 53: The method of any of Aspects 30 to 50, wherein the object is visual content displayed on the display.

[0221] Aspect 54: The method of any of Aspects 30 to 50, wherein the object is a head of a user of the device.

[0222] Aspect 55: The method of any of Aspects 30 to 50, wherein the object is a body of a user of the device.

[0223] Aspect 56: The method of any of Aspects 30 to 55, further comprising: in response to detecting the condition, generating a merged dataset at least by combining data from the data stream with the image captured by the image sensor, wherein determining the location of the object is based at least in part on the merged dataset.

[0224] Aspect 57: The method of any of Aspects 30 to 56, wherein generating the output includes generating content, and further comprising sending the content to an audio output device to be played.
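Aspects 50 through 52 describe generating content by overlaying virtual content on an image region derived from the object's location (for example, a region adjacent to a detected display). A toy sketch under those assumptions, using nested lists of pixel values in place of a real image buffer; all names and coordinates are hypothetical.

```python
# Toy sketch of Aspects 50-52: virtual content is overlaid on a region of
# the image chosen from the object's location (here, the region immediately
# to the right of the object, as for content that extends a display).
# Images are nested lists of pixel values; everything here is illustrative.

def overlay_region(image, virtual, top_left):
    """Copy `virtual` into `image` with its top-left corner at `top_left`."""
    row0, col0 = top_left
    for r, row in enumerate(virtual):
        for c, px in enumerate(row):
            image[row0 + r][col0 + c] = px
    return image

image = [[0] * 6 for _ in range(4)]        # 4x6 image of zeros
object_location = (1, 1)                   # e.g., a display detected at row 1, col 1
object_width = 2
region = (object_location[0], object_location[1] + object_width)  # adjacent region
virtual = [[9, 9], [9, 9]]                 # virtual content extending the display
overlay_region(image, virtual, region)
print(image[1])  # [0, 0, 0, 9, 9, 0]
```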
[0225] Aspect 58: A computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations according to any of Aspects 1 to 57.

[0226] Aspect 59: An apparatus comprising means for performing operations according to any of Aspects 1 to 57.

[0227] Aspect 60: An apparatus for processing image data, the apparatus comprising at least one memory and one or more processors coupled to the memory. The one or more processors are configured to: receive an image of a portion of an environment captured by an image sensor, wherein the environment includes an object; detect a condition regarding an availability of a resource; in response to detecting the condition, determine a location of at least a part of the object in the environment based on at least a data stream from a device; and output content that is based on the location of at least the part of the object in the environment.

[0228] Aspect 61: The apparatus of Aspect 60, wherein, to detect the condition, the one or more processors are configured to determine that the availability of the resource is below a threshold.
[0229] Aspect 62: The apparatus of Aspect 61, wherein, to determine that the availability of the resource is below the threshold, the one or more processors are configured to determine that a battery level of a battery is below a battery level threshold.

[0230] Aspect 63: The apparatus of any of Aspects 61 or 62, wherein, to determine that the availability of the resource is below the threshold, the one or more processors are configured to determine that an available bandwidth is below a bandwidth threshold.

[0231] Aspect 64: The apparatus of any of Aspects 60 to 63, wherein, to determine the location of at least the part of the object in the environment, the one or more processors are configured to: send a request for the device to identify the location of at least the part of the object in the environment; and receive a response from the device identifying the location of at least the part of the object in the environment.

[0232] Aspect 65: The apparatus of any of Aspects 60 to 64, wherein the one or more processors are further configured to: generate the content at least in part by overlaying virtual content over a region of the image, wherein the region of the image is based on the location of at least the part of the object in the environment.

[0233] Aspect 66: The apparatus of Aspect 65, wherein the object is a hand of a user of the apparatus, and wherein the hand is at least partially adjacent to the region of the image.
[0234] Aspect 67: The apparatus of any of Aspects 60 to 66, wherein the one or more processors are further configured to: in response to detecting the condition, generate a merged dataset at least by merging data from the data stream with the image captured by the image sensor, wherein determining the location of at least the part of the object is based on the merged dataset.

[0235] Aspect 68: The apparatus of any of Aspects 60 to 67, wherein the apparatus is a head-mounted display (HMD).

[0236] Aspect 69: The apparatus of any of Aspects 60 to 68, further comprising: a display, wherein, to output the content, the one or more processors are configured to send the content to the display to be displayed by the display.

[0237] Aspect 70: The apparatus of any of Aspects 60 to 69, further comprising: an audio output device, wherein, to output the content, the one or more processors are configured to send the content to the audio output device to be played by the audio output device.

[0238] Aspect 71: A method of processing image data, comprising operations according to any of Aspects 60 to 70.

[0239] Aspect 72: A computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations according to any of Aspects 60 to 70.

[0240] Aspect 73: An apparatus comprising means for performing operations according to any of Aspects 60 to 70.

[0241] Aspect 74: An apparatus for processing image data, the apparatus comprising at least one memory and one or more processors coupled to the memory.
The one or more processors are configured to: receive an image of a portion of an environment captured by an image sensor, wherein the environment includes an object; detect a condition based on the image; in response to detecting the condition, generate content based on at least a data stream from a device; and output the content based on a location of at least a part of the object in the environment.

[0242] Aspect 75: The apparatus of Aspect 74, wherein, to detect the condition, the one or more processors are configured to determine that the object is missing from a portion of the environment in the image.

[0243] Aspect 76: The apparatus of Aspect 74, wherein the object is a display of an external device.

[0244] Aspect 77: The apparatus of Aspect 76, wherein, to detect the condition, the one or more processors are configured to identify, in the image, a depiction of visual media content displayed on the display of the external device.

[0245] Aspect 78: The apparatus of Aspect 76, wherein, to detect the condition, the one or more processors are configured to detect a presence of the display in the proximity of the apparatus.

[0246] Aspect 79: The apparatus of Aspect 76, wherein the one or more processors are further configured to generate a direction indicator pointing toward the position of the display.

[0247] Aspect 80: The apparatus of any of Aspects 76 to 79, wherein the content virtually extends the display of the external device.

[0248] Aspect 81: The apparatus of any of Aspects 74 to 80, wherein the one or more processors are configured to: generate the content at least in part by overlaying virtual content over a region of the image, wherein the region of the image is based on the location of at least the part of the object in the environment.

[0249] Aspect 82: The apparatus of Aspect 81, wherein the object is a display of an external device, and wherein the region of the image is adjacent to a depiction of the display of the external device in the image.
[0250] Aspect 83: The apparatus of any of Aspects 74 to 82, wherein the one or more processors are configured to: in response to detecting the condition, generate a merged dataset at least by merging data from the data stream with the image captured by the image sensor, wherein the content is generated based on the merged dataset.

[0251] Aspect 84: The apparatus of any of Aspects 74 to 83, wherein the apparatus is a head-mounted display (HMD).

[0252] Aspect 85: The apparatus of any of Aspects 74 to 84, further comprising: a display, wherein, to output the content, the one or more processors are configured to send the content to the display to be displayed by the display.

[0253] Aspect 86: The apparatus of any of Aspects 74 to 85, further comprising: an audio output device, wherein, to output the content, the one or more processors are configured to send the content to the audio output device to be played by the audio output device.

[0254] Aspect 87: A method of processing image data, comprising operations according to any of Aspects 74 to 86.

[0255] Aspect 88: A computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations according to any of Aspects 74 to 86.

[0256] Aspect 89: An apparatus comprising means for performing operations according to any of Aspects 74 to 86.
Provided are a method, system, and program for handling Input/Output (I/O) requests. A bus enables communication with an initiator, a target device, and a device controller, wherein the device controller accesses the target device to execute I/O commands directed to the target device. An I/O request command is received to access the target device. The initiator is configured to transmit at least one data request on the bus to one memory address in a predefined address window of the device controller. The device controller is enabled to claim the data request to the memory address in the predefined address window from the initiator on the bus to execute the data request against the target device.
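The address-window mechanism in this abstract can be sketched in software as follows. The names, window base, and window size are hypothetical, and on real hardware the claiming is performed by the bus address decoders rather than by a method call; this is only an illustrative model.

```python
# Hypothetical software model of the address-window claiming described
# above. The controller "claims" any bus request whose address falls
# inside its predefined window and executes it against the target device.
# On real hardware, claiming is done by address-decoding logic on the bus.

WINDOW_BASE = 0xF000_0000   # illustrative window placement
WINDOW_SIZE = 0x1000

class DeviceController:
    def __init__(self, target):
        self.target = target  # target device, modeled here as a list

    def claims(self, addr: int) -> bool:
        """True when the bus address falls inside this controller's window."""
        return WINDOW_BASE <= addr < WINDOW_BASE + WINDOW_SIZE

    def execute(self, addr: int, write_data=None):
        """Execute a claimed data request against the target device."""
        offset = addr - WINDOW_BASE
        if write_data is None:                 # read request
            return self.target[offset]
        self.target[offset] = write_data       # write request

controller = DeviceController(target=[0] * WINDOW_SIZE)
addr = WINDOW_BASE + 0x10                      # an address inside the window
if controller.claims(addr):                    # controller claims the request
    controller.execute(addr, write_data=0xAB)  # write executed against target
print(hex(controller.execute(addr)))           # read back: 0xab
```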
1. A method for processing input/output (I/O) requests, wherein a bus enables communication with an initiator, a target device, and a device controller, and wherein the device controller accesses the target device to execute I/O commands directed to the target device, the method comprising:
receiving an I/O request command to access the target device; and
configuring the initiator to send at least one data request on the bus to a memory address in a predetermined address window of the device controller, wherein the device controller is enabled to claim, from the initiator on the bus, the data request directed to the memory address in the predetermined address window in order to execute the data request against the target device, wherein the data request is sent in a burst mode, and wherein, when processing the data request sent in the burst mode, the device controller operates as a slave device.

2. The method of claim 1, wherein the initiator includes a direct memory access (DMA) engine, and wherein configuring the initiator to send the at least one data request includes configuring the DMA engine to send the at least one data request.

3. The method of claim 2, wherein the device controller includes a DMA engine, the method further comprising:
using a processor to configure the device controller to disable its DMA engine when processing the data request sent by the initiator over the bus.

4. The method of claim 1, wherein the received I/O command includes a write operation, and wherein the initiator uses the data request to send write data to a random memory address in the address window.

5. The method of claim 1, wherein the received I/O command includes a read operation to read data from the target device, and wherein the initiator sends a read request to the address window, the method further comprising:
using a processor to configure the device controller to access the data from the target device.

6. The method of claim 1, wherein the data request is
sent to a random memory address in the address window, and wherein the data request sequentially accesses data at the target device.

7. A method for processing input/output (I/O) commands, wherein a bus enables communication with an initiator and a target device, the method comprising:
using a device controller to detect a data request directed to a memory address in an address window used to address the target device, wherein the device controller controls access to the target device, and wherein the data request is sent in a burst mode;
using the device controller to claim the data request sent by the initiator on the bus; and
using the device controller to execute the data request, wherein the device controller operates as a slave device when processing the data request sent in the burst mode.

8. The method of claim 7, wherein the device controller includes a direct memory access (DMA) engine, wherein the initiator sends the data request to the device controller, and wherein the device controller DMA engine is disabled while the device controller processes the data request.

9. The method of claim 7, wherein the I/O command includes a write operation, and wherein the initiator uses at least one data request to send write data to the address window, the method further comprising:
using the device controller to process a data request directed to the address window as a write operation;
using the device controller to store, in a buffer, the write data received from the initiator with the data request; and
transferring the write data from the buffer to the target device.

10. The method of claim 7, wherein the I/O command includes a read operation to read data from the target device, the method further comprising:
using the device controller to process the data request directed to the address window as a read operation; and
using the device controller to return the requested data from the target device to the initiator.

11. The method of claim 10, further
comprising:
storing the data accessed from the target device in a buffer, wherein the buffered data is returned in response to a data request directed to any memory address in the address window.

12. The method of claim 11, wherein the device controller manages the buffer as a first-in first-out (FIFO) queue, and wherein, in response to a data request directed to any memory address in the address window, data is returned from the buffer based on the ordering of the data in the FIFO queue.

13. The method of claim 11, wherein the address window is a non-prefetchable region.

14. The method of claim 7, wherein the initiator includes a network adapter, the device controller includes a disk controller, and the target device includes at least one storage disk.

15. The method of claim 14, wherein the bus uses the PCI-X protocol.

16. A system for processing input/output (I/O) requests, wherein a bus enables communication with an initiator, a target device, and a device controller, and wherein the device controller accesses the target device to execute I/O commands directed to the target device, the system comprising:
a processor; and
code executed by the processor to cause the processor to:
(i) receive an I/O request command to access the target device; and
(ii) configure the initiator to send at least one data request on the bus to a memory address in a predetermined address window of the device controller, wherein the device controller is enabled to claim, from the initiator on the bus, the data request directed to the memory address in the predetermined address window in order to execute the data request against the target device, wherein the data request is sent in a burst mode, and wherein, when processing the data request sent in the burst mode, the device controller operates as a slave device.

17. The system of claim 16, wherein the received I/O command includes a write operation, and wherein the initiator uses the data request to send write data to a random memory
address in the address window.

18. The system of claim 16, wherein the received I/O command includes a read operation to read data from the target device, and wherein the initiator sends a read request to the address window, the system further comprising:
the processor configuring the device controller to access the data from the target device.

19. The system of claim 16, wherein the data request is sent to a random memory address in the address window, and wherein the data request sequentially accesses data at the target device.

20. A system for processing I/O requests, comprising:
a bus;
an initiator coupled to the bus;
a device controller coupled to the bus;
a target device, wherein the device controller provides access to the target device;
a processor coupled to the bus; and
code executed by the processor to cause the processor to:
(i) receive an I/O request command to access the target device; and
(ii) configure the initiator to send at least one data request on the bus to a memory address in a predetermined address window of the device controller, wherein the device controller is enabled to claim, from the initiator on the bus, the data request directed to the memory address in the predetermined address window in order to execute the data request against the target device, wherein the data request is sent in a burst mode, and wherein, when processing the data request sent in the burst mode, the device controller operates as a slave device.

21. The system of claim 20, wherein the received I/O command includes a write operation, and wherein the initiator uses the data request to send write data to a random memory address in the address window.

22. The system of claim 20, wherein the received I/O command includes a read operation to read data from the target device, wherein the initiator sends a read request to the address window, and wherein the code further causes the processor to complete the following operations: The
device controller is configured to access the data from the target device.

23. The system of claim 20, wherein the data request is sent to a random memory address in the address window, and wherein the data request sequentially accesses data at the target device.

24. An apparatus for processing input/output (I/O) requests, wherein a bus enables communication with an initiator, a target device, and a device controller, and wherein the device controller accesses the target device to execute I/O commands directed to the target device, the apparatus comprising:
means for receiving an I/O request command to access the target device; and
means for configuring the initiator to send at least one data request on the bus to a memory address in a predetermined address window of the device controller, wherein the device controller is enabled to claim, from the initiator on the bus, the data request directed to the memory address in the predetermined address window in order to execute the data request against the target device, wherein the data request is sent in a burst mode, and wherein, when processing the data request sent in the burst mode, the device controller operates as a slave device.

25. The apparatus of claim 24, wherein the initiator includes a direct memory access (DMA) engine, and wherein the means for configuring the initiator to send the at least one data request includes means for configuring the DMA engine to send the at least one data request.

26. The apparatus of claim 25, wherein the device controller includes a DMA engine, the apparatus further comprising:
means for configuring the device controller to disable its DMA engine when processing the data request sent by the initiator over the bus.

27. The apparatus of claim 24, wherein the received I/O command includes a write operation, and wherein the initiator uses the data request to send write data to a random memory address in the address window.

28. The apparatus of claim 24, wherein the
received I/O command includes a read operation to read data from the target device, wherein the initiator sends a read request to the address window, the apparatus further including:
means for configuring the device controller to access the data from the target device.

29. The apparatus of claim 24, wherein the data request is sent to a random memory address in the address window, and wherein the data request sequentially accesses data at the target device.

30. An apparatus for processing input/output (I/O) commands, wherein a bus enables communication with an initiator and a target device, the apparatus including:
means for detecting a data request directed to a memory address in an address window used to address the target device, wherein a device controller controls access to the target device, and wherein the data request is sent in a burst mode;
means for claiming the data request, the data request being sent by the initiator on the bus; and
means for executing the data request, wherein, when processing the data request sent in a burst mode, the device controller operates as a slave device.

31. The apparatus of claim 30, wherein the I/O command includes a write operation, wherein the initiator sends write data to the address window using at least one data request, the apparatus further including:
means for processing a data request directed to the address window as a write operation;
means for storing in a buffer the write data received from the initiator using the data request; and
means for transferring the write data from the buffer to the target device.

32. The apparatus of claim 30, wherein the I/O command includes a read operation to read data from the target device, the apparatus further including:
means for processing a data request directed to the address window as a read operation; and
means for returning the requested data from the target device to the initiator.

33. The apparatus of claim 32, 
further comprising:
means for storing data accessed from the target device in a buffer, wherein the means for storing data accessed from the target device in the buffer includes means for returning the buffered data in response to a data request directed to any memory address in the address window.

34. The apparatus of claim 33, wherein the device controller manages the buffer as a first-in first-out (FIFO) queue, and wherein the apparatus includes means for returning data from the buffer, based on the ordering of data in the FIFO queue, in response to a data request directed to any memory address in the address window.

35. The apparatus of claim 34, wherein the address window is a non-prefetchable area.

36. The apparatus of claim 30, wherein the initiator includes a network adapter, the device controller includes a disk controller, and the target device includes at least one storage disk.

37. The apparatus of claim 36, wherein the bus uses the PCI-X protocol.
Method, system and program for processing input/output commands

Technical Field

The invention relates to a method, system and program for processing input/output commands.

Background

FIG. 1 illustrates a prior art storage device architecture in which an external bus master 2 can access, over a peripheral component interconnect (PCI) bus 10 and through a serial advanced technology attachment (SATA) controller 8, data in one or more disks 4, 6, where the external bus master 2 is, for example, a network adapter (e.g., a Fibre Channel controller, Ethernet controller, etc.), and the PCI bus 10 may use the PCI protocol or the PCI-X protocol. In the prior art system, data transferred between the external bus master 2 and the SATA controller 8 generally first flows through a memory controller 12 and a memory 14, such as a synchronous dynamic random access memory (SDRAM). For example, when the external bus master 2 wants to write data to the disks 4, 6, the external bus master 2 first transfers the data to the memory 14. The SATA controller 8 then reads the data sent to the memory 14 in the write request and writes the data to the disks 4 and 6. For read operations, the SATA controller 8 generally transfers the requested read data to the memory 14, and the external bus master 2 accesses the read data from the memory 14. The controllers 2 and 8 may each include a direct memory access (DMA) engine that performs the actual data movement operations between the two through the memory 14.

In addition, in prior art PCI-X systems, the memory buffer 14 is what makes read bursts and write bursts between the external bus master 2 and the SATA controller 8 possible, because current SATA controllers must act as the bus master to handle burst data transmission.
Further details of the PCI and PCI-X protocols are described in the publications "PCI Local Bus Specification", Revision 2.3 (PCI Special Interest Group, March 2002) and "PCI-X Addendum to the PCI Local Bus Specification", Revision 1.0a (PCI Special Interest Group, July 2000).

Using the memory 14 component to buffer the data transferred between the controllers 2 and 8 creates additional latency and delay, because additional read and write operations are introduced when the memory 14 is used as an intermediate buffer. For these reasons, there is a need in the art for an improved technique for transferring data between controllers in a bus architecture.

Brief Description of the Drawings

Reference is now made to the drawings, in which the same reference numerals represent corresponding parts throughout:

FIG. 1 illustrates a bus architecture for accessing data in a storage device known in the art;

FIG. 2 illustrates a bus architecture for accessing data in a storage device according to an embodiment of the present invention;

FIG. 3 illustrates a configuration register of a disk controller according to an embodiment of the present invention;

FIGS. 4 and 5 illustrate the logic for processing I/O requests according to an embodiment of the present invention;

FIG. 6 illustrates a register configured according to an embodiment of the present invention;

FIG. 7 illustrates the logic to configure the device controller according to an embodiment of the present invention;

FIG. 8 illustrates the logic for configuring an external bus master according to an embodiment of the present invention;

FIG. 9 illustrates a direct memory access (DMA) descriptor table used by embodiments of the present invention;

FIG. 10 illustrates the logic for processing the DMA descriptor table according to an embodiment of the present invention;

FIG. 11 illustrates an alternative bus architecture for accessing data in a storage device according to an embodiment of the present invention;

FIG. 
12 illustrates the fields in a request that are queued during read request processing according to an embodiment of the present invention;

FIG. 13 illustrates the logic to return data in response to a read request according to an embodiment of the present invention; and

FIG. 14 illustrates an example of how read requests are processed according to an embodiment of the present invention.

Detailed Description

In the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the present invention. It is understood that other embodiments may be used, and structural and operational changes may be made, without departing from the scope of the present invention.

Direct transfer of data requests between the external bus master and the disk controller

FIG. 2 illustrates a computing environment in which various aspects of the invention can be implemented. A storage system 50, such as a disk array (e.g., a just-a-bunch-of-disks (JBOD) enclosure, a redundant array of independent disks (RAID) array, etc.), includes an external bus master 52, which is an external device capable of initiating memory requests to a disk controller 54, and the disk controller 54 manages access to disks 60a ... 60n. The external bus master 52 includes a direct memory access (DMA) engine 56. The disk controller 54 includes a direct memory access (DMA) engine 58 for processing I/O operations directed to the controller 54. The disk controller 54 may implement a disk access protocol such as SATA, ATA, small computer system interface (SCSI), integrated drive electronics (IDE), and the like. The disk controller 54 enables access to one or more disk drives 60a ... 60n. The disk controller 54 includes a buffer 64 in which data read from the disks 60a ... 60n, and write data to be transmitted to the disks 60a ... 60n, are buffered before being transferred onward. The disk controller 54 may include components for writing data in the buffer 64 to the disks 60a ... 
60n, and in an embodiment where the disk controller 54 includes a SATA controller, this component is, for example, a serial engine. The external bus master 52 may include a network adapter that receives, from devices on a network, I/O requests directed to the disks 60a ... 60n.

An I/O processor 70 (such as the Intel IQ80310 processor) manages system operations and programs the I/O controller DMA engine 56 to read and write data at specified addresses and to perform other I/O management-related operations. In some embodiments, the I/O processor 70 is connected to a PCI bus 72 to execute I/O commands received from a host processor; the PCI bus 72 enables communication among the external bus master 52, the disk controller 54, and the I/O processor 70. The external bus master 52, the disk controller 54, and the I/O processor 70 may be implemented on one or more PCI add-on cards that communicate with each other via the bus 72. For example, the I/O processor 70 and the disk controller 54 may be implemented on the same PCI card, and the external bus master 52 may be implemented on a different PCI card, such as a network adapter card. The bus 72 may follow the PCI protocol, the PCI-X protocol, or other communication protocols known in the art. More details of the PCI-X protocol are described in the publication "PCI-X Specification, Revision 1.0a" published by the PCI SIG.

In an embodiment where the external bus master 52 includes a network adapter card (such as a Fibre Channel adapter), the I/O processor 70 can receive I/O commands through the adapter and then configure the external bus master 52 and the disk controller 54 to transfer data as described below.

In some embodiments, the disk controller 54 is configured with an address window comprising a range of addresses that can be randomly accessed and can be used to transfer data directly between the external bus master 52 and the disk controller buffer 64. 
The address window is a range of addresses such that a request to any address within the range causes the disk controller 54 to claim the request on the bus 72 and respond directly to the external bus master 52. The external bus master DMA 56 can use the addresses in the address window randomly or sequentially, and can thus push and pull data to and from the disks by accessing the memory space in the address window. In addition, for any request, the DMA 56 may use any address in the window to send the request to the disk controller 54. The DMA engine 56 in the external bus master 52 may be configured by the I/O processor 70 to interface directly with the disk controller 54 using the addresses in the address window.

FIG. 3 illustrates the configuration register 80 of the disk controller 54, which the I/O processor 70 writes in order to configure the controller for I/O operations on the bus 72. The settings that can be written to the configuration register 80 by the I/O processor include:

Address window 82: a range of addresses that an initiator such as the external bus master 52 can use to communicate directly with the disk controller 54 and the buffer 64 therein.

DMA mode 84: indicates whether the controller's DMA engine is used for I/O operations.

Read/write operation (OP) 86: indicates whether a received request will be processed as a read operation or a write operation on the disks 60a ... 60n.

Burst slave mode 88: indicates whether the controller will operate in burst slave mode, enabling the disk controller to respond to burst memory requests from the external bus master 52.

FIG. 4 illustrates the interaction performed among the external bus master 52, the disk controller 54, and the I/O processor 70 in order to write data to the disks 60a ... 60n according to an embodiment of the present invention. Blocks 100 and 102 may be performed by the I/O processor 70. 
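The configuration register 80 and its address window 82 can be illustrated with a minimal Python sketch. The class and field names here are illustrative only (the specification defines hardware register fields, not software objects); they simply model the four settings and the "request falls inside the window" test that causes the controller to claim a request.

```python
# Hypothetical software model of the disk-controller configuration
# register 80 (FIG. 3). Not part of the specification; illustrative only.
from dataclasses import dataclass

@dataclass
class ConfigRegister80:
    window_base: int        # start of address window 82
    window_size: int        # size of address window 82 in bytes
    dma_mode: bool          # field 84: whether the controller DMA engine is used
    write_op: bool          # field 86: True = write operation, False = read
    burst_slave_mode: bool  # field 88: respond to burst requests as a slave

    def in_window(self, addr: int) -> bool:
        """True if addr falls inside address window 82, so the controller
        would claim a request to this address on the bus."""
        return self.window_base <= addr < self.window_base + self.window_size

reg = ConfigRegister80(window_base=0x1000_0000, window_size=1 << 20,
                       dma_mode=False, write_op=True, burst_slave_mode=True)
print(reg.in_window(0x1000_0200))  # address inside the 1 MB window -> True
```

Any address in the window may be used for any request; the window test, not the specific address, is what routes the request to the controller.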
At block 100, the I/O processor 70 programs the external bus master 52 to obtain data from an external source. For example, in an embodiment where the I/O processor 70 includes a host bus adapter, the external source may include an external host that submits I/O write requests directed to the disks 60a ... 60n. The I/O processor 70 also programs the external bus master DMA 56 (at block 102) to transfer the fetched data on the bus 72 to addresses in the address window of the disk controller 54 in burst-sized packets. The I/O processor 70 programs the disk controller 54 configuration register 80 (at block 104) to disable the DMA mode 84 in the disk controller 54 and to set field 88 to enable burst mode.

In response to being configured (at blocks 100 and 102), the external bus master 52 may receive (at block 120) data retrieved from an external source. Blocks 120-124 illustrate the operations or logic implemented by the external bus master 52 when configured by the I/O processor 70. The external bus master 52 divides (at block 122) the data to be written into burst-sized blocks for transmission over the bus 72 to addresses in the address window of the disk controller 54. The DMA engine 56 then transfers (at block 124) write requests, each including a smaller block of the fetched data, to addresses in the address window. In some embodiments, such as PCI-X embodiments, the DMA engine 56 transmits data in a burst mode (using memory requests) to transfer larger amounts of data.

Blocks 140-144 illustrate the operations performed by the disk controller 54 to process write operations directed through the address window to the disks 60a ... 60n. At block 140, the disk controller 54 claims a write request transmitted to an address in the address window on the bus 72. 
Because the DMA mode 84 is disabled and a write is indicated in the operation field 86, the disk controller 54 (at block 142) adds the received data to the buffer 64 according to a buffering scheme, which may be first-in first-out (FIFO). The disk controller 54 then transmits (at block 144) the buffered data to the target disks 60a ... 60n. As discussed above, the disk controller 54 may include a serial engine for transferring write data in the buffer 64 to the disks 60a ... 60n.

FIG. 5 illustrates the operations performed by the external bus master 52, the disk controller 54, and the I/O processor 70 to transfer data from the disks 60a ... 60n to the external bus master 52 according to an embodiment of the present invention. Blocks 200 and 202 illustrate operations performed by the I/O processor 70 to configure the storage system 50 for a read operation, which may be initiated by an external host system and transmitted over the network to the external bus master 52 in an embodiment where the I/O processor 70 includes a host bus adapter. At block 200, the I/O processor 70 writes the disk controller 54 configuration register 80 to disable DMA mode in field 84 and enable burst mode in field 88. The I/O processor 70 also configures (at block 202) the external bus master 52 DMA engine 56 to request a specified amount of data from the address window of the disk controller 54. In a PCI-X implementation, the I/O processor 70 may configure the external bus master 52 to issue burst read requests, where each request includes an address in the address window and a number of bytes to be read, for example 512 bytes. The address window comprises a non-prefetchable region, because once data is read from the buffer 64 of the disk controller 54, that data is destroyed (replaced by new data from the disks 60a, 60b ... 60n).

Blocks 210, 212, and 214 illustrate operations performed by the external bus master DMA engine 56 to submit read requests. 
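The write flow of FIG. 4 (blocks 122, 124, and 142) can be sketched in a few lines of Python. This is a behavioral illustration only, not an implementation: the function name and the list standing in for buffer 64 are assumptions, and the actual transfer happens as burst memory requests on the bus.

```python
# Illustrative sketch of the FIG. 4 write path: the external bus master
# divides fetched data into burst-sized blocks (block 122) and sends each
# to an address in the window (block 124); with DMA mode 84 disabled, the
# controller appends each received block to buffer 64 in FIFO order
# (block 142). Names are hypothetical.
def write_path(data: bytes, burst_size: int, controller_fifo: list) -> list:
    for i in range(0, len(data), burst_size):
        block = data[i:i + burst_size]   # block 122: burst-sized division
        controller_fifo.append(block)    # block 142: controller buffers it
    return controller_fifo

fifo = write_path(b"x" * 1200, 512, [])
print([len(b) for b in fifo])  # [512, 512, 176]
```

The controller then drains the FIFO to the target disks (block 144), e.g. via its serial engine in a SATA embodiment.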
At blocks 210 and 212, the DMA engine 56 constructs read requests directed to addresses in the address window of the disk controller 54, using the burst block size set by the I/O processor 70. The DMA engine 56 then transfers (at block 214) each read request, together with its transfer length in bytes, to the address window.

Blocks 220, 222, 224, 226, and 228 illustrate operations performed by the disk controller 54 to process burst read requests. At block 220, the disk controller 54 fetches data from the target disks 60a ... 60n and adds it (at block 222) to the end of the buffer 64, which in the FIFO implementation follows the most recently added data. Independently of buffering data from the disks 60a ... 60n, the disk controller 54 can detect (at block 224) a request directed to an address in the address window 82 on the bus 72 and claim (at block 226) the request. In response to the read request, the disk controller 54 may transfer (at block 228) the data at the head of the buffer 64 onto the bus, to be returned to the initiator of the transaction, the external bus master 52. In some embodiments, the data at the head of the buffer 64 is transferred regardless of the actual address used within the address window. Furthermore, in the non-prefetchable embodiment, once data is accessed from the buffer 64, it is overwritten when the next data from the disks 60a ... 60n is accessed.

The described embodiments thus provide a technique that allows an initiator (e.g., the external bus master 52) to transmit burst data requests to a predefined address window in the disk controller 54, such that the disk controller 54 acts as a slave and transfers write data to the target disks 60a ... 60n or returns read data from the buffer 64. With the described embodiments, the external bus master can communicate directly with a disk controller such as an ATA or SATA controller without the need for an intermediate memory device as shown in FIG. 1. 
In addition, the described embodiments allow an external bus master to burst data directly to and from a disk controller (e.g., an ATA controller), with the disk controller operating in burst slave mode. In this way, the described embodiments greatly reduce the latency and processor cycles required to process I/O commands. In addition, data that is accessed sequentially (for example, a data stream from a disk drive) can be mapped into a randomly accessible memory space.

Configuring the address window for data transfer operations

In an embodiment where the system 50 uses the PCI-X protocol, read requests can be transmitted as split read requests. In a split read request, the external bus master 52, acting as a bus master, sends a read request to a memory address in the address window of the disk controller 54, and the disk controller 54 receives the request acting as a bus slave. When the requested data is available, the disk controller 54 then acts as a bus master to return the requested data to the external bus master 52 via the bus 72. Because the external bus master 52 that originally requested the data does not have to keep requesting the read data from the disk controller 54 until the data becomes available, split read requests conserve bus bandwidth, much like the delayed read transactions of the PCI protocol.

The size of an I/O request to the disk controller 54 is limited by the size of the memory space allocated to the disk controller 54. For example, if the memory space or address window for the disk controller 54 is 1 megabyte (Mbyte), then the maximum byte size of an I/O request to the disk controller 54 is 1 megabyte. 
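The read-side behavior of the controller buffer (blocks 220-228) can be modeled as a simple FIFO: disk data is appended at the tail, and any claimed read request is answered with the data at the head, regardless of which window address the request used. The following class is a hypothetical sketch of that behavior, not the controller's actual logic.

```python
# Illustrative model of buffer 64 operated as a FIFO for reads:
# data fetched from the disks is appended at the tail (block 222);
# a read request to any address in the window is answered with the
# head of the FIFO (block 228), after which that data is consumed
# (the window is non-prefetchable). Class name is hypothetical.
from collections import deque

class ControllerBuffer:
    def __init__(self):
        self._fifo = deque()

    def add_from_disk(self, block: bytes):
        self._fifo.append(block)      # block 222: append after latest data

    def serve_read(self) -> bytes:
        return self._fifo.popleft()   # block 228: return and consume the head

buf = ControllerBuffer()
buf.add_from_disk(b"first")
buf.add_from_disk(b"second")
print(buf.serve_read())  # b'first' -- whichever window address was requested
```

This is why the window must be marked non-prefetchable: a speculative read would consume data that the initiator never asked for.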
In the described embodiments, however, the address window can be configured regardless of the size of any I/O request performed by the disk controller 54.

FIG. 6 illustrates a configuration register 250 included in the external bus master 52. The register 250 includes a maximum memory read byte count field 252 and a maximum outstanding split transactions field 254: the maximum memory read byte count field 252 indicates the maximum byte size of any outstanding split read request, and the maximum outstanding split transactions field 254 indicates the maximum number of split read requests that may be outstanding at the external bus master 52. Therefore, the maximum amount of address space that the external bus master 52 can access while all requests are outstanding (i.e., not yet completed), referred to herein as the "maximum allocatable address space", is the product of the values in fields 252 and 254.

In some embodiments, the maximum number of outstanding split read requests that can be directed to the address window of the disk controller 54 is equal to the size of the address window divided by the maximum split read request size. Limiting the number of outstanding requested bytes to the size of the address window ensures that multiple outstanding split read requests will never point to the same memory address in the address window. If multiple outstanding split read requests all pointed to the same memory address, the external bus master 52 could not match returned data with a specific request.

In current embodiments, the address window defined for the memory of the disk controller 54 can extend to several gigabytes. However, the designer of the system 50 may want to set the address window to a smaller size according to the characteristics of the disk controller 54 and of the system 50 in which the disk controller 54 will operate. 
In some embodiments, the maximum outstanding split transactions field 254 is configured based on the size of the address window, so that the maximum outstanding split transactions field 254 is set to the size of the address window (whose configuration can be independent of any consideration of the split read capability of the external bus master 52) divided by the maximum memory read byte count field 252. In this way, the outstanding split read requests from the external bus master 52 will never use more addresses than are provided in the disk controller 54 window at any given moment. This ensures that no memory address in the address window will be used in multiple concurrent outstanding read requests. In other words, the external bus master 52 will not reuse a previously used address until the previous request for that address has completed. Otherwise, if the disk controller 54 received multiple split read requests using the same memory address in the address window, the disk controller 54 would not be able to determine the order in which the external bus master 52 initiated the split read requests.

FIG. 7 illustrates the logic implemented in the I/O processor 70 for configuring the external bus master 52 PCI registers during initialization (e.g., system startup or restart). Once the configuration routine begins (at block 300), the I/O processor 70 configures (at block 302) the address window for the disk controller 54 to a predetermined size that is optimal for operations directed to the disk controller 54. The address window may be configured in the PCI-X configuration register of the disk controller 54. The maximum memory read bytes register 252 in the external bus master 52 is set (at block 304) to a predetermined value, which is the maximum size of any submitted split read request. 
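The sizing rule above (blocks 304-306 of FIG. 7) is a single integer division. The following sketch states it explicitly; the function name is an assumption for illustration.

```python
# Sketch of the FIG. 7 sizing rule: field 254 (maximum outstanding split
# transactions) is the integer quotient of the address-window byte size
# and the maximum memory read byte count 252, so that concurrent
# outstanding split reads can never reuse a window address.
def max_outstanding_split(window_bytes: int, max_read_bytes: int) -> int:
    return window_bytes // max_read_bytes   # integer part per block 306

# Example: a 1 MB window with 4 KB maximum split read size
print(max_outstanding_split(1 << 20, 4 << 10))  # 256
```

With these values, the maximum allocatable address space (252 multiplied by 254) never exceeds the window, which is exactly the property the text requires.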
The maximum outstanding split transactions field 254 is set (at block 306) to the integer result of the address window byte size divided by the maximum memory read bytes 252. If the result of the division is not an integer, the maximum outstanding split transactions field 254 is set to the integer part of the division result. The I/O processor 70 may then perform additional configuration operations (at block 308). After configuring the address window and the external bus master register 250 (FIG. 6), an address window to the disk controller 54 is established that allows data to be transferred directly to and from the external bus master 52 over the bus 72 without passing through an external storage device.

FIG. 8 illustrates the logic implemented in the I/O processor 70 for configuring the external bus master 52 and the disk controller 54 to process a read request submitted to the external bus master 52. Upon receiving (at block 350) an I/O request with a transfer size, the I/O processor 70 sets (at block 352) the base address to the first address in the address window and sets (at block 354) the remaining address window to the address window byte size. The remaining transfer size variable is set (at block 356) to the transfer size of the received I/O request. The I/O processor 70 then adds (at block 358) a descriptor entry to the descriptor table that defines the operations performed by the external bus master DMA 56. FIG. 9 illustrates a descriptor table 400 having multiple entries 402a ... 402n, where each entry includes an entry number 404a ... 404n, an address 406a ... 406n that is the memory address to which the request is directed, and a byte count 408a ... 408n indicating the number of bytes included in the request directed to the memory address 406a ... 406n. The entries are added to a list that is processed on a first-in first-out (FIFO) basis. 
If (at block 360) the remaining address window size is 0, meaning that all addresses in the window have been used in previous descriptor entries, then the remaining address window is reset (at block 362) to the address window byte size, and the address 406n for the added entry 402n is set (at block 364) to the base address. If (at block 360) the remaining address window is not 0, the I/O processor 70 sets (at block 366) the address 406n in the added entry 402n to the address immediately following the bytes of the preceding entry 402n-1.

From block 364 or 366, the I/O processor 70 sets (at block 368) the byte count 408n for the added entry 402n to a number of bytes not exceeding either the remaining address window or the remaining transfer size. The byte count 408n for the added entry 402n is then subtracted (at block 370) from both the remaining address window and the remaining transfer size. If (at block 372) the remaining transfer size equals 0, i.e., there are no more bytes to be read for the received I/O request, then the I/O processor 70 sends (at block 374) a command to the disk controller 54 to access the disks 60a ... 60n and store the data requested in the I/O transaction in the buffer 64. The I/O processor 70 also signals (at block 376) the external bus master DMA 56 to issue read requests for the entries added to the DMA descriptor table 400, in order to access the data that will be collected by the disk controller 54 and stored in the buffer 64 (FIG. 2). If (at block 372) the remaining transfer size is greater than 0, i.e., there are still bytes to be processed for the received I/O request, control returns to block 358 to add another entry to the descriptor table.

Under the logic of FIG. 8, the entries indicated in the descriptor table 400 may be of different byte sizes. 
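The descriptor-table construction of FIG. 8 (blocks 352-372) can be sketched as a short loop. This is an illustrative rendering under the assumption that addresses wrap back to the base whenever the window is exhausted; the function and variable names are hypothetical.

```python
# Sketch of the FIG. 8 descriptor-table logic: entries wrap around the
# address window so that a transfer larger than the window still maps
# onto it. Each returned entry is (address 406, byte count 408).
def build_descriptor_table(base: int, window_size: int, transfer_size: int):
    entries = []
    remaining_window = window_size        # block 354
    remaining = transfer_size             # block 356
    addr = base                           # block 352
    while remaining > 0:                  # block 372 loop condition
        if remaining_window == 0:         # block 360: window exhausted
            remaining_window = window_size  # block 362
            addr = base                     # block 364: wrap to base
        nbytes = min(remaining_window, remaining)  # block 368
        entries.append((addr, nbytes))    # block 358
        remaining_window -= nbytes        # block 370
        remaining -= nbytes
        addr += nbytes                    # block 366: next entry follows on
    return entries

# 4 KB window, 10 KB transfer: three entries, wrapping the window twice
print(build_descriptor_table(0x1000, 4096, 10240))
# [(4096, 4096), (4096, 4096), (4096, 2048)]
```

Note how the total of the byte counts equals the transfer size while no single entry exceeds the window, matching the constraints stated in the text.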
In some embodiments, the I/O processor 70 may configure the external bus master 52 read request size to a value (e.g., 512 bytes) unrelated to the byte counts 408a ... 408n in the descriptor table entries 402a ... 402n. In these embodiments, the byte count 408a ... 408n of an entry may not exceed the size of the address window, and the address window of the disk controller 54 must be set to a size that can accommodate the maximum number of outstanding read requests 254 (FIG. 6), where each outstanding read request has a byte count of at most the maximum read byte count 252. For example, if the maximum outstanding requests 254 is 4 and the maximum read byte count 252 is 1 kilobyte (KB), then the size of the address window must be at least 4 kilobytes. Each descriptor entry 402a ... 402n, however, may have a byte count of up to 4 kilobytes, i.e., the size of the address window. In this case, when the external bus master 52 processes a descriptor entry 402a ... 402n defining a request whose byte count 408a ... 408n is greater than the maximum read byte count 252, it divides the descriptor request (equal to 4 kilobytes in this example) into requests not exceeding the maximum read byte count 252, which is 1 kilobyte. In this way, the byte counts 408a ... 408n (FIG. 9) indicated in the descriptor entries 402a ... 402n are not tied to the maximum read byte count 252, but are bounded by the size of the address window. Therefore, in these embodiments, the byte counts 408a ... 408n of the descriptor entries 402a ... 402n cannot exceed the size of the address window.

FIG. 10 illustrates the logic implemented in the DMA engine 56 for processing the DMA descriptor table 400 generated by the I/O processor 70 according to the logic of FIG. 8. 
In response (at block 450) to the signal from the I/O processor 70 to start the operation, the DMA 56 sets (at block 452) the outstanding split requests variable to zero. The DMA 56 then performs a loop from blocks 454 to 470 over each entry i in the DMA descriptor table 400, for i equal to 1 to n. If (at block 456) the byte count 408i of entry i exceeds the maximum read byte count 252 (FIG. 6), then the DMA engine 56 divides (at block 458) the request of entry i into multiple split read sub-requests, each of the maximum read byte count 252, that read sequential addresses from the portion of the address window accessed by the request of entry i. Each sub-request is processed in the same way as a sequential descriptor entry before the next entry in the descriptor table 400 is processed.

From the no branch of block 456, or from block 458, if (at block 460) the number of outstanding split requests does not exceed the maximum outstanding split transactions 254 indicated in the configuration register 250, then more split read requests can be issued, and the DMA 56 sends (at block 462) one read request or sub-request for entry 402i to the memory address 406i of entry 402i. The outstanding split requests variable is incremented (at block 464), and control moves (at block 470) back to block 454 to process the next entry in the DMA descriptor table 400. If (at block 460) the maximum possible number of split requests are outstanding, then the DMA 56 waits (at block 466) for a split request to complete. After a split request completes, the DMA 56 decrements the outstanding split requests variable by 1 (at block 468), and proceeds to block 458 to send the next read request for the i-th entry in the DMA descriptor table 400.

With the described embodiments, the address window for the disk controller 54 can be set to any size regardless of the size of the I/O transactions received at the external bus master 52. 
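The FIG. 10 loop (blocks 450-470) can be sketched as follows. This is a simplification under stated assumptions: request completion is modeled as instantaneous (blocks 466-468 collapse to a counter decrement), so the sketch only shows how oversized entries are divided into sub-requests of at most the maximum read byte count 252 while the outstanding count is throttled. The function name is hypothetical.

```python
# Sketch of the FIG. 10 DMA issue loop. Each descriptor entry whose byte
# count exceeds max_read_bytes (field 252) is divided into sequential
# sub-requests (block 458); at most max_outstanding (field 254) split
# requests are in flight at once (blocks 460-468). Completion is modeled
# as immediate, so the function simply yields the issued request stream.
def issue_requests(entries, max_read_bytes: int, max_outstanding: int):
    requests = []
    outstanding = 0                       # block 452
    for addr, nbytes in entries:          # blocks 454-470 loop
        offset = 0
        while offset < nbytes:
            sub = min(max_read_bytes, nbytes - offset)  # block 458 split
            if outstanding >= max_outstanding:
                outstanding -= 1          # blocks 466/468: one completes
            requests.append((addr + offset, sub))       # block 462: issue
            outstanding += 1              # block 464
            offset += sub
    return requests

# one 4 KB entry, 1 KB maximum split read size, up to 4 outstanding
print(issue_requests([(0x1000, 4096)], max_read_bytes=1024, max_outstanding=4))
# [(4096, 1024), (5120, 1024), (6144, 1024), (7168, 1024)]
```

The sub-requests cover sequential addresses within the window portion of the entry, which is what lets the controller return sequential disk data to them in order.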
Based on the configured address window, the I/O processor determines the maximum number of outstanding split read requests that the external bus master DMA 56 can submit in order to process received I/O transactions that are larger than the address window. By setting the maximum outstanding split transactions 254 so that the number of bytes in outstanding split requests never exceeds the number of bytes in the address window (which would require an address in the address window to be reused), the I/O processor 70 ensures that the disk controller 54 can determine the order in which the requests were initiated and return the requested data to the correct request. In this way, the external bus master 52 can be confident of which read request is associated with the data returned from the disk controller 54.

Returning data to read requests

FIG. 11 illustrates an alternative embodiment of the system 50 shown in FIG. 2, wherein the components 552, 554, 556, 558, 560a ... 560n, 564, 570, and 572 in the system 550 of FIG. 11 may comprise the same components as components 52, 54, 56, 58, 60a ... 60n, 64, 70, and 72 of FIG. 2. In addition, the system 550 of FIG. 11 includes a bridge 574 between the external bus master 552 and the bus 572 connected to the disk controller 554. Another bus 576 connects the external bus master 552 to the bridge 574. In PCI and PCI-X implementations, a bridge device such as the bridge 574 may forward read requests (e.g., split read requests) out of order relative to the order in which the original initiator (e.g., external bus master 552) issued them. This may cause the bridge 574 to forward read requests sent later before requests sent earlier.

In the above-described embodiments, the disk controller 554 returns, from the buffer 564, the data read from the disks 560a ... 560n in response to a request directed to an address in the address window. 
If the external bus master 552 requests sequential data from sequential addresses in the address window, the external bus master 552 expects the data to be returned to the sequential requests in the order in which the requests were originally generated. However, if the disk controller 554 returns data from the buffer 564 to a request for an address that follows the address of a previous, not-yet-processed request, then the disk controller 554 may return out-of-order data. For example, PCI bridges and PCI-X bridges may forward requests out of order. In such case, if the disk controller 554 responds to read requests received out of order in the order in which the requests are received, the disk controller 554 may return out-of-order data, such that data may be returned to a subsequently issued request when the data should instead have been returned to a previously issued request that has not yet been received. In certain of the described embodiments, the disk controller 554 returns the data to the requests from the external bus master 552 according to the order in which the external bus master DMA 556 initiated the requests, regardless of whether the requests are received out of order relative to their sequential order. In this way, the data is returned to the requests sequentially in the order in which the requests were issued, so that each transmitted request accesses a sequential portion of the data requested from the disks 560a ... 560n. In order to return sequential data to the requests in the order in which the external bus master 552 initiated the requests, the disk controller 554 maintains a request queue 578 to buffer read requests (e.g., split read requests) received out of order from the external bus master 552. The disk controller 554 also maintains a next address variable 580, which indicates the address of the next request that should be received in order to follow the previously processed request.
In the described embodiments, the external bus master 552 issues requests to sequential addresses in the address window, so that each subsequent request should target the address immediately following the target address of the previous request plus the number of previously requested bytes. In certain embodiments, the request queue 578 may be of sufficient size to queue the maximum number of read requests that may be outstanding from the external bus master 552, which may comprise the maximum outstanding split transactions 254 (FIG. 6) set for the external bus master 552. The request queue 578 may include information provided with each read request sent from the external bus master 552. FIG. 12 illustrates the information maintained with each request entry 590 in the request queue 578, where each entry 590 may include request information 592 identifying the request, the target address 594 of the request within the address window of the target device (e.g., the disks 560a ... 560n), and the number of requested bytes 596. In certain embodiments, each read request may specify the same request byte size. In alternative embodiments, the read requests may specify different byte sizes when accessing adjacent addresses in the address window. In certain embodiments, the read requests may comprise the read requests (e.g., split read requests) sent from the external bus master DMA 556 when processing the descriptor table generated by the I/O processor 570 according to the logic of FIG. 8. FIG. 13 illustrates logic implemented in the disk controller 554 to return data to split read requests. Control begins (at block 600) with the disk controller 554 receiving an indication that split read requests will be received. This indication may comprise the I/O processor 570 sending a command to buffer and access data for a certain I/O request, such as the signal sent by the I/O processor 570 at block 374 in FIG. 8.
The disk controller 554 sets (at block 602) the next address variable 580, which indicates the address of the next expected sequential read request, to the base or first address in the address window of the disk controller 554. Upon receiving (at block 604) a split read request from the external bus master 552 for an address in the address window, the disk controller 554 determines (at block 606) whether the received request targets the same address as that indicated in the next address variable 580. If not, then the disk controller 554 queues (at block 608) the received split read request in the request queue 578, where the queued request 590 may include the request information 592, the target address 594, and the number of requested bytes 596. If (at block 606) the target address of the received request is the same as the address indicated in the next address variable 580, then the disk controller 554 returns (at block 610) data from the buffer 564 (if currently available in the buffer 564) to the received request, where the returned data comprises a number of bytes from the buffer 564 equal to the number of bytes indicated in the received split read request. In certain embodiments, the buffer 564 queues the data accessed from the disks 560a ... 560n on a FIFO basis, so that the returned data is accessed from the "first in" end of the buffer 564. After returning the data, if (at block 612) the next address variable 580 plus the number of bytes of the request just returned equals the last address of the address window, i.e., no sequential addresses remain in the address window following the last request, then the next address variable 580 is set (at block 614) to the base address, because the addressing wraps back to the base address.
Otherwise, if addresses remain in the address window following the last request, the disk controller 554 increments (at block 616) the next address variable 580 by the number of requested bytes for which the data was just returned, because the next request will target the next sequential address following the last address of the previously processed request. After setting the next address variable 580 to the address of the next sequential read request at block 614 or 616, the disk controller 554 determines (at block 618) whether a read request queued in the request queue 578 has a target address 594 (FIG. 12) that is the same as the next address variable 580, i.e., whether the next expected sequential split read request was previously received and placed in the request queue 578. If not, control returns to block 604 to await the next split read request. Otherwise, the disk controller 554 dequeues (at block 620) the request having the same address. Upon dequeuing (at block 620) the request having the same address, the disk controller 554 accesses (at block 622) from the buffer 564 a number of bytes equal to the number of bytes indicated in the dequeued request, and returns (at block 624) the accessed bytes to the queued request in a manner known in the art. From block 624, control proceeds to block 612 to set the next address variable 580 to the address of the next sequential read request to be processed. FIG. 14 provides a table illustrating how four sequential read requests 1, 2, 3, and 4, each one kilobyte in length and issued by the external bus master 552, are processed according to the logic of FIG. 13. In the example of FIG. 14, the disk controller 554 receives the requests 1, 2, 3, and 4 for sequential addresses in the address window in the reverse of the order in which the external bus master 552 initiated them.
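The reordering logic of FIG. 13 can be sketched as a small Python class. This is a hypothetical model, not the patented implementation: the class and method names are invented, the buffer 564 is modeled as a FIFO of bytes already read from disk, and return_data is an assumed callback delivering data back to the initiator.

```python
from collections import deque

class SplitReadReorderer:
    """Return data to split read requests in the order the initiator
    issued them, even when the requests arrive out of order
    (hypothetical model of FIG. 13, blocks 600-624)."""

    def __init__(self, base_address, window_size, buffer_fifo, return_data):
        self.base = base_address
        self.window_size = window_size
        self.next_addr = base_address      # next address variable 580 (block 602)
        self.queue = {}                    # request queue 578, keyed by target address
        self.buffer = buffer_fifo          # FIFO buffer 564
        self.return_data = return_data     # callback delivering data to a request

    def receive(self, request_id, target_addr, nbytes):
        if target_addr != self.next_addr:
            # Block 608: out-of-order request - place it in the queue.
            self.queue[target_addr] = (request_id, nbytes)
            return
        self._serve(request_id, nbytes)
        # Blocks 618-624: drain any queued requests that are now sequential.
        while self.next_addr in self.queue:
            rid, n = self.queue.pop(self.next_addr)
            self._serve(rid, n)

    def _serve(self, request_id, nbytes):
        # Block 610: return nbytes from the "first in" end of the buffer.
        data = bytes(self.buffer.popleft() for _ in range(nbytes))
        self.return_data(request_id, data)
        # Blocks 612-616: advance the next address variable, wrapping to
        # the base address at the end of the address window.
        self.next_addr += nbytes
        if self.next_addr >= self.base + self.window_size:
            self.next_addr = self.base
```

Replaying the FIG. 14 scenario at a reduced scale (four one-byte requests received in reverse order) against this model, requests 4, 3, and 2 are queued on arrival, and receiving request 1 causes all four to be served in issue order.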
As shown, data is not returned to a request until data has been returned to the request for the previous sequential address; until the data is returned to the previous request, the received request is queued. With the described embodiments, if the disk controller 554 receives split read requests out of order, whether due to the request handling of the bridge 574 or for some other reason, then the disk controller 554 queues the requests received out of order and returns data to a request only when that request is the next expected read request. In this way, the disk controller 554 returns data to the split read requests sequentially, in the order in which the external bus master 552 initiated the split read requests. This ensures that the external bus master 552 receives the data returned to the appropriate read request, so that the order in which the data is returned to the sequential requests is the sequential order in which the requests were originally intended to be served.

Additional Implementation Details

The operations and logic described herein may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or a combination thereof. The term "article of manufacture" as used herein refers to code or logic implemented in hardware logic (e.g., integrated circuit chips, programmable gate arrays (PGAs), application specific integrated circuits (ASICs), etc.) or in a machine-readable medium, such as magnetic storage media (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (e.g., CD-ROMs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SDRAMs), firmware, programmable logic, etc. Code in the computer-readable medium is accessed and executed by a processor. The code in which preferred embodiments are implemented may further be accessible through a transmission medium or from a file server over a network.
In such cases, the article of manufacture in which the code is implemented may comprise a transmission medium, such as a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the present invention, and that the article of manufacture may comprise any information bearing medium known in the art. In the described embodiments, the processing devices 52, 54 and 70 communicate on a bus topology, such as a PCI-X or PCI bus topology. In alternative embodiments, the processing devices 52, 54 and 70 may communicate using any communication architecture known in the art. In PCI bus implementations, additional PCI-X or PCI bridges may be located between any of the processing devices 52, 54 and 70 and the bus 72 to enable communication on the bus 72. For example, in PCI-X implementations, the external bus master 52 may send burst read requests to a bridge, which may then forward the requests to the bus 72 to obtain the exact amount of requested data. In certain embodiments, the disk drives 60a ... 60n comprise magnetic hard disk drives. In alternative embodiments, the storage devices connected to the disk controller 54 may comprise any storage device known in the art, such as optical disks, tapes, etc. In the described embodiments, the initiator used the address window to submit requests to a disk controller. In alternative embodiments, the target disk controller may comprise any type of input/output controller device known in the art, in addition to storage related controllers. Moreover, the initiator or external bus master 52 may comprise any device that initiates requests to the disk controller, such as a host bus adapter or other external device. The logic of FIGs. 4 and 5 describes specific operations occurring in a particular order.
In alternative embodiments, certain of the logic operations may be performed in a different order, modified, or removed. Moreover, steps may be added to the above described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units. In further embodiments, the address window may be set to a smaller size to enable multiple address windows for multiple target devices (e.g., disk controllers), so that each target device may have a unique range of addresses for its address window. This would allow the external bus master to directly access any one of multiple target devices by directing a data request to a memory address within the address window configured for the particular target device. In the described embodiments, the received read requests comprised split read requests. In alternative embodiments, the requests processed according to the above described logic may comprise any type of bus request to which data is returned. In the above described embodiments, the disk controller maintained the address of the next sequential request issued by the external bus master that should be received, and used that address to determine whether a request was received out of order. In alternative embodiments, the disk controller may perform alternative operations to determine whether at least one read request for data sequentially preceding the data requested by a received read request has not yet been processed, i.e., whether the data targeted by the currently received request follows data requested by a previous request that has not yet been processed.
Alternative calculations, flags, and/or other indicators may also be used to determine whether a transmitted request was received out of order. The foregoing description of the preferred embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. The above specification, examples and data provide a complete description of the manufacture and use of the invention. Since many embodiments of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.
Techniques for providing a semiconductor memory device are disclosed. In one particular exemplary embodiment, the techniques may be realized as a semiconductor memory device including a plurality of memory cells arranged in an array of rows and columns. Each memory cell may include a first region, a second region, and a body region capacitively coupled to at least one word line and disposed between the first region and the second region. Each memory cell may also include a third region, wherein the third region may be doped differently than the first region, the second region, and the body region.
CLAIMS 1. A semiconductor memory device comprising: a plurality of memory cells arranged in an array of rows and columns, each memory cell comprising: a first region; a second region; a body region capacitively coupled to at least one word line and disposed between the first region and the second region; and a third region, wherein the third region is doped differently than the first region, the second region, and the body region. 2. The semiconductor memory device according to claim 1, wherein the first region is coupled to a first poly plug and the second region is coupled to a second poly plug. 3. The semiconductor memory device according to claim 1, wherein the first region, the second region, the body region, and the third region are arranged in a planar configuration. 4. The semiconductor memory device according to claim 3, wherein the first region, the second region, and the body region are doped with donor impurities. 5. The semiconductor memory device according to claim 4, wherein the third region is doped with acceptor impurities. 6. The semiconductor memory device according to claim 5, wherein the first region, the second region, and the body region are undoped regions. 7. The semiconductor memory device according to claim 3, wherein the body region is coupled to a first doped region and the third region is coupled to a second doped region. 8. The semiconductor memory device according to claim 7, wherein the second doped region is doped with acceptor impurities having a concentration higher than that of the doped third region. 9. The semiconductor memory device according to claim 3, wherein the first region, the second region, and the body region are doped with acceptor impurities. 10. The semiconductor memory device according to claim 3, wherein the third region is doped with donor impurities. 11. The semiconductor memory device according to claim 10, wherein the first region, the second region, and the body region are undoped regions. 12.
The semiconductor memory device according to claim 1, wherein the first region, the second region, and the body region are arranged in a vertical configuration. 13. The semiconductor memory device according to claim 12, wherein the first region, the second region, and the body region are doped with donor impurities. 14. The semiconductor memory device according to claim 13, wherein the third region is doped with acceptor impurities. 15. The semiconductor memory device according to claim 14, wherein the third region is made of a P-well region. 16. The semiconductor memory device according to claim 13, wherein the first region is coupled to a source line and the second region is coupled to a bit line. 17. The semiconductor memory device according to claim 16, wherein the source line and the bit line are arranged on opposite sides of the memory cell. 18. The semiconductor memory device according to claim 12, wherein the first region, the second region, and the body region are doped with acceptor impurities. 19. The semiconductor memory device according to claim 18, wherein the third region is doped with donor impurities. 20. The semiconductor memory device according to claim 19, wherein the third region is made of an N-well region. 21. A method for biasing a semiconductor memory device comprising the steps of: applying a plurality of voltage potentials to a plurality of memory cells arranged in an array of rows and columns, wherein applying the plurality of voltage potentials to the plurality of memory cells comprises: applying a first voltage potential to a first region of each of the plurality of memory cells; applying a second voltage potential to a second region of each of the plurality of memory cells; applying a third voltage potential to a body region of each of the plurality of memory cells via at least one respective word line of the array that is capacitively coupled to the body region; and applying a fourth voltage potential to a third region. 22.
The method according to claim 21, further comprising increasing the third voltage potential applied to the at least one respective word line during a hold operation in order to perform a write logic low operation. 23. The method according to claim 21, further comprising maintaining the first voltage potential, the second voltage potential, and the fourth voltage potential applied during a hold operation in order to perform a write logic low operation. 24. The method according to claim 21, further comprising increasing the fourth voltage potential applied during a hold operation in order to perform a write logic high operation. 25. The method according to claim 21, further comprising maintaining the first voltage potential, the second voltage potential, and the third voltage potential applied during a hold operation in order to perform a write logic high operation. 26. The method according to claim 21, further comprising increasing the second voltage potential applied during a hold operation in order to perform a read operation. 27. The method according to claim 21, further comprising increasing the third voltage potential applied during a hold operation in order to perform a read operation.
TECHNIQUES FOR PROVIDING A SEMICONDUCTOR MEMORY DEVICE CROSS-REFERENCE TO RELATED APPLICATIONS This patent application claims priority to U.S. Provisional Patent Application No. 61/313,986, filed March 15, 2010, which is hereby incorporated by reference herein in its entirety. FIELD OF THE DISCLOSURE The present disclosure relates generally to semiconductor memory devices and, more particularly, to techniques for providing a junction-less semiconductor memory device. BACKGROUND OF THE DISCLOSURE The semiconductor industry has experienced technological advances that have permitted increases in density and/or complexity of semiconductor memory devices. Also, the technological advances have allowed decreases in power consumption and package sizes of various types of semiconductor memory devices. There is a continuing trend to employ and/or fabricate advanced semiconductor memory devices using techniques, materials, and devices that improve performance, reduce leakage current, and enhance overall scaling. Silicon-on-insulator (SOI) and bulk substrates are examples of materials that may be used to fabricate such semiconductor memory devices. Such semiconductor memory devices may include, for example, partially depleted (PD) devices, fully depleted (FD) devices, multiple gate devices (e.g., double, triple gate, or surrounding gate), and Fin-FET devices. A semiconductor memory device may include a memory cell having a memory transistor with an electrically floating body region wherein electrical charge may be stored. When excess majority electrical charge carriers are stored in the electrically floating body region, the memory cell may store a logic high (e.g., binary "1" data state). When the electrically floating body region is depleted of majority electrical charge carriers, the memory cell may store a logic low (e.g., binary "0" data state).
Also, a semiconductor memory device may be fabricated on silicon-on-insulator (SOI) substrates or bulk substrates (e.g., enabling body isolation). For example, a semiconductor memory device may be fabricated as a three-dimensional (3-D) device (e.g., a multiple gate device, a Fin-FET device, and a vertical pillar device). In one conventional technique, the memory cell of the semiconductor memory device may be manufactured by an implantation process. During a conventional implantation process, defect structures may be produced in a silicon lattice of various regions of the memory cell of the semiconductor memory device. The defect structures formed during the implantation process may decrease retention time of majority charge carriers stored in the memory cell of the semiconductor memory device. Also, during a conventional implantation process, various regions of the memory cell may be doped with undesired doping concentrations. The undesired doping concentrations may thus produce undesired electrical properties for the memory cell of the semiconductor memory device. Further, the conventional implantation process may face lateral and vertical scaling challenges. In view of the foregoing, it may be understood that there may be significant problems and shortcomings associated with conventional techniques for providing a semiconductor memory device. BRIEF DESCRIPTION OF THE DRAWINGS In order to facilitate a fuller understanding of the present disclosure, reference is now made to the accompanying drawings, in which like elements are referenced with like numerals. These drawings should not be construed as limiting the present disclosure, but are intended to be exemplary only. Figure 1 shows a block diagram of a semiconductor memory device including a memory cell array, data write and sense circuitry, and memory cell selection and control circuitry in accordance with an embodiment of the present disclosure.
Figure 2 shows a cross-sectional view of the memory cell shown in Figure 1 in accordance with an embodiment of the present disclosure. Figure 3 shows a cross-sectional view of the memory cell shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. Figure 4 shows a cross-sectional view of the memory cell shown in Figure 1 in accordance with an embodiment of the present disclosure. Figure 5 shows a cross-sectional view of the memory cell shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. Figure 6 shows cross-sectional views of at least a portion of the memory cell array shown in Figure 1 in accordance with an embodiment of the present disclosure. Figure 7 shows cross-sectional views of at least a portion of the memory cell array shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. Figure 8 shows cross-sectional views of at least a portion of the memory cell array shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. Figure 9 shows cross-sectional views of at least a portion of the memory cell array shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. Figure 10 shows control signal voltage waveforms for performing a write operation on a memory cell shown in Figure 2 in accordance with an embodiment of the present disclosure. Figure 11 shows control signal voltage waveforms for performing a read operation on a memory cell shown in Figure 2 in accordance with an embodiment of the present disclosure. DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS Referring to Figure 1, there is shown a block diagram of a semiconductor memory device 10 comprising a memory cell array 20, data write and sense circuitry 36, and memory cell selection and control circuitry 38 in accordance with an embodiment of the present disclosure.
The memory cell array 20 may comprise a plurality of memory cells 12 each coupled to the memory cell selection and control circuitry 38 via a word line (WL) 28 and a carrier injection line (EP) 34, and to the data write and sense circuitry 36 via a bit line (CN) 30 and a source line (EN) 32. It may be appreciated that the bit line (CN) 30 and the source line (EN) 32 are designations used to distinguish between two signal lines and they may be used interchangeably. The data write and sense circuitry 36 may read data from and may write data to selected memory cells 12. In an exemplary embodiment, the data write and sense circuitry 36 may include a plurality of data sense amplifier circuits. Each data sense amplifier circuit may receive at least one bit line (CN) 30 and a current or voltage reference signal. For example, each data sense amplifier circuit may be a cross-coupled type sense amplifier to sense a data state stored in a memory cell 12. The data write and sense circuitry 36 may include at least one multiplexer that may couple a data sense amplifier circuit to at least one bit line (CN) 30. In an exemplary embodiment, the multiplexer may couple a plurality of bit lines (CN) 30 to a data sense amplifier circuit. Each data sense amplifier circuit may employ voltage and/or current sensing circuitry and/or techniques. In an exemplary embodiment, each data sense amplifier circuit may employ current sensing circuitry and/or techniques. For example, a current sense amplifier may compare current from a selected memory cell 12 to a reference current (e.g., the current of one or more reference cells). From that comparison, it may be determined whether the selected memory cell 12 stores a logic high (e.g., binary "1" data state) or a logic low (e.g., binary "0" data state).
It may be appreciated by one having ordinary skill in the art that various types or forms of the data write and sense circuitry 36 (including one or more sense amplifiers, using voltage or current sensing techniques, to sense a data state stored in a memory cell 12) may be employed to read data stored in the memory cells 12. The memory cell selection and control circuitry 38 may select and/or enable one or more predetermined memory cells 12 to facilitate reading data therefrom by applying control signals on one or more word lines (WL) 28 and/or carrier injection lines (EP) 34. The memory cell selection and control circuitry 38 may generate such control signals from address signals, for example, row address signals. Moreover, the memory cell selection and control circuitry 38 may include a word line decoder and/or driver. For example, the memory cell selection and control circuitry 38 may include one or more different control/selection techniques (and circuitry thereof) to select and/or enable one or more predetermined memory cells 12. Notably, all such control/selection techniques, and circuitry thereof, whether now known or later developed, are intended to fall within the scope of the present disclosure. In an exemplary embodiment, the semiconductor memory device 10 may implement a two step write operation whereby all the memory cells 12 in a row of memory cells 12 may be written to a predetermined data state by first executing a "clear" or a logic low (e.g., binary "0" data state) write operation, whereby all of the memory cells 12 in the row of memory cells 12 are written to logic low (e.g., binary "0" data state). Thereafter, selected memory cells 12 in the row of memory cells 12 may be selectively written to the predetermined data state (e.g., a logic high (binary "1" data state)).
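The two step write operation described above (clear the whole row to logic low, then selectively write logic high) can be illustrated with a trivial Python sketch. This is purely illustrative: the row is modeled as a list of bits, and the function names are invented, not part of the disclosed device.

```python
def two_step_row_write(row, new_bits):
    """Hypothetical model of the two step write operation: first a
    "clear" write of logic low to every cell in the row, then a
    selective write of logic high to the chosen cells."""
    # Step 1: "clear" operation - write logic low (binary 0) to all cells.
    for i in range(len(row)):
        row[i] = 0
    # Step 2: selectively write logic high (binary 1) to selected cells.
    for i, bit in enumerate(new_bits):
        if bit:
            row[i] = 1
```

By contrast, the one step write operation mentioned next writes each selected cell directly to either state without the intervening clear.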
The semiconductor memory device 10 may also implement a one step write operation whereby selected memory cells 12 in a row of memory cells 12 may be selectively written to either a logic high (e.g., binary "1" data state) or a logic low (e.g., binary "0" data state) without first implementing a "clear" operation. The semiconductor memory device 10 may employ any of the exemplary writing, preparation, holding, refresh, and/or reading techniques described herein. The memory cells 12 may comprise N-type, P-type and/or both types of transistors. Circuitry that is peripheral to the memory cell array 20 (for example, sense amplifiers or comparators, row and column address decoders, as well as line drivers (not illustrated herein)) may also include P-type and/or N-type transistors. Regardless of whether P-type or N-type transistors are employed in memory cells 12 in the memory cell array 20, suitable voltage potentials (for example, positive or negative voltage potentials) for reading from the memory cells 12 will be described further herein. Referring to Figure 2, there is shown a cross-sectional view of the memory cell 12 shown in Figure 1 in accordance with an embodiment of the present disclosure. The memory cell 12 may comprise a first N- region 120, a second N- region 122, a third N- region 124, and/or a P- region 126. The first N- region 120, the second N- region 122, the third N- region 124, and/or the P- region 126 may be disposed in sequential contiguous relationship within a planar configuration that may extend horizontally or parallel to a plane defined by an oxide region 128 and/or a P- substrate 130. In an exemplary embodiment, the second N- region 122 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges, and may be spaced apart from and capacitively coupled to the word line (WL) 28. The first N- region 120 of the memory cell 12 may be coupled to the source line (EN) 32 via a first N+ poly plug 232.
The first N+ poly plug 232 may be directly coupled to the first N- region 120 of the memory cell 12. The second N- region 122 of the memory cell 12 may be coupled to the word line (WL) 28 via a gate region 228. The gate region 228 may be capacitively coupled to the second N- region 122 of the memory cell 12. The third N- region 124 of the memory cell 12 may be coupled to a bit line (CN) 30 via a second N+ poly plug 230. The second N+ poly plug 230 may be directly coupled to the third N- region 124 of the memory cell 12. The P- region 126 of the memory cell 12 may be coupled to a carrier injection line (EP) 34 via a P+ region 234. The P+ region 234 may be directly coupled to the P- region 126 of the memory cell 12. The first N- region 120, the second N- region 122, and the third N- region 124 may be formed of the same material or different materials. Also, the first N- region 120, the second N- region 122, and the third N- region 124 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first N- region 120, the second N- region 122, and the third N- region 124 may be formed of a semiconductor material (e.g., silicon) comprising donor impurities (e.g., nitrogen, arsenic, and/or phosphorus). In an exemplary embodiment, the first N- region 120, the second N- region 122, and/or the third N- region 124 may be formed of a silicon material with donor impurities having a concentration of 10^15 atoms/cm^3 to 10^18 atoms/cm^3. The P- region 126 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising acceptor impurities. For example, the P- region 126 may be formed of a silicon material doped with boron impurities. In an exemplary embodiment, the P- region 126 may be formed of a silicon material with acceptor impurities having a concentration of 10^15 atoms/cm^3 to 10^18 atoms/cm^3. In another exemplary embodiment, the P- region 126 may be formed of an undoped semiconductor material (e.g., intrinsic silicon).
The first N+ poly plug 232 and the second N+ poly plug 230 may be formed of the same material or different materials. The first N+ poly plug 232 and the second N+ poly plug 230 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof. The first N+ poly plug 232 and the second N+ poly plug 230 may couple voltage potentials from the source line (EN) 32 and the bit line (CN) 30, respectively, to the first N- region 120 and the third N- region 124 of the memory cell 12. In another exemplary embodiment, the first N+ poly plug 232 and the second N+ poly plug 230 may be formed of tungsten, titanium, titanium nitride, polysilicon or a combination thereof. The first N+ poly plug 232 and the second N+ poly plug 230 may have a height extending from the first N- region 120 and the third N- region 124, respectively, to the source line (EN) 32 and the bit line (CN) 30. The gate region 228 may be formed of a polycide material, a silicon material, a metal material, and/or a combination thereof. In another exemplary embodiment, the gate region 228 may be formed of a doped silicon layer. The gate region 228 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. For example, the gate region 228 may be formed of a silicon material doped with boron impurities. The P+ region 234 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. For example, the P+ region 234 may be formed of a silicon material doped with boron impurities. In an exemplary embodiment, the P+ region 234 may be doped with acceptor impurities having a concentration of 10^20 atoms/cm^3 or higher. The oxide layer 128 may be formed on the P- substrate 130. For example, the oxide layer 128 may be formed of an insulating material. The oxide layer 128 may include a continuous planar region configured above the P- substrate 130.
In an exemplary embodiment, the oxide layer 128 may be formed of an insulating oxide material. The oxide layer 128 may form a trench region that may have a cross-sectional shape to accommodate one or more memory cells 12 therein. For example, the trench region may have a cross-sectional shape of a square, a rectangle, a cylinder, and/or other shapes that may accommodate one or more memory cells 12. In an exemplary embodiment, the P- substrate 130 may be made of a semiconductor material (e.g., silicon) comprising acceptor impurities and may form a base of the memory cell array 20. In alternative exemplary embodiments, a plurality of P- substrates 130 may form the base of the memory cell array 20 or a single P- substrate 130 may form the base of the memory cell array 20. Also, the P- substrate 130 may be made in the form of a P-well substrate. An insulating layer 132 may be formed on top of the oxide layer 128. For example, the insulating layer 132 may be formed of an insulating material, oxide material, and/or dielectric material. In an exemplary embodiment, the insulating layer 132 may be formed of a silicon nitride material. The insulating layer 132 may be formed above the oxide layer 128 to electrically insulate the first N+ poly plug 232, the gate region 228, the second N+ poly plug 230, and/or the P+ region 234. Referring to Figure 3, there is shown a cross-sectional view of the memory cell 12 shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. The memory cell 12 illustrated in Figure 3 may be similar to the memory cell 12 illustrated in Figure 2, except that the memory cell 12 may comprise a plurality of undoped regions. The plurality of undoped regions may comprise a first undoped region 320 coupled to a corresponding first N+ poly plug 232, a second undoped region 322 capacitively coupled to a corresponding gate region 228, and/or a third undoped region 324 coupled to a corresponding second N+ poly plug 230.
The plurality of undoped regions may be formed of the same material or different materials. For example, the plurality of undoped regions (e.g., the first undoped region 320, the second undoped region 322, and/or the third undoped region 324) may be formed of an undoped semiconductor material (e.g., intrinsic silicon). Referring to Figure 4, there is shown a cross-sectional view of the memory cell 12 shown in Figure 1 in accordance with an embodiment of the present disclosure. The memory cell 12 illustrated in Figure 4 may be similar to the memory cell 12 illustrated in Figure 2, except that the memory cell 12 may comprise a first P- region 420, a second P- region 422, a third P- region 424, and/or an N- region 426. The first P- region 420, the second P- region 422, the third P- region 424, and/or the N- region 426 may be disposed in sequential contiguous relationship within a planar configuration that may extend horizontally or parallel to a plane defined by an oxide region 128 and/or a P- substrate 130. In an exemplary embodiment, the second P- region 422 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges that may be spaced apart from and capacitively coupled to the word line (WL) 28. The first P- region 420 of the memory cell 12 may be coupled to the source line (EN) 32 via a first P+ poly plug 432. The first P+ poly plug 432 may be directly coupled to the first P- region 420 of the memory cell 12. The second P- region 422 of the memory cell 12 may be coupled to the word line (WL) 28 via a gate region 428. The gate region 428 may be capacitively coupled to the second P- region 422 of the memory cell 12. The third P- region 424 of the memory cell 12 may be coupled to a bit line (CN) 30 via a second P+ poly plug 430. The second P+ poly plug 430 may be directly coupled to the third P- region 424 of the memory cell 12.
The N- region 426 of the memory cell 12 may be coupled to a carrier injection line (EP) 34 via an N+ region 434. The N+ region 434 may be directly coupled to the N- region 426 of the memory cell 12. The first P- region 420, the second P- region 422, and the third P- region 424 may be formed of the same material or different materials. Also, the first P- region 420, the second P- region 422, and the third P- region 424 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first P- region 420, the second P- region 422, and the third P- region 424 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. For example, the first P- region 420, the second P- region 422, and/or the third P- region 424 may be formed of a silicon material doped with boron impurities. In an exemplary embodiment, the first P- region 420, the second P- region 422, and/or the third P- region 424 may be formed of a silicon material with acceptor impurities having a concentration of 10^15 atoms/cm^3 to 10^18 atoms/cm^3. The N- region 426 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising donor impurities. For example, the N- region 426 may be formed of a silicon material doped with nitrogen, arsenic, and/or phosphorous impurities. In an exemplary embodiment, the N- region 426 may be formed of a silicon material with donor impurities having a concentration of 10^15 atoms/cm^3 to 10^18 atoms/cm^3. In another exemplary embodiment, the N- region 426 may be formed of an undoped semiconductor material (e.g., intrinsic silicon). The first P+ poly plug 432 and/or the second P+ poly plug 430 may be formed of the same material or different materials. The first P+ poly plug 432 and the second P+ poly plug 430 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof.
The first P+ poly plug 432 and/or the second P+ poly plug 430 may couple voltage potentials from the source line (EN) 32 and the bit line (CN) 30, respectively, to the first P- region 420 and the third P- region 424 of the memory cell 12. In another exemplary embodiment, the first P+ poly plug 432 and/or the second P+ poly plug 430 may be formed of tungsten, titanium, titanium nitride, polysilicon or a combination thereof. The first P+ poly plug 432 and/or the second P+ poly plug 430 may have a height extending from the first P- region 420 and the third P- region 424, respectively, to the source line (EN) 32 and the bit line (CN) 30. The gate region 428 may be formed of a polycide material, a silicon material, a metal material, and/or a combination thereof. In another exemplary embodiment, the gate region 428 may be formed of a doped silicon layer. The gate region 428 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. For example, the gate region 428 may be formed of a silicon material doped with boron impurities. The N+ region 434 may be formed of a semiconductor material (e.g., silicon) comprising donor impurities. For example, the N+ region 434 may be formed of a silicon material doped with nitrogen, arsenic, and/or phosphorous impurities. In an exemplary embodiment, the N+ region 434 may be formed of a silicon material with donor impurities having a concentration of 10^20 atoms/cm^3 or higher. Referring to Figure 5, there is shown a cross-sectional view of the memory cell 12 shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. The memory cell 12 illustrated in Figure 5 may be similar to the memory cell 12 illustrated in Figure 4, except that the memory cell 12 may comprise a plurality of undoped regions.
The plurality of undoped regions may comprise a first undoped region 520 coupled to a corresponding first P+ poly plug 432, a second undoped region 522 capacitively coupled to a corresponding gate region 428, and/or a third undoped region 524 coupled to a corresponding second P+ poly plug 430. The plurality of undoped regions may be formed of the same material or different materials. For example, the plurality of undoped regions (e.g., the first undoped region 520, the second undoped region 522, and/or the third undoped region 524) may be formed of an undoped semiconductor material (e.g., intrinsic silicon). Referring to Figure 6, there are shown cross-sectional views of at least a portion of the memory cell array 20 shown in Figure 1 in accordance with an embodiment of the present disclosure. Figure 6 illustrates a cross-sectional view of at least a portion of the memory cell array 20 along the bit line (CN) 30 and a cross-sectional view of at least a portion of the memory cell array 20 along the word line (WL) 28. The memory cells 12 of the memory cell array 20 may be implemented in a vertical configuration having various regions. For example, the memory cell 12 may comprise a first N- region 620, a second N- region 622, a third N- region 624, and/or a P+ region 626. The first N- region 620, the second N- region 622, the third N- region 624, and/or the P+ region 626 may be disposed in a sequential contiguous relationship, and may extend vertically from a plane defined by a P- substrate 130. In an exemplary embodiment, the second N- region 622 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges, and may be spaced apart from and capacitively coupled to the plurality of word lines (WL) 28. The first N- region 620 of the memory cell 12 may be coupled to the source line (EN) 32. The second N- region 622 of the memory cell 12 may be capacitively coupled to the word line (WL) 28.
The third N- region 624 of the memory cell 12 may be coupled to a bit line (CN) 30. The P+ region 626 of the memory cell 12 may be coupled to a carrier injection line (EP) 34. The first N- region 620, the second N- region 622, and the third N- region 624 may be formed of the same material or different materials. Also, the first N- region 620, the second N- region 622, and the third N- region 624 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first N- region 620, the second N- region 622, and the third N- region 624 may be formed of a semiconductor material (e.g., silicon) comprising donor impurities (e.g., nitrogen, arsenic, and/or phosphorus). In an exemplary embodiment, the first N- region 620, the second N- region 622, and/or the third N- region 624 may be formed of a silicon material with donor impurities having a concentration of 10^15 atoms/cm^3 to 10^18 atoms/cm^3. The P+ region 626 may be formed of at least one layer. In an exemplary embodiment, the P+ region 626 may comprise a plurality of layers. For example, the first layer of the P+ region 626 may be formed of a polysilicon material or silicon dioxide material, and/or a combination thereof. In another exemplary embodiment, the first layer of the P+ region 626 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising acceptor impurities. For example, the first layer of the P+ region 626 may be formed of a silicon material doped with boron impurities. In an exemplary embodiment, the first layer of the P+ region 626 may be formed of a silicon material with acceptor impurities having a concentration of 10^18 atoms/cm^3 or above. The second layer of the P+ region 626 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof.
In an exemplary embodiment, the second layer of the P+ region 626 may be formed of tungsten, titanium, titanium nitride, polysilicon or a combination thereof. The source line (EN) 32 may be formed of a metal material. In another exemplary embodiment, the source line (EN) 32 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the source line (EN) 32 may be formed of an N+ doped silicon layer. The source line (EN) 32 may provide voltage potentials to the first N- region 620 of the memory cells 12. For example, the source line (EN) 32 may be coupled to a plurality of memory cells 12 (e.g., a column or a row of memory cells 12 of the memory cell array 20). The source line (EN) 32 may be configured on a side portion of the first N- region 620. The word lines (WL) 28 may be capacitively coupled to the second N- region 622. The word lines (WL) 28 may be oriented in a row direction of the memory cell array 20 and coupled to a plurality of memory cells 12. The word lines (WL) 28 may be arranged on side portions of the memory cells 12 (e.g., memory cells 12 located on a row direction of the memory cell array 20). For example, the word lines (WL) 28 may be arranged at two side portions of the second N- region 622 of the memory cells 12. For example, the word lines (WL) 28 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material), a metal material, and/or a combination of a polycide material and a metal material. In another exemplary embodiment, the word lines (WL) 28 may be formed of an N+ doped silicon material. In an exemplary embodiment, the word lines (WL) 28 may capacitively couple a voltage/current source of the memory cell selection and control circuitry 38 to the second N- region 622 of the memory cell 12.
In an exemplary embodiment, the first word line (WL) 28 may implement a write logic low (e.g., binary "0" data state) operation on the memory cell 12, while the second word line (WL) 28 may implement a write logic high (e.g., binary "1" data state) operation. The bit line (CN) 30 may be coupled to the third N- region 624 of the memory cell 12. The bit line (CN) 30 may be formed of a metal material. In another exemplary embodiment, the bit line (CN) 30 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the bit line (CN) 30 may be formed of an N+ doped silicon layer. For example, the bit line (CN) 30 may be coupled to a plurality of memory cells 12. The bit line (CN) 30 may be configured on a side portion of the third N- region 624. In an exemplary embodiment, the bit line (CN) 30 may be configured on an opposite side portion as the source line (EN) 32. An oxide layer 128 may be formed on the P- substrate 130. For example, the oxide layer 128 may be formed of an insulating material. In an exemplary embodiment, the oxide layer 128 may be formed of an insulating oxide material. The oxide layer 128 may include a plurality of barrier walls formed of an insulating oxide material. The plurality of barrier walls may be oriented in a column direction and a row direction of the memory cell array 20. For example, a first barrier wall of the plurality of barrier walls may be oriented in a column direction. A second barrier wall of the plurality of barrier walls may be oriented in a row direction. In an exemplary embodiment, the first barrier wall oriented in the column direction and the second barrier wall oriented in the row direction may intersect to form a trench region. The oxide layer 128 may form a trench region that may have a cross-sectional shape to accommodate one or more memory cells 12 therein.
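The split of write responsibilities between the two word lines described above (the first word line writing logic low, the second writing logic high) can be captured in a small truth-table-style sketch. The function and argument names below are illustrative assumptions, not terminology from the disclosure, and this models behavior only, not the circuit-level mechanism.

```python
# Behavioral sketch (an assumption, not the patent's circuit-level scheme):
# asserting the first word line forces the cell to "0", asserting the
# second forces it to "1", and with neither asserted the state is held.

def next_state(state, first_wl_asserted, second_wl_asserted):
    """Next data state of one cell given which word line is asserted."""
    if first_wl_asserted:
        return 0  # first word line: write logic low
    if second_wl_asserted:
        return 1  # second word line: write logic high
    return state  # hold: neither word line asserted

print(next_state(1, True, False))   # -> 0
print(next_state(0, False, True))   # -> 1
print(next_state(1, False, False))  # -> 1
```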
For example, the trench region may have a cross-sectional shape of a square, a rectangle, a cylinder, and/or other shapes that may accommodate one or more memory cells 12. In an exemplary embodiment, the P- substrate 130 may be made in the form of a P-well substrate. In another exemplary embodiment, the P- substrate 130 may be made of a semiconductor material (e.g., silicon) comprising acceptor impurities and may form a base of the memory cell array 20. In alternative exemplary embodiments, a plurality of P- substrates 130 may form the base of the memory cell array 20 or a single P- substrate 130 may form the base of the memory cell array 20. An insulating layer 132 may be formed on top of the P+ region 626. For example, the insulating layer 132 may be formed of an insulating material, oxide material, and/or dielectric material. In an exemplary embodiment, the insulating layer 132 may be formed of a silicon nitride material. The insulating layer 132 may be formed above the P+ region 626 to electrically insulate the P+ region 626. Referring to Figure 7, there are shown cross-sectional views of at least a portion of the memory cell array 20 shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. Figure 7 illustrates a cross-sectional view of at least a portion of the memory cell array 20 along the bit line (CN) 30 and a cross-sectional view of at least a portion of the memory cell array 20 along the word line (WL) 28. The memory cells 12 of the memory cell array 20 may be implemented in a vertical configuration having various regions. For example, the memory cell 12 may comprise a first N- region 720, a second N- region 722, a third N- region 724, and/or a P+ region 726. The first N- region 720, the second N- region 722, the third N- region 724, and/or the P+ region 726 may be disposed in a sequential contiguous relationship, and may extend vertically from a plane defined by an N+ substrate 130.
In an exemplary embodiment, the second N- region 722 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges, and may be spaced apart from and capacitively coupled to the plurality of word lines (WL) 28. The first N- region 720 of the memory cell 12 may be coupled to the source line (EN) 32. The second N- region 722 of the memory cell 12 may be capacitively coupled to the word line (WL) 28. The third N- region 724 of the memory cell 12 may be coupled to a bit line (CN) 30. The P+ region 726 of the memory cell 12 may be coupled to a carrier injection line (EP) 34. The first N- region 720, the second N- region 722, and the third N- region 724 may be formed of the same material or different materials. Also, the first N- region 720, the second N- region 722, and the third N- region 724 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first N- region 720, the second N- region 722, and the third N- region 724 may be formed of a semiconductor material (e.g., silicon) comprising donor impurities (e.g., nitrogen, arsenic, and/or phosphorus). In an exemplary embodiment, the first N- region 720, the second N- region 722, and/or the third N- region 724 may be formed of a silicon material with donor impurities having a concentration of 10^15 atoms/cm^3 to 10^18 atoms/cm^3. The P+ region 726 may be made in the form of a P-well region. In another exemplary embodiment, the P+ region 726 may be made of a semiconductor material (e.g., silicon) comprising acceptor impurities and may form a base of the one or more memory cells 12. For example, the P+ region 726 may form the base of a row or a column of memory cells 12 of the memory cell array 20. The P+ region 726 may comprise a continuous planar region configured above the N+ substrate 130. The P+ region 726 may also comprise a plurality of barrier walls formed on the continuous planar region.
The plurality of barrier walls of the P+ region 726 may be oriented in a column direction and/or a row direction of the memory cell array 20. The source line (EN) 32 may be formed of at least one layer. In an exemplary embodiment, the source line (EN) 32 may comprise a plurality of layers. For example, the first layer of the source line (EN) 32 may be formed of a polysilicon material or silicon dioxide material, and/or a combination thereof. In another exemplary embodiment, the first layer of the source line (EN) 32 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising donor impurities. For example, the first layer of the source line (EN) 32 may be formed of a silicon material doped with nitrogen, arsenic, and/or phosphorus impurities. In an exemplary embodiment, the first layer of the source line (EN) 32 may be formed of a silicon material with donor impurities having a concentration of 10^18 atoms/cm^3 or above. The second layer of the source line (EN) 32 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof. In an exemplary embodiment, the second layer of the source line (EN) 32 may be formed of tungsten, titanium, titanium nitride, polysilicon or a combination thereof. For example, the source line (EN) 32 may be coupled to a plurality of memory cells 12 (e.g., a column or a row of memory cells 12 of the memory cell array 20). The source line (EN) 32 may be configured above the first N- region 720. The word lines (WL) 28 may be capacitively coupled to the second N- region 722. The word lines (WL) 28 may be oriented in a row direction of the memory cell array 20 and coupled to a plurality of memory cells 12. The word lines (WL) 28 may be arranged on side portions of the memory cells 12 (e.g., memory cells 12 located on a row direction of the memory cell array 20).
For example, the word lines (WL) 28 may be arranged at two side portions of the second N- region 722 of the memory cells 12. For example, the word lines (WL) 28 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material), a metal material, and/or a combination of a polycide material and a metal material. In another exemplary embodiment, the word lines (WL) 28 may be formed of an N+ doped silicon material. In an exemplary embodiment, the word lines (WL) 28 may capacitively couple a voltage potential/current source of the memory cell selection and control circuitry 38 to the second N- region 722 of the memory cell 12. In an exemplary embodiment, the first word line (WL) 28 may implement a write logic low (e.g., binary "0" data state) operation on the memory cell 12, while the second word line (WL) 28 may implement a write logic high (e.g., binary "1" data state) operation. The bit line (CN) 30 may be coupled to the third N- region 724 of the memory cell 12. The bit line (CN) 30 may be formed of a metal material. In another exemplary embodiment, the bit line (CN) 30 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the bit line (CN) 30 may be formed of an N+ doped silicon layer. For example, the bit line (CN) 30 may be coupled to a plurality of memory cells 12. The bit line (CN) 30 may be configured on a side portion of the third N- region 724. An oxide layer 128 may be formed on the P+ region 726 and/or the N+ substrate 130. For example, the oxide layer 128 may be formed of an insulating material. In an exemplary embodiment, the oxide layer 128 may be formed of an insulating oxide material. The oxide layer 128 may include a plurality of barrier walls formed of an insulating oxide material. The plurality of barrier walls may be oriented in a column direction and a row direction of the memory cell array 20.
For example, a first barrier wall of the plurality of barrier walls may be oriented in a column direction. A second barrier wall of the plurality of barrier walls may be oriented in a row direction. The first barrier wall oriented in a column direction may have a different height from the second barrier wall oriented in a row direction. In an exemplary embodiment, the first barrier wall oriented in the column direction and the second barrier wall oriented in the row direction may intersect to form a trench region. The oxide layer 128 may form a trench region that may have a cross-sectional shape to accommodate one or more memory cells 12 therein. For example, the trench region may have a cross-sectional shape of a square, a rectangle, a cylinder, and/or other shapes that may accommodate one or more memory cells 12. In an exemplary embodiment, the N+ substrate 130 may be made in the form of an N-well substrate. In another exemplary embodiment, the N+ substrate 130 may be made of a semiconductor material (e.g., silicon) comprising donor impurities and may form a base of the memory cell array 20. In alternative exemplary embodiments, a plurality of N+ substrates 130 may form the base of the memory cell array 20 or a single N+ substrate 130 may form the base of the memory cell array 20. An insulating layer 132 may be formed on top of the first N- region 720. For example, the insulating layer 132 may be formed of an insulating material, oxide material, and/or dielectric material. In an exemplary embodiment, the insulating layer 132 may be formed of a silicon nitride material. The insulating layer 132 may be formed above the first N- region 720 to electrically insulate the source line (EN) 32. Referring to Figure 8, there are shown cross-sectional views of at least a portion of the memory cell array 20 shown in Figure 1 in accordance with an embodiment of the present disclosure.
Figure 8 illustrates a cross-sectional view of at least a portion of the memory cell array 20 along the bit line (CN) 30 and a cross-sectional view of at least a portion of the memory cell array 20 along the word line (WL) 28. The memory cells 12 of the memory cell array 20 may be implemented in a vertical configuration having various regions. For example, the memory cell 12 may comprise a first P- region 820, a second P- region 822, a third P- region 824, and/or an N+ region 826. The first P- region 820, the second P- region 822, the third P- region 824, and/or the N+ region 826 may be disposed in a sequential contiguous relationship, and may extend vertically from a plane defined by an N+ substrate 130. In an exemplary embodiment, the second P- region 822 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges, and may be spaced apart from and capacitively coupled to the plurality of word lines (WL) 28. The first P- region 820 of the memory cell 12 may be coupled to the source line (EN) 32. The second P- region 822 of the memory cell 12 may be capacitively coupled to the word line (WL) 28. The third P- region 824 of the memory cell 12 may be coupled to a bit line (CN) 30. The N+ region 826 of the memory cell 12 may be coupled to a carrier injection line (EP) 34. The first P- region 820, the second P- region 822, and the third P- region 824 may be formed of the same material or different materials. Also, the first P- region 820, the second P- region 822, and the third P- region 824 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first P- region 820, the second P- region 822, and the third P- region 824 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. The first P- region 820, the second P- region 822, and/or the third P- region 824 may be formed of a silicon material doped with boron impurities.
In an exemplary embodiment, the first P- region 820, the second P- region 822, and/or the third P- region 824 may be formed of a silicon material with acceptor impurities having a concentration of 10^15 atoms/cm^3 to 10^18 atoms/cm^3. The N+ region 826 may be formed of at least one layer. In an exemplary embodiment, the N+ region 826 may comprise a plurality of layers. For example, the first layer of the N+ region 826 may be formed of a polysilicon material or silicon dioxide material, and/or a combination thereof. In another exemplary embodiment, the first layer of the N+ region 826 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising donor impurities. For example, the first layer of the N+ region 826 may be formed of a silicon material doped with nitrogen, arsenic, and/or phosphorous impurities. In an exemplary embodiment, the first layer of the N+ region 826 may be formed of a silicon material with donor impurities having a concentration of 10^18 atoms/cm^3 or above. The second layer of the N+ region 826 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof. In an exemplary embodiment, the second layer of the N+ region 826 may be formed of tungsten, titanium, titanium nitride, polysilicon or a combination thereof. The source line (EN) 32 may be formed of a metal material. In another exemplary embodiment, the source line (EN) 32 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the source line (EN) 32 may be formed of a P+ doped silicon layer. The source line (EN) 32 may provide voltage potentials to the first P- region 820 of the memory cells 12. For example, the source line (EN) 32 may be coupled to a plurality of memory cells 12 (e.g., a column or a row of memory cells 12 of the memory cell array 20). The source line (EN) 32 may be configured on a side portion of the first P- region 820.
The word lines (WL) 28 may be capacitively coupled to the second P- region 822. The word lines (WL) 28 may be oriented in a row direction of the memory cell array 20 and coupled to a plurality of memory cells 12. The word lines (WL) 28 may be arranged on side portions of the memory cells 12 (e.g., memory cells 12 located on a row direction of the memory cell array 20). For example, the word lines (WL) 28 may be arranged at two side portions of the second P- region 822 of the memory cells 12. For example, the word lines (WL) 28 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material), a metal material, and/or a combination of a polycide material and a metal material. In another exemplary embodiment, the word lines (WL) 28 may be formed of a P+ doped silicon material. In an exemplary embodiment, the word lines (WL) 28 may capacitively couple a voltage/current source of the memory cell selection and control circuitry 38 to the second P- region 822 of the memory cell 12. In an exemplary embodiment, the first word line (WL) 28 arranged on a side portion of the second P- region 822 may implement a write logic low (e.g., binary "0" data state) operation on the memory cell 12, while the second word line (WL) 28 arranged on an opposite side portion of the second P- region 822 may implement a write logic high (e.g., binary "1" data state) operation. The bit line (CN) 30 may be coupled to the third P- region 824 of the memory cell 12. The bit line (CN) 30 may be formed of a metal material. In another exemplary embodiment, the bit line (CN) 30 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the bit line (CN) 30 may be formed of a P+ doped silicon layer. For example, the bit line (CN) 30 may be coupled to a plurality of memory cells 12. The bit line (CN) 30 may be configured on a side portion of the third P- region 824.
In an exemplary embodiment, the bit line (CN) 30 may be configured on an opposite side portion from the source line (EN) 32. An oxide layer 128 may be formed on the N+ substrate 130. For example, the oxide layer 128 may be formed of an insulating material. In an exemplary embodiment, the oxide layer 128 may be formed of an insulating oxide material. The oxide layer 128 may include a plurality of barrier walls formed of an insulating oxide material. The plurality of barrier walls may be oriented in a column direction and a row direction of the memory cell array 20. For example, a first barrier wall of the plurality of barrier walls may be oriented in a column direction. A second barrier wall of the plurality of barrier walls may be oriented in a row direction. In an exemplary embodiment, the first barrier wall oriented in the column direction and the second barrier wall oriented in the row direction may intersect to form a trench region. The oxide layer 128 may form a trench region that may have a cross-sectional shape to accommodate one or more memory cells 12 therein. For example, the trench region may have a cross-sectional shape of a square, a rectangle, a cylinder, and/or other shapes that may accommodate one or more memory cells 12. In an exemplary embodiment, the N+ substrate 130 may be made in the form of an N-well substrate. In another exemplary embodiment, the N+ substrate 130 may be made of a semiconductor material (e.g., silicon) comprising donor impurities and may form a base of the memory cell array 20. In alternative exemplary embodiments, a plurality of N+ substrates 130 may form the base of the memory cell array 20 or a single N+ substrate 130 may form the base of the memory cell array 20. An insulating layer 132 may be formed on top of the N+ region 826. For example, the insulating layer 132 may be formed of an insulating material, oxide material, and/or dielectric material. 
In an exemplary embodiment, the insulating layer 132 may be formed of a silicon nitride material. The insulating layer 132 may be formed above the N+ region 826 to electrically insulate the N+ region 826. Referring to Figure 9, there are shown cross-sectional views of at least a portion of the memory cell array 20 shown in Figure 1 in accordance with an alternate embodiment of the present disclosure. Figure 9 illustrates a cross-sectional view of at least a portion of the memory cell array 20 along the bit line (CN) 30 and a cross-sectional view of at least a portion of the memory cell array 20 along the word line (WL) 28. The memory cells 12 of the memory cell array 20 may be implemented in a vertical configuration having various regions. For example, the memory cell 12 may comprise a first P- region 920, a second P- region 922, a third P- region 924, and/or an N+ region 926. The first P- region 920, the second P- region 922, the third P- region 924, and/or the N+ region 926 may be disposed in a sequential contiguous relationship, and may extend vertically from a plane defined by a P+ substrate 130. In an exemplary embodiment, the second P- region 922 may be an electrically floating body region of the memory cell 12 configured to accumulate/store charges, and may be spaced apart from and capacitively coupled to the plurality of word lines (WL) 28. The first P- region 920 of the memory cell 12 may be coupled to the bit line (CN) 30. The second P- region 922 of the memory cell 12 may be capacitively coupled to the word line (WL) 28. The third P- region 924 of the memory cell 12 may be coupled to the source line (EN) 32. The N+ region 926 of the memory cell 12 may be coupled to a carrier injection line (EP) 34. The first P- region 920, the second P- region 922, and the third P- region 924 may be formed of the same material or different materials. 
Also, the first P- region 920, the second P- region 922, and the third P- region 924 may be formed of the same material having various doping concentrations. In an exemplary embodiment, the first P- region 920, the second P- region 922, and the third P- region 924 may be formed of a semiconductor material (e.g., silicon) comprising acceptor impurities. For example, the first P- region 920, the second P- region 922, and/or the third P- region 924 may be formed of a silicon material doped with boron impurities. In an exemplary embodiment, the first P- region 920, the second P- region 922, and/or the third P- region 924 may be formed of a silicon material with acceptor impurities having a concentration of 10¹⁵ atoms/cm³ to 10¹⁸ atoms/cm³. The N+ region 926 may be made in the form of an N-well region. In another exemplary embodiment, the N+ region 926 may be made of a semiconductor material (e.g., silicon) comprising donor impurities and may form a base of the one or more memory cells 12. For example, the N+ region 926 may form the base of a row or a column of memory cells 12 of the memory cell array 20. The N+ region 926 may comprise a continuous planar region configured above the P+ substrate 130. The N+ region 926 may also comprise a plurality of barrier walls formed on the continuous planar region. The plurality of barrier walls of the N+ region 926 may be oriented in a column direction and/or a row direction of the memory cell array 20. The bit line (CN) 30 may be formed of at least one layer. In an exemplary embodiment, the bit line (CN) 30 may comprise a plurality of layers. For example, the first layer of the bit line (CN) 30 may be formed of a polysilicon material, a silicon dioxide material, and/or a combination thereof. In another exemplary embodiment, the first layer of the bit line (CN) 30 may be formed of a semiconductor material (e.g., intrinsic silicon) comprising donor impurities. 
For example, the first layer of the bit line (CN) 30 may be formed of a silicon material doped with nitrogen, arsenic, and/or phosphorus impurities. In an exemplary embodiment, the first layer of the bit line (CN) 30 may be formed of a silicon material with donor impurities having a concentration of 10¹⁸ atoms/cm³ or above. The second layer of the bit line (CN) 30 may be formed of a metal material, polysilicon material, silicon dioxide material, and/or a combination thereof. In an exemplary embodiment, the second layer of the bit line (CN) 30 may be formed of tungsten, titanium, titanium nitride, polysilicon, or a combination thereof. For example, the bit line (CN) 30 may be coupled to a plurality of memory cells 12 (e.g., a column or a row of memory cells 12 of the memory cell array 20). The bit line (CN) 30 may be configured above the first P- region 920. The word lines (WL) 28 may be capacitively coupled to the second P- region 922. The word lines (WL) 28 may be oriented in a row direction of the memory cell array 20 and coupled to a plurality of memory cells 12. The word lines (WL) 28 may be arranged on side portions of the memory cells 12 (e.g., memory cells 12 located on a row direction of the memory cell array 20). For example, the word lines (WL) 28 may be arranged at two side portions of the second P- region 922 of the memory cells 12. For example, the word lines (WL) 28 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material), a metal material, and/or a combination of a polycide material and a metal material. In another exemplary embodiment, the word lines (WL) 28 may be formed of an N+ doped silicon material. In an exemplary embodiment, the word lines (WL) 28 may capacitively couple a voltage potential/current source of the memory cell selection and control circuitry 38 to the second P- region 922 of the memory cell 12. 
In an exemplary embodiment, the first word line (WL) 28 may implement a write logic low (e.g., binary "0" data state) operation on the memory cell 12, while the second word line (WL) 28 may implement a write logic high (e.g., binary "1" data state) operation. The source line (EN) 32 may be coupled to the third P- region 924 of the memory cell 12. The source line (EN) 32 may be formed of a metal material. In another exemplary embodiment, the source line (EN) 32 may be formed of a polycide material (e.g., a combination of a metal material and a silicon material). In other exemplary embodiments, the source line (EN) 32 may be formed of a P+ doped silicon layer. For example, the source line (EN) 32 may be coupled to a plurality of memory cells 12. The source line (EN) 32 may be configured on a side portion of the third P- region 924. An oxide layer 128 may be formed on the N+ region 926 and/or the P+ substrate 130. For example, the oxide layer 128 may be formed of an insulating material. In an exemplary embodiment, the oxide layer 128 may be formed of an insulating oxide material. The oxide layer 128 may include a plurality of barrier walls formed of an insulating oxide material. The plurality of barrier walls may be oriented in a column direction and a row direction of the memory cell array 20. For example, a first barrier wall of the plurality of barrier walls may be oriented in a column direction. A second barrier wall of the plurality of barrier walls may be oriented in a row direction. The first barrier wall oriented in a column direction may have a different height from the second barrier wall oriented in a row direction. In an exemplary embodiment, the first barrier wall oriented in the column direction and the second barrier wall oriented in the row direction may intersect to form a trench region. The oxide layer 128 may form a trench region that may have a cross-sectional shape to accommodate one or more memory cells 12 therein. 
For example, the trench region may have a cross-sectional shape of a square, a rectangle, a cylinder, and/or other shapes that may accommodate one or more memory cells 12. In an exemplary embodiment, the P+ substrate 130 may be made in the form of a P-well substrate. In another exemplary embodiment, the P+ substrate 130 may be made of a semiconductor material (e.g., silicon) comprising acceptor impurities and may form a base of the memory cell array 20. In alternative exemplary embodiments, a plurality of P+ substrates 130 may form the base of the memory cell array 20 or a single P+ substrate 130 may form the base of the memory cell array 20. An insulating layer 132 may be formed on top of the first P- region 920. For example, the insulating layer 132 may be formed of an insulating material, oxide material, and/or dielectric material. In an exemplary embodiment, the insulating layer 132 may be formed of a silicon nitride material. The insulating layer 132 may be formed above the first P- region 920 to electrically insulate the bit line (CN) 30. Referring to Figure 10, there are shown control signal voltage waveforms for performing a write operation on a memory cell 12 shown in Figure 2 in accordance with an embodiment of the present disclosure. For example, the various control signals may be configured to perform a write logic low (e.g., binary "0" data state) operation and/or a write logic high (e.g., binary "1" data state) operation. In an exemplary embodiment, various control signals may be applied to the memory cell 12 to perform one or more write logic low (e.g., binary "0" data state) operations on one or more selected memory cells 12. For example, the write logic low (e.g., binary "0" data state) operation may be performed on one or more selected memory cells 12 in order to deplete charge carriers that may have accumulated/stored in the floating body regions of the one or more selected memory cells 12. 
Various voltage potentials may be applied to the various regions of the memory cell 12. In an exemplary embodiment, the voltage potentials applied to the first N- region 120, the third N- region 124, and/or the P- region 126 may be maintained at 0V. The voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 may be raised from the voltage potential applied during the hold operation. In an exemplary embodiment, the voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 may be raised to -0.5V. Under such biasing, the junction between the first N- region 120 and the second N- region 122 and the junction between the second N- region 122 and the third N- region 124 may be forward biased. The junction between the third N- region 124 and the P- region 126 may be reverse biased or weakly forward biased (e.g., above a reverse bias voltage and below a forward bias threshold voltage potential). The hole charge carriers that may have accumulated/stored in the second N- region 122 may flow to the first N- region 120 and/or the third N- region 124. Thus, the hole charge carriers that may have accumulated/stored in the second N- region 122 may be depleted via the first N- region 120 and/or the third N- region 124. By removing the hole charge carriers that may have accumulated/stored in the second N- region 122, a logic low (e.g., binary "0" data state) may be written to the memory cell 12. After performing a write logic low (e.g., binary "0" data state) operation, the control signals may be configured to perform a hold operation in order to maintain the data state (e.g., a logic low (binary "0" data state)) stored in the memory cell 12. In particular, the control signals may be configured to perform a hold operation in order to maximize a retention time of a data state (e.g., a logic low (binary "0" data state)) stored in the memory cell 12. 
Also, the control signals for the hold operation may be configured to eliminate or reduce activities or fields (e.g., electrical fields between junctions, which may lead to leakage of charges) within the memory cell 12. In an exemplary embodiment, during a hold operation, a negative voltage potential may be applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 of the memory cell 12, while constant voltage potentials of 0V may be maintained at the first N- region 120 via the source line (EN) 32, the third N- region 124 via the bit line (CN) 30, and/or the P- region 126 via the carrier injection line (EP) 34. For example, the negative voltage potential applied to the word line (WL) 28 (e.g., capacitively coupled to the second N- region 122 of the memory cell 12) may be -2.0V. During the hold operation, the junction between the first N- region 120 and the second N- region 122 and the junction between the third N- region 124 and the second N- region 122 may be reverse biased in order to retain a data state (e.g., a logic high (binary "1" data state) or a logic low (binary "0" data state)) stored in the memory cell 12. In another exemplary embodiment, control signals may be configured to write a logic high (e.g., binary "1" data state) to one or more selected memory cells 12 of one or more selected rows of the memory cell array 20. For example, the write logic high (e.g., binary "1" data state) operation may be performed on one or more selected rows of the memory cell array 20 or the entire memory cell array 20. In another exemplary embodiment, a write logic high (e.g., binary "1" data state) operation may have control signals configured to cause accumulation/storage of hole charge carriers in the second N- region 122. 
In an exemplary embodiment, a voltage potential applied to the first N- region 120 of the memory cell 12 via the source line (EN) 32 and a voltage potential applied to the third N- region 124 via the bit line (CN) 30 may be maintained at the same voltage potential as the voltage potential during the hold operation. For example, the voltage potential applied to the first N- region 120 via the source line (EN) 32 and the third N- region 124 via the bit line (CN) 30 may be maintained at 0V. The voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 may also be maintained the same as during the hold operation. For example, the voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 may be maintained at -2.0V. The voltage potential applied to the P- region 126 via the carrier injection line (EP) 34 may be raised from the voltage potential applied during the hold operation. In an exemplary embodiment, the voltage potential applied to the P- region 126 via the carrier injection line (EP) 34 may be raised to approximately 0.7V to 0.9V from 0V. Under such biasing, the junction between the third N- region 124 and the P- region 126 may become forward biased. For example, the majority charge carriers (e.g., holes) may flow from the P- region 126 to the second N- region 122 via the third N- region 124. Thus, a predetermined amount of hole charge carriers may be accumulated/stored in the second N- region 122 via the P- region 126 and the third N- region 124. The predetermined amount of charge carriers accumulated/stored in the second N- region 122 (e.g., capacitively coupled to the word line (WL) 28) may represent that a logic high (e.g., binary "1" data state) may be written in the memory cell 12. 
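The hold, write logic low, and write logic high bias conditions described above can be collected into a small table. The following Python sketch is illustrative only — the dictionary and function names are not part of the disclosure — and records the nominal voltages given in the text for the embodiment of Figure 2:

```python
# Illustrative summary (names are not from the disclosure) of the nominal
# control-line voltages described above for the memory cell 12 of Figure 2.
BIAS = {
    # EN = source line, CN = bit line, WL = word line, EP = carrier injection line
    "hold":    {"EN": 0.0, "CN": 0.0, "WL": -2.0, "EP": 0.0},
    "write_0": {"EN": 0.0, "CN": 0.0, "WL": -0.5, "EP": 0.0},
    "write_1": {"EN": 0.0, "CN": 0.0, "WL": -2.0, "EP": 0.8},  # EP raised to ~0.7V-0.9V
}

def bias_for(operation):
    """Return the control-line voltages (in volts) for the named operation."""
    return BIAS[operation]

print(bias_for("write_0")["WL"])  # word line raised from -2.0V to -0.5V -> -0.5
```

Note how only one line changes per operation relative to the hold state: the word line is raised for a write "0", and the carrier injection line is raised for a write "1".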
Referring to Figure 11, there are shown control signal voltage waveforms for performing a read operation on a memory cell 12 shown in Figure 2 in accordance with an embodiment of the present disclosure. In an exemplary embodiment, control signals may be configured to perform a read operation of a data state (e.g., a logic low (binary "0" data state) and/or a logic high (binary "1" data state)) stored in one or more selected memory cells 12 of one or more selected rows of the memory cell array 20. The control signals may be configured to apply a predetermined voltage potential to implement a read operation via the bit line (CN) 30. In an exemplary embodiment, the voltage potential applied to the first N- region 120 via the source line (EN) 32 and the voltage potential applied to the P- region 126 via the carrier injection line (EP) 34 may be maintained at 0V. The voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 and the voltage potential applied to the third N- region 124 may be raised from the voltage potentials applied during the hold operation. In an exemplary embodiment, the voltage potential applied to the word line (WL) 28 that may be capacitively coupled to the second N- region 122 may be raised to -1.0V from -2.0V. The voltage potential applied to the third N- region 124 via the bit line (CN) 30 may be raised to 1.0V from 0V. Under such biasing, when a logic low (e.g., binary "0" data state) is stored in the memory cell 12, the predetermined amount of hole charge carriers accumulated/stored in the second N- region 122 during the hold operation may flow toward the third N- region 124. The predetermined amount of hole charge carriers flowing to the third N- region 124 may cause an injection of electron charge carriers from the third N- region 124. The injection of electron charge carriers from the third N- region 124 may cause a current spike and may change a voltage potential on the bit line (CN) 30. 
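The read biasing described above, and the decision made from the resulting bit-line current, can be sketched as follows. This is a simplified illustration under the assumption that the sensed current is compared against a single reference threshold; the helper names are hypothetical and not part of the disclosure:

```python
# Illustrative sketch of the read operation described above: word line raised
# to -1.0V, bit line raised to 1.0V, other lines held at 0V (Figure 11).
READ_BIAS = {"EN": 0.0, "EP": 0.0, "WL": -1.0, "CN": 1.0}

def sense(bitline_current, reference_current):
    """Infer the stored data state from the bit-line current during a read.

    A current spike at or above the reference is taken as a logic high
    ("1"); a smaller or absent spike is taken as a logic low ("0").  This
    single-threshold comparison is a simplification of the sense amplifier.
    """
    return 1 if bitline_current >= reference_current else 0

print(sense(2e-6, 1e-6))  # spike larger than the reference -> 1
```

A real data sense amplifier compares an analog voltage or current against a reference; the threshold comparison above only models the resulting binary decision.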
A data sense amplifier in the data write and sense circuitry 36 may detect the small amount of voltage potential or current (e.g., compared to a reference voltage potential or current), or no voltage potential or current, via the bit line (CN) 30 coupled to the third N- region 124. When a logic high (e.g., binary "1" data state) is stored in the memory cell 12, the predetermined amount of hole charge carriers (e.g., that may represent a logic high (e.g., binary "1" data state)) accumulated/stored in the second N- region 122 may flow toward the third N- region 124. The predetermined amount of hole charge carriers injected into the third N- region 124 may also cause an injection of electron charge carriers into the third N- region 124. The injection of electron charge carriers into the third N- region 124 may cause a current spike and may change a voltage potential on the bit line (CN) 30. A data sense amplifier in the data write and sense circuitry 36 may detect the generated voltage potential or current (e.g., compared to a reference voltage potential or current) via the bit line (CN) 30. At this point it should be noted that the techniques for providing a semiconductor memory device in accordance with the present disclosure as described above typically involve the processing of input data and the generation of output data to some extent. This input data processing and output data generation may be implemented in hardware or software. For example, specific electronic components may be employed in a semiconductor memory device or similar or related circuitry for implementing the functions associated with providing a semiconductor memory device in accordance with the present disclosure as described above. Alternatively, one or more processors operating in accordance with instructions may implement the functions associated with providing a semiconductor memory device in accordance with the present disclosure as described above. 
If such is the case, it is within the scope of the present disclosure that such instructions may be stored on one or more processor readable media (e.g., a magnetic disk or other storage medium), or transmitted to one or more processors via one or more signals embodied in one or more carrier waves. The present disclosure is not to be limited in scope by the specific embodiments described herein. Indeed, other various embodiments of and modifications to the present disclosure, in addition to those described herein, will be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Thus, such other embodiments and modifications are intended to fall within the scope of the present disclosure. Further, although the present disclosure has been described herein in the context of a particular implementation in a particular environment for a particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present disclosure may be beneficially implemented in any number of environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present disclosure as described herein.
Embodiments of the present disclosure are directed towards techniques and configurations for providing a 3D memory array apparatus. In one embodiment, the apparatus may comprise a substantially hexagonal arrangement having seven pillars disposed in a die in a repeating pattern. The arrangement may include first and second pillars disposed at a pillar pitch from each other in a first row; third, fourth, and fifth pillars disposed at the pillar pitch from each other in a second row; and sixth and seventh pillars disposed at the pillar pitch from each other in a third row and shifted relative to the first and second pillars respectively by a quarter of the pillar pitch in a direction that is substantially orthogonal to bitlines disposed in the die. Each pillar in the arrangement may be electrically coupled with a different bitline. Other embodiments may be described and/or claimed.
ClaimsWhat is claimed is: 1. An apparatus, comprising:a plurality of pillars disposed in a die, wherein the plurality of pillars comprises:a first pillar grouping having at least a first pillar electrically coupled with a first bitline and a second pillar electrically coupled with a second bitline and disposed at a pillar pitch from the first pillar along a first imaginary line that is substantially orthogonal to the first and second bitlines; anda second pillar grouping having at least a third pillar electrically coupled with a third bitline and shifted by at least a quarter of the pillar pitch from the first pillar along a second imaginary line that is substantially orthogonal to the bitlines, and a fourth pillar electrically coupled with a fourth bitline and disposed at the pillar pitch from the third pillar and shifted by the quarter of the pillar pitch from the second pillar along the second imaginary line. 2. The apparatus of claim 1, wherein the first pillar grouping further includes a fifth pillar electrically coupled with a fifth bitline, and a sixth pillar electrically coupled with a sixth bitline and disposed at the pillar pitch from the fifth pillar along a third imaginary line that is substantially orthogonal to the first and second bitlines, andwherein the second pillar grouping further includes a seventh pillar electrically coupled with a seventh bitline and shifted by at least a quarter of the pillar pitch from the fifth pillar along a fourth imaginary line that is substantially orthogonal to the first and second bitlines, and an eighth pillar electrically coupled with an eighth bitline and disposed at the pillar pitch from the seventh pillar and shifted by the quarter of the pillar pitch from the sixth pillar along the fourth imaginary line. 3. The apparatus of claim 2, wherein the first and second imaginary lines are disposed at a first distance from each other.4. 
The apparatus of claim 3, wherein the second and third imaginary lines are disposed at a second distance from each other, wherein the second distance is different from the first distance. 5. The apparatus of claim 4, wherein the first and second distances are to provide a desired spacing between the pillars of the first and second groupings. 6. The apparatus of claim 2, wherein the first and fifth bitlines are disposed at a characteristic bitline pitch from each other, the first and sixth bitlines are disposed at the characteristic bitline pitch from each other, and the sixth and second bitlines are disposed at the characteristic bitline pitch from each other. 7. The apparatus of claim 6, wherein the third bitline is disposed between the fifth and first bitlines at least a half of the characteristic bitline pitch from the fifth and first bitlines, wherein the fourth bitline is disposed between the sixth and second bitlines at the half of the characteristic bitline pitch from the sixth and second bitlines. 8. The apparatus of claim 1, wherein each of the pillars in the first and second groupings is encompassed by a drain-side select gate (SGD). 9. The apparatus of any of claims 1 to 8, wherein the apparatus comprises a three- dimensional (3D) memory array. 10. The apparatus of claim 9, wherein the 3D memory array comprises a 3D NAND memory array. 11. 
An apparatus, comprising a substantially hexagonal arrangement having seven pillars disposed in a die in a repeating pattern, wherein the arrangement includes first and second pillars disposed at a pillar pitch from each other in a first row of the arrangement, third, fourth, and fifth pillars disposed at the pillar pitch from each other in a second row of the arrangement, and sixth and seventh pillars disposed at the pillar pitch from each other in a third row of the arrangement and shifted relative to the first and second pillars respectively by at least a quarter of the pillar pitch in a direction that is substantially orthogonal to a plurality of bitlines disposed in the die, wherein each pillar in the arrangement is electrically coupled with a different bitline of the plurality of bitlines. 12. The apparatus of claim 11, wherein each of the pillars in the arrangement is encompassed by a drain-side select gate (SGD). 13. The apparatus of claim 11, wherein the apparatus comprises a three-dimensional (3D) memory array. 14. The apparatus of claim 11, wherein the bitlines are disposed at least half of a characteristic bitline pitch from each other. 15. The apparatus of any of claims 11 to 14, wherein the first and second rows are disposed at a first distance from each other, wherein the second and third rows are disposed at a second distance from each other, wherein the second distance is different from the first distance. 16. The apparatus of claim 15, wherein the first and second distances are to provide a desired spacing between the pillars of the arrangement. 17. 
A method for providing a memory device, comprising:disposing a plurality of bitlines in a die;disposing a substantially hexagonal arrangement having seven pillars in the die, including:disposing first and second pillars at a pillar pitch from each other in a first row of the arrangement;disposing third, fourth, and fifth pillars at the pillar pitch from each other in a second row of the arrangement; and disposing sixth and seventh pillars at the pillar pitch from each other and shifted relative to the first and second pillars respectively by at least a quarter of the pillar pitch in a direction that is substantially orthogonal to the plurality of bitlines; andelectrically coupling each pillar in the arrangement with a different bitline of the plurality of bitlines. 18. The method of claim 17, further comprising: electrically coupling thearrangement with a drain-side select gate (SGD). 19. The method of any of claims 17 to 18, further comprising: repeating the disposing of the arrangement in the die, to provide a structure comprising a three-dimensional (3D) memory array. 20. The method of claim 19, wherein the structure comprises a 3D NAND memory array.
PILLAR ARRANGEMENT IN NAND MEMORY

Cross-Reference to Related Application

This application claims priority to U.S. Application No. 14/667,331, filed March 24, 2015, and entitled "PILLAR ARRANGEMENT IN NAND MEMORY," which is hereby incorporated by reference herein in its entirety for all purposes.

Field

Embodiments of the present disclosure generally relate to the field of integrated circuits (IC), and more particularly, to techniques and configurations for providing pillar arrangements in vertical memory, such as a three-dimensional NAND memory.

Background

Memory provides data storage for electronic systems. Flash memory is one of various memory types, which has numerous uses in modern computers and devices. A typical flash memory may comprise a memory array that includes a large number of non-volatile memory cells arranged in row and column fashion. The cells may usually be grouped into blocks. Each of the cells within a block may be electrically programmed by charging a floating gate. The charge may be removed from the floating gate by a block erase operation. Data may be stored in a cell as charge in the floating gate. A NAND memory array may comprise a basic architecture of flash memory. In recent years, vertical memory, such as three-dimensional (3D) memory, has been developed. A 3D flash memory (e.g., 3D NAND memory array) device may include a plurality of strings of charge storage devices (memory cells) stacked over one another (e.g., in a first of three dimensions of 3D), with each charge storage device corresponding to one of multiple tiers of the device. The charge storage devices of a respective string may share a common channel region, such as one formed in a respective pillar of semiconductor material (e.g., polysilicon) about which the string of charge storage devices may be formed. In a second dimension, each first group of the plurality of strings may comprise, for example, a group of strings sharing a plurality of access lines, known as wordlines (WLs). 
Each of the plurality of access lines may couple (e.g., electrically or otherwise operably connect) the charge storage devices (memory cells) corresponding to a respective tier of the plurality of tiers of each string. The charge storage devices coupled by the same access line (and thus corresponding to the same tier) may be logically grouped into memory pages, when each charge storage device comprises a multi-level cell capable of storing two bits of information. In a third dimension, each group of the plurality of strings may comprise a group of strings coupled by corresponding data lines, known as bitlines (BLs). During operation of a computing device, data stored in the memory may be subjected to periodic (e.g., continuous) manipulations. These manipulations may be caused by internal control mechanisms directed, for example, to optimize memory capacity, location areas, speed of access to memory, and the like. For example, the data may be moved from one area of memory to another area, copied from one area to another area, and the like. Accordingly, time to internally access data stored in a memory unit (e.g., memory block) may become an important factor in the overall speed of manipulation of the data in the memory. For example, the lower the access time to a memory block, the lower is the time for an operation related to internal data manipulation.

Brief Description of the Drawings

Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. 
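The three-dimensional organization described above — strings of cells stacked in tiers, with a wordline selecting a tier across strings and a bitline selecting a string — can be modeled with a minimal sketch. All class and method names below are illustrative, not from the disclosure:

```python
# Minimal illustrative model of the 3D organization described above: a cell
# is addressed by (string/bitline, tier), where the wordline selects the tier
# and the bitline selects the string (one pillar per string).
class NAND3DBlock:
    def __init__(self, num_bitlines, num_tiers):
        self.num_bitlines = num_bitlines
        self.num_tiers = num_tiers
        # One string per bitline; one charge storage device per tier per string.
        self.cells = [[0] * num_tiers for _ in range(num_bitlines)]

    def page(self, tier):
        """Cells coupled by the same wordline (same tier) form a page."""
        return [string[tier] for string in self.cells]

block = NAND3DBlock(num_bitlines=4, num_tiers=8)
print(len(block.page(0)))  # one cell per string/bitline -> 4
```

The model makes the page grouping concrete: selecting one wordline (tier) touches exactly one cell in every string of the block.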
Embodiments are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings.

FIG. 1 is a side view of an example apparatus comprising a 3D memory array, in accordance with some embodiments of the present disclosure.

FIG. 2 is a top view of an example apparatus comprising a 3D memory array that includes pillars in a shifted pillar arrangement, compared to a 3D memory array with a hexagonal pillar arrangement, in accordance with some embodiments.

FIG. 3 illustrates a top view of an example portion of a 3D memory array with a shifted pillar arrangement, in accordance with some embodiments.

FIG. 4 is a flow diagram for a method of fabricating an apparatus comprising a 3D memory array with a shifted pillar arrangement, in accordance with some embodiments.

FIG. 5 schematically illustrates an example computing device 500, in accordance with some embodiments.

Detailed Description

Embodiments of the present disclosure describe techniques and configurations for providing an apparatus comprising a 3D memory array with a shifted pillar arrangement. In one embodiment, the apparatus may include a substantially hexagonal arrangement having seven semiconductor pillars disposed in a die in a repeating pattern. The arrangement may include first and second pillars disposed at a pillar pitch from each other in a first row; third, fourth, and fifth pillars disposed at the pillar pitch from each other in a second row; and sixth and seventh pillars disposed at the pillar pitch from each other in a third row and shifted relative to the first and second pillars, respectively, by a quarter of the pillar pitch in a direction that is substantially orthogonal to bitlines disposed in the die.
Each pillar in the arrangement may be electrically coupled with a different bitline.

In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations.

In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments in which the subject matter of the present disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

For the purposes of the present disclosure, the phrase “A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C).

The description may use perspective-based descriptions such as top/bottom, in/out, over/under, and the like.
Such descriptions are merely used to facilitate the discussion and are not intended to restrict the application of embodiments described herein to any particular orientation.

The description may use the phrases “in an embodiment” or “in embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous.

The term “coupled with,” along with its derivatives, may be used herein. “Coupled” may mean one or more of the following. “Coupled” may mean that two or more elements are in direct physical or electrical contact. However, “coupled” may also mean that two or more elements indirectly contact each other, but yet still cooperate or interact with each other, and may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact.

FIG. 1 is a side view of an example apparatus comprising a 3D memory array, in accordance with some embodiments. More specifically, the apparatus 100 may include a plurality of vertically stacked memory cells 15 arranged in rows and columns along with access lines (e.g., wordlines, not shown) and data lines (e.g., bitlines, not shown).

In embodiments, the memory cells 15 may be stacked upon each other in vertical stacks, or pillars, 12, 13, 14 to form a 3D memory structure. Breaks are provided within the pillars 12, 13, 14 of memory cells 15 to indicate that there may be additional memory cells besides those shown. Any suitable number of memory cells may be present.
For instance, the individual stacks (pillars) 12-14 may comprise 8 memory cells, 16 memory cells, 32 memory cells, 64 memory cells, …, 256 memory cells, 512 memory cells, etc.

The pillars 12-14 may be provided over an electrically conductive material 16, which in turn may be supported by a semiconductor base (die) 18. A break is provided between the material 16 and the base 18 to indicate that there may be additional materials and/or integrated circuit structures between the base 18 and the material 16. Similarly, a break is provided between the material 16 and each of the stacks 12-14 to indicate that there may be additional materials and/or integrated circuit structures between the pillars 12, 13, 14 and the material 16. The material 16 may comprise a common source and/or a source-side select gate (SGS), with the term “source-side” indicating that the material 16 is on the source side of the stacks (pillars) 12-14. The material 16 may comprise, for example, p-type doped silicon and/or other suitable conductively doped semiconductor material. Bitlines (not shown) may be provided above the material 16, with such bitlines being “drain” connections to the stacks (pillars). The semiconductor base 18 may comprise semiconductor material, and in some embodiments may comprise monocrystalline silicon.

Drain-side select gate (SGD) devices 20, 21, 22 (e.g., transistors having the SGD devices as control gates) may be provided over the pillars 12, 13, 14, respectively. The SGD devices 20, 21, 22 may comprise one or more of various metals (for instance, tungsten, titanium, etc.), metal-containing compositions (for instance, metal silicide, metal nitride, etc.), and conductively doped semiconductor materials (for instance, conductively doped silicon). The SGD devices 20, 21, 22 are drain-side devices in that they are on the drain side of the pillars 12, 13, 14.
The pillars 12, 13, 14 may be disposed on the base 18 at a distance (pillar pitch) 30 from each other.

It will be appreciated that the front view of the apparatus 100 in FIG. 1 illustrates a front “tile” of pillars 12, 13, 14. A plurality of pillars comprising the memory array of the apparatus 100 may be arranged on the base (die) 18 in a number of different spatial configurations, depending on technological requirements of the apparatus 100. In some embodiments, some pillars may be disposed so as to be spatially shifted relative to each other by a distance that may comprise a fraction of the pillar pitch 30. For example, pillar 40 (illustrated in dashed lines) may be spatially disposed at a horizontal distance 32 from pillar 12 that may comprise a fraction (e.g., a quarter) of the pillar pitch 30. Such a pillar arrangement of the memory array of the apparatus 100 will be referred to as a shifted pillar arrangement and is described in detail in reference to FIG. 2.

FIG. 2 is a top view of an example apparatus comprising a 3D memory array that includes pillars in a shifted pillar arrangement, compared to a 3D memory array with a hexagonal pillar arrangement, in accordance with some embodiments of the present disclosure. More specifically, FIG. 2 illustrates a top view of a 3D memory array 200 having a hexagonal closest-packed pillar arrangement and a top view of a 3D memory array 250 having a shifted pillar arrangement. The memory array 200 is provided next to the array 250 in FIG. 2 for illustration purposes, in order to provide a comparison with the shifted pillar arrangement of the 3D memory array 250 in accordance with embodiments of the present disclosure. The memory arrays 200 and 250 are shown as disposed respectively on dies 202 and 252 of the same width W. It will be appreciated that perspective-based descriptors, such as vertical or horizontal, may be used to facilitate discussion.
These descriptors do not limit the implementations of embodiments of the disclosure.

The memory array 200 may comprise bitlines 210, 212, 214, 216, 218 disposed at a characteristic (e.g., standard) bitline pitch BLP from each other to vertically traverse the memory array 200. The term bitline pitch, as used herein, may refer to the distance between the center of one bitline and the center of the adjacent bitline in a direction of the wordlines (not shown), or in a direction perpendicular to the bitlines. In embodiments, the characteristic bitline pitch BLP may comprise about 82 nm.

The memory array 250 with the shifted pillar arrangement may comprise bitlines 260, 262, 264, 266, 268 that are disposed at the characteristic bitline pitch BLP from each other to vertically traverse the array 250. The bitlines 260, 262, 264, 266, 268 of the array 250 may correspond to the respective bitlines 210, 212, 214, 216, 218 of the array 200. It will be appreciated that the number of bitlines shown in the arrays 200 and 250 is provided for illustration purposes only and does not limit this disclosure. Any number of bitlines may be disposed on the dies 202 and 252, depending on the width W of the die.

In addition to the bitlines 260, 262, 264, 266, 268, bitlines 270, 272, 274, 276 may be disposed in the die 252 to vertically traverse the array 250 such that the distance between adjacent bitlines, a full bitline pitch (FBLP), comprises a fraction of BLP, e.g., half of the characteristic bitline pitch BLP, as shown in the array 250. For example, the distance between bitlines 260 and 270, 270 and 262, 262 and 272, 272 and 264, 264 and 274, 274 and 266, 266 and 276, and 276 and 268 may comprise at least a half of BLP, or FBLP. In embodiments, FBLP may comprise about 41 nm.
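The effect of halving the pitch on the bitline count can be sketched as follows; this is an illustrative calculation, and the die width used is an arbitrary assumed value, not a figure from the disclosure:

```python
# Illustrative sketch: count how many bitline tracks fit across a die of a
# given width at the characteristic pitch BLP versus the halved pitch FBLP.
BLP_NM = 82            # characteristic bitline pitch (stated as about 82 nm)
FBLP_NM = BLP_NM // 2  # halved pitch (stated as about 41 nm)

def bitline_count(die_width_nm: int, pitch_nm: int) -> int:
    """Number of bitline tracks that fit across the die at the given pitch."""
    return die_width_nm // pitch_nm

W_NM = 2624  # assumed die width for illustration only
print(bitline_count(W_NM, BLP_NM))   # 32 bitlines at the characteristic pitch
print(bitline_count(W_NM, FBLP_NM))  # 64 bitlines at the halved pitch
```

For any die width, the count at FBLP is at least double the count at BLP, which is the doubling the text describes.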
In other words, the number of bitlines in the array 250 may at least double, compared to the array 200, by disposing bitlines at half of the characteristic bitline pitch, i.e., at FBLP.

The memory arrays 200 and 250 may each comprise a plurality of pillars (e.g., 230 and 232 in the array 200, and 290 and 292 in the array 250) that may be electrically connected to corresponding bitlines. The memory arrays 200 and 250 may further include a number of electrical lines arranged parallel to the wordlines (not shown) that may be selectively controlled to access various sets of the memory cells in respective pillars. The electrical lines may include select gate drain (SGD) devices, described in reference to FIG. 1, where each SGD device may be electrically coupled with a respective pillar to control selection of pillars corresponding to a particular wordline. The electrical lines connecting SGD devices will be called SGD lines for purposes of description. The wave-like lines 220, 222, and 224 shown in the memory array 200 may delineate the SGD lines associated with the memory array 200. For example, two SGD lines 226 and 228 are shown as formed between the wave-like lines 220 and 222, and 222 and 224, respectively. Similarly, an SGD line 286 may be formed between the wave-like lines 280 and 282 in the memory array 250.

As described above, the pillars in the memory array 200 may be arranged in a repeating pattern comprising a hexagonal closest-packed pillar arrangement formed, e.g., by pillars 234, 236, 238, 240, 242, 246, and 248. As shown, the pillars 234 and 236 may be disposed along a first imaginary line 302, and the pillars 246, 248, and 238 may be disposed along a second imaginary line 304. The imaginary lines 302 and 304 may be substantially orthogonal to the bitlines 210, 212. As shown, a distance between the imaginary lines 302 and 304, and accordingly, a vertical distance between the pillars 234, 236 and 246, 248, 238, may be L1.
Note that the above-mentioned pillars may be electrically coupled with the SGD line 226. A distance between the imaginary lines 304 and 306, and accordingly, a vertical distance between the pillars 246, 248, 238 and 242, 240, may be L2. The distance L1 may be different from (e.g., smaller than) the distance L2, because the pillars 246, 248, and 238 may be coupled with the SGD line 226, whereas pillars 242 and 240 may be coupled with an adjacent SGD line, e.g., the SGD line 228.

In some embodiments, L1 may comprise about 143 nm, and L2 may comprise about 150 nm, to maintain desired spacings S1 and S2 between pillars, e.g., spacing S1 between pillars 248 and 236, and S2 between pillars 248 and 240. In some embodiments, S1 may comprise about 164 nm, and S2 may comprise about 171 nm. As shown, adjacent pillars disposed on the same imaginary line (e.g., pillars 234 and 236) may be disposed at a distance (pillar pitch P) from each other in the memory array 200. It will be appreciated that each of the pillars electrically coupled with the same SGD line may be electrically coupled to a single, respective bitline. For example, pillar 246 is coupled with bitline 210, pillar 234 is coupled with bitline 212, pillar 248 is coupled with bitline 214, and so on.

As shown in the memory array 250, pillars 294, 296, 298, 348, and 346 may substantially repeat the pattern formed by the pillars 234, 236, 238, 248, and 246. It should be noted that adjacent pillars disposed on the same imaginary line (e.g., pillars 294 and 296) may be disposed at the pillar pitch P from each other in the memory array 250. In contrast to the pillar arrangement of the memory array 200, pillars 340 and 342, disposed along an imaginary line 299 that is orthogonal to the bitlines 260, 270, may be disposed with an offset (e.g., shift) relative to the pillars 294, 296 along the imaginary line 299.
For example, pillar 342 may be shifted by a fraction, e.g., at least a quarter Q of the pillar pitch, from pillar 294, and pillar 340 may be shifted by at least Q from pillar 296. Similarly, a pair of pillars 356, 358 may be shifted along an imaginary line 293 by the quarter Q of the pillar pitch from the pair of pillars 346 and 348, respectively. Note that the horizontal pillar pitch approximately equals four bitline pitches.

It should be noted that pillars 294 and 296 may be coupled with bitlines 262 and 266, which may correspond to bitlines 212 and 216 of the memory array 200. Pillars 342 and 340 (shifted relative to pillars 294 and 296) may be coupled with bitlines 270 and 274, which may be additionally disposed at a quarter horizontal pillar pitch from bitlines 262 and 266, as described above. Similarly, pillars 346 and 348 may be coupled with bitlines 260 and 264, which may correspond to bitlines 210 and 214 of the memory array 200. Pillars 356 and 358 (shifted relative to 346 and 348) may be coupled with bitlines 261 and 272, which may be additionally disposed at a quarter horizontal pillar pitch from bitlines 260 and 264.

Each pillar in the shifted pillar arrangement that is associated with the SGD line 286 may be electrically coupled with a different bitline of the memory array 250. For example, pillar 356 may be coupled with bitline 261, pillar 346 may be coupled with the bitline 260, pillar 342 may be coupled with the bitline 270, pillar 294 may be coupled with the bitline 262, and so on. The shifted pillar pattern described above may be repeated to form the memory array 250.

Spacing between the pillars 294 and 348 may be maintained the same as the spacing S1 between the corresponding pillars 248, 236 of the memory array 200. However, spacing between shifted pillar 342 and pillar 348, or between shifted pillar 340 and pillar 348, may increase, in order to maintain desired spacing between pillars.
For example, spacing S3 between shifted pillar 342 and pillar 348 may be about 200 nm. Accordingly, the distance between imaginary lines 295 and 297 may be L1 (the same as the corresponding distance in the memory array 200), while the distance between imaginary lines 297 and 299 may increase to L2 + X, compared to the corresponding distance L2. In embodiments, L2 + X may be about 159 nm.

The shifted pillar arrangement described in reference to the memory array 250 may have a number of advantages compared to the hexagonal pillar arrangement of the memory array 200. For example, the shifted pillar arrangement of the memory array 250 may provide at least twice as many rows of pillars for a given SGD line, compared to the memory array 200. For example, the memory array 250 comprises four rows of pillars disposed along imaginary lines 295, 297, 299, and 293 for the given SGD line 286, compared to two rows of pillars disposed along the imaginary lines 302 and 304 for the given SGD line 226 in the memory array 200. Effectively, the height of an SGD line in the memory array 250 may at least double, compared to the height of an SGD line in the memory array 200. Accordingly, the height of the memory array 250 may increase in the vertical direction (illustrated by arrow 281), compared to the memory array 200, but the width of the memory array 250 may decrease in the horizontal direction compared to the width (W) of the memory array 200. Thus, the die size (density) of the memory array 250 may remain the same as the die size (density) of the memory array 200 (e.g., in the horizontal direction).
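As a rough plausibility check (not part of the disclosure), the quoted spacings are consistent with simple plane geometry if adjacent rows are assumed to be offset horizontally by half the pillar pitch, plus an extra quarter pitch for the shifted rows, with the pillar pitch taken as four fine bitline pitches (about 164 nm):

```python
from math import hypot

# Assumption-laden sketch: pillar-to-pillar spacing between adjacent rows
# follows from S = hypot(horizontal_offset, row_separation).
P = 164.0              # pillar pitch within a row, nm (assumed: 4 x 41 nm)
L1, L2 = 143.0, 150.0  # row separations in array 200, nm (from the text)
L2_PLUS_X = 159.0      # widened row separation in array 250, nm (from the text)

S1 = hypot(P / 2, L1)                 # hexagonal half-pitch offset
S2 = hypot(P / 2, L2)
S3 = hypot(P / 2 + P / 4, L2_PLUS_X)  # shifted row: extra quarter-pitch offset

print(round(S1), round(S2), round(S3))  # close to the stated 164, 171, 200 nm
```

The computed values land within about 1 nm of the stated S1, S2, and S3, which supports the reading that the shifted rows carry a quarter-pitch horizontal offset on top of the half-pitch hexagonal offset.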
Further, due to the shifted pillar arrangement, the bitline pitch of the memory array 250 may be reduced (e.g., at least by half) compared to the bitline pitch BLP of the memory array 200, allowing for allocation of at least twice as many bitlines in the same width W of the die comprising the memory array 250. Due to the increased (e.g., doubled) bitline allocation and the corresponding increase in the height of the memory array 250, the density of memory cells in the memory array 250 may remain the same as that of the memory array 200. Reducing the bitline pitch to at least half of the characteristic bitline pitch and offsetting (shifting) pillars with respect to each other for each SGD line may effectively reduce the number of pages per block of the memory array 250, compared to the memory array 200. Accordingly, a copy time (e.g., the time to copy a block of the memory array 250) may be reduced, compared to the copy time of a block of the memory array 200.

If the physical width of the array 250 is kept the same as the width of the array 200, then the number of bitlines may double. In this case, for a given die density, the height of the array 250 remains the same as that of the array 200, but the number of SGDs of the array 250 halves. Hence, for the same number of blocks in both arrays 200 and 250, the number of SGDs and pages per block also halves for the array 250. Hence, the block copy time may also be halved for the array 250 relative to the array 200.

FIG. 3 is a top view of an example portion of a memory array with a shifted pillar arrangement, in accordance with some embodiments of the present disclosure. The memory array 300 may include multiple memory sub-blocks 302, 304, 306 (only three sub-blocks are shown in FIG. 3 for simplicity purposes). For illustration purposes, the memory sub-block 302 is shown as having four rows A, B, C, D of pillars 310 electrically connected to respective bitlines 312. Also for illustration purposes, it may be assumed that the rows A, B, C, D of pillars 310 may be coupled with a single SGD line.
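The copy-time argument above reduces to simple arithmetic, captured in the toy model below; the per-page copy time and the page count are arbitrary assumed values, not figures from the disclosure:

```python
# Toy model of the block-copy argument: at fixed die width, doubling the
# bitlines halves the SGD lines per block, which halves the pages per block
# and hence the block copy time.
T_PAGE_US = 50  # assumed time to copy one page, microseconds

def block_copy_time_us(pages_per_block: int) -> int:
    """Total block copy time under a simple pages x per-page-time model."""
    return pages_per_block * T_PAGE_US

pages_array_200 = 512                   # assumed pages per block, array 200
pages_array_250 = pages_array_200 // 2  # bitlines doubled -> pages halved

print(block_copy_time_us(pages_array_200))  # copy time for array 200
print(block_copy_time_us(pages_array_250))  # half that, for array 250
```

Whatever per-page time is assumed, halving the pages per block halves the block copy time, which is the conclusion drawn above.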
As shown, only one pillar may intercept a given bitline on a given SGD line. Based on the shifted pillar arrangement described in reference to FIG. 2, it may be seen that in block 302, row B of pillars may be shifted two bitlines with respect to row A, row C may be shifted one bitline with respect to row B, row D may be shifted two bitlines with respect to row C, and row A of block 304 may be shifted two bitlines with respect to row D, both within block 304 and across a sub-block boundary 320. It will be understood that row A may correspond to imaginary line 295 of FIG. 2, and row C may correspond to imaginary line 299 of FIG. 2. Accordingly, row C may be shifted by a quarter of a pillar pitch, or one bitline, with respect to row A, as described in reference to FIG. 2. Similarly, row D may be shifted by a quarter of the pillar pitch, or one bitline, with respect to row B, and so on.

FIG. 4 is a flow diagram for a method of fabricating an apparatus comprising a 3D memory array with a shifted pillar arrangement, in accordance with some embodiments. The method 400 may comport with actions described in connection with FIGS. 2-3 in some embodiments.

At block 402, the method 400 may include disposing a plurality of bitlines in a die.
As discussed in reference to FIG. 2, the bitlines may be disposed at least a half of the characteristic bitline pitch from each other.

At block 404, the method 400 may further include disposing a substantially hexagonal arrangement having seven semiconductor pillars in the die, including disposing first and second pillars at a pillar pitch from each other in a first row of the arrangement.

At block 406, the method 400 may further include disposing third, fourth, and fifth pillars at the pillar pitch from each other in a second row of the arrangement.

At block 408, the method 400 may further include disposing sixth and seventh pillars at the pillar pitch from each other and shifted relative to the first and second pillars, respectively, by a fraction (e.g., at least a quarter) of the pillar pitch in a direction that is substantially orthogonal to the plurality of bitlines.

At block 410, the method 400 may further include electrically coupling each pillar in the arrangement with a different bitline of the plurality of bitlines.

At block 412, the method 400 may further include electrically coupling the arrangement with a drain-side select gate.

The method 400 may be performed in different ways to provide a 3D memory array with the shifted pillar configuration. For example, as discussed above, the substantially hexagonal arrangement with shifted pillars may be a repeating pattern forming the 3D memory array. Accordingly, there may be different ways of providing the repeating pattern with a shifted pillar arrangement on a die, in order to form the 3D memory array.

Various operations of the method 400 are described as multiple discrete operations, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed to imply that these operations are necessarily order dependent.
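The seven-pillar repeating unit that method 400 produces can be sketched numerically. In the hypothetical coordinates below (an illustrative assumption, not taken from the disclosure), positions are expressed in units of one fine bitline pitch, i.e., a quarter of the pillar pitch, so a pillar's x-coordinate is simply the index of the bitline it would couple to:

```python
# Sketch of the seven-pillar repeating unit: three rows, with the third row
# shifted a quarter pitch relative to the first. Units: one fine bitline pitch.
PILLAR_PITCH = 4  # one pillar pitch spans four bitline pitches (per the text)

first_row = [2, 6]       # first and second pillars (half-pitch hexagonal offset)
second_row = [0, 4, 8]   # third, fourth, and fifth pillars
# Sixth and seventh pillars: shifted a quarter pitch (= one bitline) from row 1.
third_row = [x + PILLAR_PITCH // 4 for x in first_row]

bitlines = first_row + second_row + third_row
print(sorted(bitlines))  # [0, 2, 3, 4, 6, 7, 8]

# Block 410: each pillar couples to a *different* bitline.
assert len(set(bitlines)) == len(bitlines)
```

With these assumed offsets, the seven pillars land on seven distinct bitline indices, matching the requirement of block 410 that each pillar couple to a different bitline.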
It will be appreciated that the sequence of operations associated with the method 400 may vary and/or include other actions in accordance with the present disclosure.

The memory arrays and methods described herein may be implemented into a system using any suitable hardware and/or software, configured as desired.

FIG. 5 schematically illustrates an example computing device 500, in accordance with some embodiments. The computing device 500 may include system control logic 508 coupled to one or more processor(s) 504, a memory device 512, one or more communications interface(s) 516, and input/output (I/O) devices 520.

The memory device 512 may be a non-volatile computer storage chip that includes the memory array 250 or the memory array 300. In addition to the memory array, the memory device 512 may include a package, having the memory array 250 or 300 disposed therein, driver circuitry (e.g., drivers), input/output connections to electrically couple the memory device 512 with other components of the computing device 500, etc. The memory device 512 may be configured to be removably or permanently coupled with the computing device 500.

Communications interface(s) 516 may provide an interface for the computing device 500 to communicate over one or more network(s) and/or with any other suitable device. Communications interface(s) 516 may include any suitable hardware and/or firmware. Communications interface(s) 516 for one embodiment may include, for example, a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem. For wireless communications, communications interface(s) 516 for one embodiment may use one or more antennas to communicatively couple the computing device 500 with a wireless network.

For one embodiment, at least one of the processor(s) 504 may be packaged together with logic for one or more controller(s) of system control logic 508.
For one embodiment, at least one of the processor(s) 504 may be packaged together with logic for one or more controller(s) of system control logic 508 to form a System in Package (SiP). For one embodiment, at least one of the processor(s) 504 may be integrated on the same die with logic for one or more controller(s) of system control logic 508. For one embodiment, at least one of the processor(s) 504 may be integrated on the same die with logic for one or more controller(s) of system control logic 508 to form a System on Chip (SoC).

System control logic 508 for one embodiment may include any suitable interface controllers to provide for any suitable interface to at least one of the processor(s) 504 and/or to any suitable device or component in communication with system control logic 508. The system control logic 508 may move data into and/or out of the various components of the computing device 500.

System control logic 508 for one embodiment may include a memory controller 524 to provide an interface to the memory device 512 to control various memory access operations. The memory controller 524 may include control logic 528 that is specifically configured to control the memory device 512 as described herein. In various embodiments, the control logic 528 may include instructions stored in a non-transitory computer-readable medium (e.g., the memory device 512 or other memory/storage) that, when executed by at least one of the processor(s) 504, cause the memory controller 524 to perform the above-described operations.

In various embodiments, the I/O devices 520 may include user interfaces designed to enable user interaction with the computing device 500, peripheral component interfaces designed to enable peripheral component interaction with the computing device 500, and/or sensors designed to determine environmental conditions and/or location information related to the computing device 500.
In various embodiments, the user interfaces could include, but are not limited to, a display (e.g., a liquid crystal display, a touch screen display, etc.), a speaker, a microphone, one or more digital cameras to capture pictures and/or video, a flashlight (e.g., a light-emitting diode flash), and a keyboard. In various embodiments, the peripheral component interfaces may include, but are not limited to, a non-volatile memory port, an audio jack, and a power supply interface. In various embodiments, the sensors may include, but are not limited to, a gyro sensor, an accelerometer, a proximity sensor, an ambient light sensor, and a positioning unit. The positioning unit may additionally/alternatively be part of, or interact with, the communications interface(s) 516 to communicate with components of a positioning network, e.g., a global positioning system (GPS) satellite.

In various embodiments, the computing device 500 may be a mobile computing device such as, but not limited to, a laptop computing device, a tablet computing device, a netbook, a smartphone, etc.; a desktop computing device; a workstation; a server; etc. The computing device 500 may have more or fewer components and/or different architectures.
In further implementations, the computing device 500 may be any other electronic device that processes data.

According to various embodiments, the present disclosure describes a number of examples.

Example 1 is an apparatus, comprising: a plurality of pillars disposed in a die, wherein the plurality of pillars comprises: a first pillar grouping having at least a first pillar electrically coupled with a first bitline and a second pillar electrically coupled with a second bitline and disposed at a pillar pitch from the first pillar along a first imaginary line that is substantially orthogonal to the first and second bitlines; and a second pillar grouping having at least a third pillar electrically coupled with a third bitline and shifted by at least a quarter of the pillar pitch from the first pillar along a second imaginary line that is substantially orthogonal to the bitlines, and a fourth pillar electrically coupled with a fourth bitline and disposed at the pillar pitch from the third pillar and shifted by the quarter of the pillar pitch from the second pillar along the second imaginary line.

Example 2 may include the subject matter of Example 1, wherein the first pillar grouping further includes a fifth pillar electrically coupled with a fifth bitline, and a sixth pillar electrically coupled with a sixth bitline and disposed at the pillar pitch from the fifth pillar along a third imaginary line that is substantially orthogonal to the first and second bitlines, and wherein the second pillar grouping further includes a seventh pillar electrically coupled with a seventh bitline and shifted by at least a quarter of the pillar pitch from the fifth pillar along a fourth imaginary line that is substantially orthogonal to the first and second bitlines, and an eighth pillar electrically coupled with an eighth bitline and disposed at the pillar pitch from the seventh pillar and shifted by the quarter of the pillar pitch from the sixth pillar along the fourth imaginary
line.

Example 3 may include the subject matter of Example 2, wherein the first and second imaginary lines are disposed at a first distance from each other.

Example 4 may include the subject matter of Example 3, wherein the second and third imaginary lines are disposed at a second distance from each other, wherein the second distance is different from the first distance.

Example 5 may include the subject matter of Example 4, wherein the first and second distances are to provide a desired spacing between the pillars of the first and second groupings.

Example 6 may include the subject matter of Example 2, wherein the first and fifth bitlines are disposed at a characteristic bitline pitch from each other, the first and sixth bitlines are disposed at the characteristic bitline pitch from each other, and the sixth and second bitlines are disposed at the characteristic bitline pitch from each other.

Example 7 may include the subject matter of Example 6, wherein the third bitline is disposed between the fifth and first bitlines at least a half of the characteristic bitline pitch from the fifth and first bitlines, and wherein the fourth bitline is disposed between the sixth and second bitlines at the half of the characteristic bitline pitch from the sixth and second bitlines.

Example 8 may include the subject matter of Example 1, wherein each of the pillars in the first and second groupings is encompassed by a drain-side select gate (SGD).

Example 9 may include the subject matter of any of Examples 1 to 8, wherein the apparatus comprises a three-dimensional (3D) memory array.

Example 10 may include the subject matter of Example 9, wherein the 3D memory array comprises a 3D NAND memory array.

Example 11 is an apparatus, comprising a substantially hexagonal arrangement having seven pillars disposed in a die in a repeating pattern, wherein the arrangement includes first and second pillars disposed at a pillar pitch from each other in a first row of the arrangement, third, fourth, and fifth
pillars disposed at the pillar pitch from each other in a second row of the arrangement, and sixth and seventh pillars disposed at the pillar pitch from each other in a third row of the arrangement and shifted relative to the first and second pillars respectively by at least a quarter of the pillar pitch in a direction that is substantially orthogonal to a plurality of bitlines disposed in the die, wherein each pillar in the arrangement is electrically coupled with a different bitline of the plurality of bitlines.Example 12 may include the subject matter of Example 11, wherein each of the pillars in the arrangement is encompassed by a drain-side select gate (SGD).Example 13 may include the subject matter of Example 11, wherein the apparatus comprises a three-dimensional (3D) memory array.Example 14 may include the subject matter of Example 11, wherein the bitlines are disposed at least half of a characteristic bitline pitch from each other.Example 15 may include the subject matter of any of Examples 11 to 14, wherein the first and second rows are disposed at a first distance from each other, wherein the second and third rows are disposed at a second distance from each other, wherein the second distance is different from the first distance.Example 16 may include the subject matter of Example 15, wherein the first and second distances are to provide a desired spacing between the pillars of the arrangement.Example 17 is a method for providing a memory device, comprising: disposing a plurality of bitlines in a die; disposing a substantially hexagonal arrangement having seven pillars in the die, including: disposing first and second pillars at a pillar pitch from each other in a first row of the arrangement; disposing third, fourth, and fifth pillars at the pillar pitch from each other in a second row of the arrangement; and disposing sixth and seventh pillars at the pillar pitch from each other and shifted relative to the first and second pillars respectively by at 
least a quarter of the pillar pitch in a direction that is substantially orthogonal to the plurality of bitlines; and electrically coupling each pillar in the arrangement with a different bitline of the plurality of bitlines.Example 18 may include the subject matter of Example 17, further comprising:electrically coupling the arrangement with a drain-side select gate (SGD).Example 19 may include the subject matter of any of Examples 17 to 18, further comprising: repeating the disposing of the arrangement in the die, to provide a structure comprising a three-dimensional (3D) memory array.Example 20 may include the subject matter of Example 19, wherein the structure comprises a 3D NAND memory array. Various embodiments may include any suitable combination of the above- described embodiments including alternative (or) embodiments of embodiments that are described in conjunctive form (and) above (e.g., the“and” may be“and/or”). Furthermore, some embodiments may include one or more articles of manufacture (e.g., non-transitory computer-readable media) having instructions, stored thereon, that when executed result in actions of any of the above-described embodiments. Moreover, some embodiments may include apparatuses or systems having any suitable means for carrying out the various operations of the above-described embodiments.The above description of illustrated implementations, including what is described in the Abstract, is not intended to be exhaustive or to limit the embodiments of thepresent disclosure to the precise forms disclosed. While specific implementations and examples are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the present disclosure, as those skilled in the relevant art will recognize.These modifications may be made to embodiments of the present disclosure in light of the above detailed description. 
The terms used in the following claims should not be construed to limit various embodiments of the present disclosure to the specific implementations disclosed in the specification and the claims. Rather, the scope is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
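The seven-pillar arrangement recited in Example 11 can be made concrete with a small geometry sketch. The following Python snippet is illustrative only and is not part of the disclosure; the pitch, row spacing, and quarter-pitch shift are assumed values chosen to satisfy the claim language.

```python
# Illustrative sketch (not from the disclosure): pillar coordinates for the
# substantially hexagonal seven-pillar arrangement of Example 11.
PITCH = 1.0        # pillar pitch (arbitrary units; assumed value)
ROW_GAP = 0.9      # row spacing; Examples 15-16 allow unequal row spacings
SHIFT = PITCH / 4  # third row shifted by a quarter pitch relative to the first

def seven_pillar_arrangement():
    """Return (x, y) coordinates for the seven pillars, row by row."""
    row1 = [(0.0, 0.0), (PITCH, 0.0)]                            # pillars 1-2
    row2 = [(-PITCH / 2, ROW_GAP), (PITCH / 2, ROW_GAP),
            (3 * PITCH / 2, ROW_GAP)]                            # pillars 3-5
    row3 = [(SHIFT, 2 * ROW_GAP), (PITCH + SHIFT, 2 * ROW_GAP)]  # pillars 6-7
    return row1 + row2 + row3

pillars = seven_pillar_arrangement()
# Bitlines run orthogonal to the shift direction, so a pillar's x-coordinate
# determines which bitline it can couple to. The quarter-pitch shift makes
# all seven x-coordinates distinct, so each pillar pairs with its own bitline.
x_coords = sorted(x for x, _ in pillars)
assert len(set(x_coords)) == 7
# Adjacent x-projections are at least a quarter pitch apart (cf. Example 7).
assert min(b - a for a, b in zip(x_coords, x_coords[1:])) >= PITCH / 4
```

The point of the sketch is the distinct-bitline property: without the quarter-pitch shift, pillars 6 and 7 would project onto the same x-positions as pillars 1 and 2 and could not each couple to a different bitline.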
A method of manufacturing a semiconductor device that eliminates the n+ contact implant by using double diffused implants under the core cell contacts. The method includes forming core, n-channel and p-channel transistors in a semiconductor substrate, simultaneously forming source and drain DDI implants for the core transistors, forming source and drain Mdd implants for the core transistors, forming source and drain Pldd implants for the p-channel transistors, forming source and drain Nldd implants for the n-channel transistors, forming sidewall spacers on the core, n-channel and p-channel transistors, forming N+ implants for the n-channel transistors, forming P+ implants for the p-channel transistors, and forming P+ contact implants for the p-channel transistors.
What is claimed is:

1. A method of manufacturing a semiconductor device that eliminates the n+ contact implant by using double diffused implants under the core cell contacts, the method comprising: (a) forming core transistors, n-channel transistors and p-channel transistors in a semiconductor substrate; (b) simultaneously forming source and drain DDI implants for the core transistors; (c) forming source and drain Mdd implants for the core transistors; (d) forming source and drain Pldd implants for the p-channel transistors; (e) forming source and drain Nldd implants for the n-channel transistors; (f) forming sidewall spacers on the core, n-channel and p-channel transistors; (g) forming N+ implants for the n-channel transistors; (h) forming P+ implants for the p-channel transistors; and (i) without forming N+ contact implants, forming P+ contact implants in the semiconductor substrate in regions in which p-channel transistors will be formed.

2. The method of claim 1 wherein step (b) is accomplished by: (j) forming a first layer of photoresist on the semiconductor substrate; (k) masking and etching the first layer of photoresist exposing source and drain regions of the core transistors; and (l) implanting the exposed source and drain regions with a DDI implant.

3. The method of claim 2 wherein step (c) is accomplished by: (m) forming a second layer of photoresist on the semiconductor substrate; (n) masking and etching the second layer of photoresist exposing source and drain regions on the semiconductor substrate in which core transistors will be formed; and (o) implanting Mdd ions into the exposed source and drain regions on the semiconductor substrate under which core transistors will be formed.

4.
The method of claim 3 wherein step (d) is accomplished by: (p) forming a third layer of photoresist on the semiconductor substrate; (q) masking and etching the third layer of photoresist exposing source and drain regions on the semiconductor substrate in which p-channel transistors will be formed; and (r) implanting Pldd ions into the exposed source and drain regions on the semiconductor substrate under which p-channel transistors will be formed.

5. The method of claim 4 wherein step (e) is accomplished by: (s) forming a fourth layer of photoresist on the semiconductor substrate; (t) masking and etching the fourth layer of photoresist exposing source and drain regions on the semiconductor substrate in which n-channel transistors will be formed; and (u) implanting Nldd ions into the exposed source and drain regions on the semiconductor substrate under which n-channel transistors will be formed.

6. The method of claim 5 wherein step (f) is accomplished by: (v) forming a layer of sidewall spacer material on the semiconductor substrate; and (w) etching the sidewall spacer material to form sidewalls on the gate structures of the core, n-channel and p-channel transistors.

7. The method of claim 6 wherein step (g) is accomplished by: (x) forming a fifth layer of photoresist on the semiconductor substrate; (y) masking and etching the fifth layer of photoresist exposing source and drain regions on the semiconductor substrate in which n-channel transistors will be formed; and (z) implanting N+ ions into the exposed source and drain regions on the semiconductor substrate.

8. The method of claim 7 wherein step (h) is accomplished by: (aa) forming a sixth layer of photoresist on the semiconductor substrate; (ab) masking and etching the sixth layer of photoresist exposing source and drain regions on the semiconductor substrate in which p-channel transistors will be formed; and (ac) implanting P+ ions into the exposed source and drain regions on the semiconductor substrate.

9.
The method of claim 8 wherein step (i) is accomplished by: (ad) forming a layer of interlayer oxide on the semiconductor substrate; (ae) forming a seventh layer of photoresist on the layer of interlayer oxide; (af) masking and etching the seventh layer of photoresist exposing the drain regions on the semiconductor substrate in which core transistors will be formed, exposing the source and drain regions on the semiconductor substrate in which n-channel transistors will be formed and exposing the source and drain regions on the semiconductor substrate in which p-channel transistors will be formed; (ag) removing the seventh layer of photoresist; (ah) forming an eighth layer of photoresist on the layer of interlayer oxide; (ai) masking and etching the eighth layer of photoresist exposing source and drain regions on the semiconductor substrate in which p-channel transistors will be formed; (aj) implanting P+ ions into the exposed source and drain regions of the semiconductor substrate in which p-channel transistors will be formed; and (ak) removing the eighth layer of photoresist.

10. A method of manufacturing a semiconductor device that eliminates the N+ contact implant by using double diffused implants under the core cell contacts, comprising: forming core transistors, n-channel transistors and p-channel transistors in a semiconductor substrate; simultaneously forming source and drain DDI implants for the core transistors; and forming source and drain Mdd implants for the core transistors.

11. The method of claim 10, further comprising forming source and drain N+ implants for the n-channel transistors and forming source and drain P+ implants for the p-channel transistors.

12. The method of claim 11, further comprising forming source and drain Pldd implants for the p-channel transistors and forming source and drain Nldd implants for the n-channel transistors.

13.
The method of claim 12, further comprising forming sidewall spacers on the core, n-channel and p-channel transistors.

14. The method of claim 13, further comprising forming N+ implants for the n-channel transistors and forming P+ implants for the p-channel transistors.
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates generally to the manufacture of high density, high performance semiconductor devices. More specifically, this invention relates to the manufacture of high density, high performance semiconductor devices utilizing a reduced number of steps during the manufacturing process.

2. Discussion of the Related Art

In order to remain competitive, a semiconductor manufacturer must continuously increase the performance of the semiconductor integrated circuits being manufactured and, at the same time, reduce the cost of the semiconductor integrated circuits. Part of the increase in performance and the reduction in cost of the semiconductor integrated circuits is accomplished by shrinking the device dimensions and by increasing the number of devices per unit area on an integrated circuit chip. Another part of reducing the cost of a semiconductor chip is to increase the throughput of the fabrication facility (the "fab"). A single semiconductor chip requires numerous process steps such as oxidation, etching, metallization and wet chemical cleaning. Some of these process steps involve placing the wafer on which the semiconductor chips are being manufactured into different tools during the manufacturing process.
As can be appreciated, a reduction in the number of process steps in which the semiconductor wafers must be moved from one tool to another can be a major increase in the throughput of the fab as well as a major decrease in the cost of manufacturing the chips on the semiconductor wafer. Therefore, what is needed are methods of reducing the number of processing steps necessary to manufacture semiconductor wafers on which semiconductor integrated chips are manufactured.

SUMMARY OF THE INVENTION

According to the present invention, the foregoing and other objects and advantages are obtained by a method of manufacturing a semiconductor memory device that reduces the number of manufacturing steps required to manufacture the device.

In accordance with an aspect of the invention, the method includes the following sequence of steps: core, n-channel and p-channel transistors are formed in a semiconductor substrate, source and drain DDI (double diffused implant) implants are simultaneously formed for the core transistors, source and drain Mdd (modified drain diffusion) implants are formed for the core transistors, source and drain Pldd (P lightly doped drain) implants are formed for the p-channel transistors, source and drain Nldd (N lightly doped drain) implants are formed for the n-channel transistors, sidewall spacers are formed on the core, p-channel and n-channel transistors, N+ implants are formed for the n-channel transistors and P+ implants are formed for the p-channel transistors.

In accordance with another aspect of the invention, P+ contact implants are formed for the p-channel transistors.

The described method thus reduces the number of manufacturing steps required to manufacture a semiconductor memory device. The present invention is better understood upon consideration of the detailed description below, in conjunction with the accompanying drawings.
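As a rough illustration of the step reduction, the prior-art flow of FIGS. 1A-1AG and the claimed flow of FIGS. 2A-2AG can be compared as step lists. The Python sketch below is a hypothetical bookkeeping aid, not part of the invention; the step names are paraphrased from the figure descriptions.

```python
# Hypothetical bookkeeping sketch: the prior-art flow opens only the core
# source regions at the DDI mask and therefore needs a separate n+ contact
# implant sequence (cf. FIGS. 1AA-1AC), which the claimed flow skips because
# the drain-contact locations already receive the DDI implant (cf. FIG. 2B).
prior_art_flow = [
    "DDI mask (core source only)", "DDI implant",
    "Mdd mask/implant", "Pldd mask/implant", "Nldd mask/implant",
    "spacer deposition/etch", "N+ mask/implant", "P+ mask/implant",
    "contact-hole etch",
    "n+ contact mask", "n+ contact implant", "n+ contact resist strip",
    "p+ contact mask/implant",
]
skipped_by_invention = {
    "n+ contact mask", "n+ contact implant", "n+ contact resist strip",
}
invention_flow = (["DDI mask (core source and drain contacts)"] +
                  [s for s in prior_art_flow[1:] if s not in skipped_by_invention])

# Three fewer mask/implant/strip steps in the claimed flow.
assert len(prior_art_flow) - len(invention_flow) == 3
```

The comparison captures the trade the invention makes: one slightly different DDI mask up front in exchange for dropping an entire photoresist-mask-implant-strip cycle later.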
As will become readily apparent to those skilled in the art from the following description, there is shown and described an embodiment of this invention simply by way of illustration of the best mode to carry out the invention. As will be realized, the invention is capable of other embodiments and its several details are capable of modifications in various obvious aspects, all without departing from the scope of the invention. Accordingly, the drawings and detailed description will be regarded as illustrative in nature and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, and further objects and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:

FIGS. 1A-1AG show a number of the process steps necessary to manufacture a semiconductor wafer in accordance with the prior art, and

FIGS. 2A-2AG show the reduced number of process steps in accordance with the present invention that are necessary to manufacture the semiconductor wafer processed in the prior art process shown in FIGS. 1A-1AG.

DETAILED DESCRIPTION

Reference is now made in detail to a specific embodiment of the present invention that illustrates the best mode presently contemplated by the inventors for practicing the invention. FIGS. 1A-1AG show a number of the process steps necessary to manufacture a semiconductor wafer in accordance with the prior art, and FIGS. 2A-2AG show the reduced number of process steps in accordance with the present invention that are necessary to manufacture the semiconductor wafer processed in the process shown in FIGS. 1A-1AG. The prior art process shown in FIGS. 1A-1AG will be discussed in conjunction with the process shown in FIGS.
2A-2AG in accordance with the present invention in order to clearly point out which process steps have been modified or eliminated.

FIG. 1A shows a portion 100 of a prior art semiconductor wafer including a core transistor 102 region, an n-channel transistor 104 region and a p-channel transistor 106 region with a layer 108 of photoresist formed over the entire semiconductor wafer including the portion 100. The line 110 indicates the separation between the core transistor 102 and the n-channel transistor 104 regions. The line 112 indicates the separation between the n-channel transistor 104 and the p-channel transistor 106 regions.

FIG. 2A shows a portion 200 of a semiconductor wafer manufactured in accordance with the present invention including a core transistor 202 region, an n-channel transistor 204 region and a p-channel transistor 206 region with a layer 208 of photoresist formed over the entire semiconductor wafer including the portion 200. The line 210 indicates the separation between the core transistor 202 and the n-channel transistor 204 regions. The line 212 indicates the separation between the n-channel transistor 204 and the p-channel transistor 206 regions.

FIG. 1B shows the portion 100 of the prior art semiconductor wafer as shown in FIG. 1A with portions of the layer 108 of photoresist removed from the semiconductor wafer in locations such as 114 in which a core transistor source region 116 is to be formed.

FIG. 2B shows the portion 200 of the semiconductor wafer as shown in FIG. 2A with portions of the layer 208 of photoresist removed from the semiconductor wafer in locations such as 214 in which a core transistor source region 216 is to be formed. In addition, portions of the layer 208 of photoresist are removed from the semiconductor wafer in locations such as 218 and 220 in which contacts to drain regions 222 and 224 are to be formed.

FIG. 1C shows the portion 100 of the prior art semiconductor wafer as shown in FIG.
1B being implanted with a DDI implant indicated by arrows 120.

FIG. 2C shows the portion 200 of the semiconductor wafer as shown in FIG. 2B being implanted with a DDI implant indicated by arrows 226.

FIG. 1D shows the portion 100 of the prior art semiconductor wafer as shown in FIG. 1C with the remaining portions of the layer 108 of photoresist removed from the semiconductor wafer and showing the implanted core transistor source region 116.

FIG. 2D shows the portion 200 of the semiconductor wafer as shown in FIG. 2C with the remaining portions of the layer 208 of photoresist removed from the semiconductor wafer and showing the implanted core transistor source region 216 and the implanted core transistor drain regions 222 and 224.

FIG. 1E shows the portion 100 of the prior art semiconductor wafer as shown in FIG. 1D with a second layer 122 of photoresist formed on the surface of the semiconductor wafer.

FIG. 2E shows the portion 200 of the semiconductor wafer as shown in FIG. 2D with a second layer 228 of photoresist formed on the surface of the semiconductor wafer.

FIG. 1F shows the portion 100 of the prior art semiconductor wafer as shown in FIG. 1E with the second layer 122 of photoresist removed from over the core transistor 102 and the portion 100 of the semiconductor wafer being implanted with an Mdd implant indicated by arrows 123.

FIG. 2F shows the portion 200 of the semiconductor wafer as shown in FIG. 2E with the second layer 228 of photoresist removed from over the core transistor 202 and the portion 200 of the semiconductor wafer being implanted with an Mdd implant as indicated by arrows 229.

FIG. 1G shows the portion 100 of the semiconductor wafer as shown in FIG. 1F with the remaining portions of the second layer 122 of photoresist removed from the semiconductor wafer and showing the Mdd implant regions 124 and 126 in the core transistor drain regions and the Mdd implant region 128 in the core transistor source region 116.
The semiconductor wafer is shown undergoing an oxidation process as indicated by wavy arrows 129.

FIG. 2G shows the portion 200 of the semiconductor wafer as shown in FIG. 2F with the remaining portions of the second layer 228 of photoresist removed from the semiconductor wafer and showing the Mdd implant regions 230 and 232 in the core transistor drain regions 222 and 224, respectively, and showing the Mdd implant region 234 in the core transistor source region 216. The semiconductor wafer is undergoing an oxidation process as indicated by wavy arrows 235.

FIG. 1H shows the portion 100 of the semiconductor wafer as shown in FIG. 1G with a third layer 130 of photoresist formed on the surface of the semiconductor wafer.

FIG. 2H shows the portion 200 of the semiconductor wafer as shown in FIG. 2G with a third layer 236 of photoresist formed on the surface of the semiconductor wafer.

FIG. 1I shows the portion 100 of the semiconductor wafer as shown in FIG. 1H with the portion of the third layer 130 of photoresist removed from the region over the p-channel transistor 106 and with the semiconductor wafer undergoing a Pldd implant as indicated by the arrows 136.

FIG. 2I shows the portion 200 of the semiconductor wafer as shown in FIG. 2H with the portion of the third layer 236 of photoresist removed from the region over the p-channel transistor 206 and with the semiconductor wafer undergoing a Pldd implant as indicated by the arrows 238.

FIG. 1J shows the portion 100 of the semiconductor wafer as shown in FIG. 1I with the remaining portions of the third layer 130 removed and showing the Pldd implants 138 and 140 in the region of the p-channel transistor 106.

FIG. 2J shows the portion 200 of the semiconductor wafer as shown in FIG. 2I with the remaining portions of the third layer 236 removed and showing the Pldd implants 240 and 242 in the p-channel transistor 206 region.

FIG. 1K shows the portion 100 of the semiconductor wafer as shown in FIG.
1J with a layer 142 of photoresist formed on the semiconductor wafer.

FIG. 2K shows the portion 200 of the semiconductor wafer as shown in FIG. 2J with a layer 244 of photoresist formed on the semiconductor wafer.

FIG. 1L shows the portion 100 of the semiconductor wafer as shown in FIG. 1K with a portion of the layer 142 of photoresist removed from the region over the n-channel transistor 104 and with the semiconductor wafer undergoing an Nldd implant as indicated by the arrows 144.

FIG. 2L shows the portion 200 of the semiconductor wafer as shown in FIG. 2K with a portion of the layer 244 of photoresist removed from the region over the n-channel transistor 204 and with the semiconductor wafer undergoing an Nldd implant as indicated by the arrows 246.

FIG. 1M shows the portion 100 of the semiconductor wafer as shown in FIG. 1L with the remaining portions of the layer 142 of photoresist removed and showing the Nldd implants 146 and 148 in the n-channel transistor 104 region. A layer 150 of spacer oxide is formed on the surface of the semiconductor wafer.

FIG. 2M shows the portion 200 of the semiconductor wafer as shown in FIG. 2L with the remaining portions of the layer 244 of photoresist removed and showing the Nldd implants 248 and 250 in the n-channel transistor 204 region. A layer 252 of spacer oxide is formed on the surface of the semiconductor wafer.

FIG. 1N shows the portion 100 of the semiconductor wafer as shown in FIG. 1M with the layer 150 of spacer oxide etched to form the sidewall spacers 152.

FIG. 2N shows the portion 200 of the semiconductor wafer as shown in FIG. 2M with the layer 252 of spacer oxide etched to form the sidewall spacers 254.

FIG. 1O shows the portion 100 of the semiconductor wafer as shown in FIG. 1N with a fifth layer 154 of photoresist formed on the semiconductor wafer.

FIG. 2O shows the portion 200 of the semiconductor wafer as shown in FIG. 2N with a fifth layer 256 of photoresist formed on the semiconductor wafer.

FIG.
1P shows the portion 100 of the semiconductor wafer as shown in FIG. 1O with a portion of the fifth layer 154 of photoresist removed from the region over the n-channel transistor 104.

FIG. 2P shows the portion 200 of the semiconductor wafer as shown in FIG. 2O with a portion of the fifth layer 256 of photoresist removed from the region over the n-channel transistor 204.

FIG. 1Q shows the portion 100 of the semiconductor wafer as shown in FIG. 1P being implanted with an n+ implant as indicated by arrows 156.

FIG. 2Q shows the portion 200 of the semiconductor wafer as shown in FIG. 2P being implanted with an n+ implant as indicated by arrows 258.

FIG. 1R shows the portion 100 of the semiconductor wafer as shown in FIG. 1Q with the remaining portions of the fifth layer 154 of photoresist removed from the semiconductor wafer and showing the n+ implants 158 and 160 in the n-channel transistor 104 region.

FIG. 2R shows the portion 200 of the semiconductor wafer as shown in FIG. 2Q with the remaining portions of the fifth layer 256 of photoresist removed from the semiconductor wafer and showing the n+ implants 260 and 262 in the n-channel transistor 204 region.

FIG. 1S shows the portion 100 of the semiconductor wafer as shown in FIG. 1R with a layer 162 of photoresist formed on the semiconductor wafer.

FIG. 2S shows the portion 200 of the semiconductor wafer as shown in FIG. 2R with a layer 264 of photoresist formed on the semiconductor wafer.

FIG. 1T shows the portion 100 of the semiconductor wafer as shown in FIG. 1S with a portion of the layer 162 of photoresist removed from the region over the p-channel transistor 106.

FIG. 2T shows the portion 200 of the semiconductor wafer as shown in FIG. 2S with a portion of the layer 264 of photoresist removed from the region over the p-channel transistor 206.

FIG. 1U shows the portion 100 of the semiconductor wafer as shown in FIG. 1T showing the semiconductor wafer undergoing a p+ implant as indicated by the arrows 164.

FIG.
2U shows the portion 200 of the semiconductor wafer as shown in FIG. 2T showing the semiconductor wafer undergoing a p+ implant as indicated by the arrows 266.

FIG. 1V shows the portion 100 of the semiconductor wafer as shown in FIG. 1U with the remaining portions of the layer 162 of photoresist removed and showing the p+ implants 166 and 168 in the p-channel transistor 106 region.

FIG. 2V shows the portion 200 of the semiconductor wafer as shown in FIG. 2U with the remaining portions of the layer 264 of photoresist removed and showing the p+ implants 268 and 270 in the p-channel transistor 206 region.

FIG. 1W shows the portion 100 of the semiconductor wafer as shown in FIG. 1V with a layer 170 of interlayer oxide formed on the semiconductor wafer.

FIG. 2W shows the portion 200 of the semiconductor wafer as shown in FIG. 2V with a layer 272 of interlayer oxide formed on the semiconductor wafer.

FIG. 1X shows the portion 100 of the semiconductor wafer as shown in FIG. 1W with a layer 172 of photoresist formed on the layer 170 of interlayer oxide.

FIG. 2X shows the portion 200 of the semiconductor wafer as shown in FIG. 2W with a layer 274 of photoresist formed on the layer 272 of interlayer oxide.

FIG. 1Y shows the portion 100 of the semiconductor wafer as shown in FIG. 1X with the layer 172 of photoresist etched to cut holes in the layer 170 of interlayer oxide.

FIG. 2Y shows the portion 200 of the semiconductor wafer as shown in FIG. 2X with the layer 274 of photoresist etched and holes etched in the layer 272 of interlayer oxide exposing drain regions of the core transistors and exposing source and drain regions of the n-channel transistors and the p-channel transistors.

FIG. 1Z shows the portion 100 of the semiconductor wafer as shown in FIG. 1Y with the layer 172 of photoresist removed.

FIG. 2Z shows the portion 200 of the semiconductor wafer as shown in FIG. 2Y with the layer 274 of photoresist removed.

FIG.
1AA shows the portion 100 of the semiconductor wafer as shown in FIG. 1Z with a layer 174 of photoresist formed on the semiconductor wafer.

FIG. 2AA indicates that the step equivalent to the step shown in FIG. 1AA in the prior art can be skipped in the method taught by the present invention.

FIG. 1AB shows the portion 100 of the semiconductor wafer as shown in FIG. 1AA with a portion of the layer 174 removed from the region over the core transistor 102 region and from over the n-channel transistor 104 region and showing the implantation of n+ contact implants indicated by arrows 176. The n+ contact implant is used to reduce the resistance of the n-channel transistor 104 and core transistor 102 contacts.

FIG. 2AB indicates that the step equivalent to the step shown in FIG. 1AB in the prior art can be skipped in the method taught by the present invention.

FIG. 1AC shows the portion 100 of the semiconductor wafer as shown in FIG. 1AB with the remaining portions of the eighth layer 174 removed and showing the n+ contacts 178 in the core transistor 102 and the n+ contacts 189 in the n-channel transistor 104.

FIG. 2AC indicates that the step equivalent to the step shown in FIG. 1AC in the prior art can be skipped in the method taught by the present invention.

FIG. 1AD shows the portion 100 of the semiconductor wafer as shown in FIG. 1AC with a layer 182 of photoresist formed on the semiconductor wafer.

FIG. 2AD shows the portion 200 of the semiconductor wafer as shown in FIG. 2Z with a layer 276 of photoresist formed on the semiconductor wafer.

FIG. 1AE shows the portion 100 of the semiconductor wafer as shown in FIG. 1AD with a portion of the layer 182 of photoresist removed from over the p-channel transistor 106.

FIG. 2AE shows the portion 200 of the semiconductor wafer as shown in FIG. 2AD with a portion of the layer 276 of photoresist removed from over the p-channel transistor 206.

FIG. 1AF shows the portion 100 of the semiconductor wafer as shown in FIG.
1AE being implanted with p+ contact implants as indicated by arrows 184.

FIG. 2AF shows the portion 200 of the semiconductor wafer as shown in FIG. 2AE being implanted with p+ contact implants as indicated by arrows 278.

FIG. 1AG shows the portion 100 of the semiconductor wafer as shown in FIG. 1AF showing the p+ contacts 188, with the remaining portions of the layer 182 of photoresist removed and prepared for the forming of metal contacts via holes 186.

FIG. 2AG shows the portion 200 of the semiconductor wafer as shown in FIG. 2AF showing the p+ contacts 182, with the remaining portions of the layer 276 removed and the semiconductor wafer prepared for the forming of metal contacts via holes 280.

In summary, the present invention overcomes the limitations of the prior art and provides a method for the manufacture of semiconductor memory devices that reduces the number of manufacturing steps necessary to manufacture the semiconductor devices, resulting in a reduction of the cost of producing the semiconductor memory devices.

The foregoing description of the embodiment of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Obvious modifications or variations are possible in light of the above teachings. The embodiment was chosen and described to provide the best illustration of the principles of the invention and its practical application to thereby enable one of ordinary skill in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly, legally, and equitably entitled.
Systems, apparatuses, and methods related to extended memory communication subsystems for performing extended memory operations are described. An example method can include receiving, at a processing unit that is coupled between a host device and a non-volatile memory device, signaling indicative of a plurality of operations to be performed on data written to or read from the non-volatile memory device. The method can further include performing, at the processing unit, at least one operation of the plurality of operations in response to the signaling. The method can further include accessing a portion of a memory array in the non-volatile memory device. The method can further include transmitting additional signaling indicative of a command to perform one or more additional operations of the plurality of operations on the data written to or read from the non-volatile memory device.
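The flow summarized above, in which a processing unit between a host and a non-volatile memory device performs some operations itself and delegates others to a hardware accelerator, can be sketched in Python. Everything below is a hypothetical model for illustration only: the class names, the dictionary standing in for the memory array, and the deduplicate/sort operations are assumptions, not the actual apparatus.

```python
# Hypothetical sketch of the described flow: a processing unit performs at
# least one operation locally, then signals a hardware accelerator to perform
# the remaining operations and waits for the "executed" indication.
class HardwareAccelerator:
    def perform(self, memory, location, operation):
        # Access data at the indicated location, execute the operation,
        # and send back an indication that the operation has been executed.
        memory[location] = operation(memory[location])
        return "executed"

class ProcessingUnit:
    def __init__(self, memory, accelerator):
        self.memory = memory            # stands in for the NVM array
        self.accelerator = accelerator

    def handle(self, location, operations):
        # Perform the first operation at the processing unit ...
        self.memory[location] = operations[0](self.memory[location])
        # ... then transmit additional signaling for the remaining operations.
        for op in operations[1:]:
            status = self.accelerator.perform(self.memory, location, op)
            assert status == "executed"
        return self.memory[location]

memory = {"block0": [3, 1, 2, 2]}
unit = ProcessingUnit(memory, HardwareAccelerator())
# Two-stage pipeline: deduplicate locally, then sort on the accelerator.
result = unit.handle("block0", [lambda d: list(dict.fromkeys(d)), sorted])
assert result == [1, 2, 3]
```

The design point the sketch illustrates is the split of one requested pipeline across two compute resources, with the accelerator acknowledging completion so the processing unit can in turn report back to the host (cf. claims 8-9).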
What is claimed is:1. A method, comprising: receiving, at a processing unit that is coupled between a host device and a non-volatile memory device, signaling indicative of a plurality of operations to be performed on data written to or read from the non-volatile memory device; performing, at the processing unit, at least one operation of the plurality of operations in response to the signaling; accessing, via a controller at the processing unit or non-volatile memory device, a portion of a memory array in the non-volatile memory device; and transmitting, to a hardware accelerator, additional signaling indicative of a command to perform one or more additional operations of the plurality of operations on the data written to or read from the non-volatile memory device.2. The method of claim 1, wherein accessing the portion of the non-volatile memory device comprises accessing an array of phase change memory cells or cells of a resistive random access memory (ReRAM), or both.3. The method of claim 1, wherein performing the at least one of the plurality of operations further comprises performing an operation in which data is ordered, reordered, removed, or discarded, or a comma-separated value parsing operation, or any combination thereof.4. The method of any one of claims 1-3, wherein accessing the portion of data comprises reading data from the portion of the non-volatile memory device or writing data to the portion of the non-volatile memory device, or both.5. The method of any one of claims 1-3, wherein transmitting, to the hardware accelerator, the additional signaling indicative of the command to perform the one or more additional operations further comprises transmitting additional signaling indicative of a first portion of the command to perform the one or more additional operations by the hardware accelerator.6. 
The method of claim 5, further comprising transmitting further additional signaling indicative of a second portion of the command to perform the one or more additional operations by the additional hardware accelerator.7. The method of any one of claims 1-3, further comprising determining a portion of the non-volatile memory device to store output data resulting from performing the at least one operation.8. The method of claim 1, further comprising receiving a response from the hardware accelerator indicating that the at least one operation has been executed.9. The method of claim 8, further comprising sending a response to a host indicating that the at least one operation has been executed.10. A method, comprising: receiving, at a hardware accelerator and from a computing device, signaling indicative of an operation to be performed on data written to or read from a non-volatile memory device, wherein the signaling indicates: a location in the non-volatile memory device; and the operation to be executed by the hardware accelerator; accessing data in the location; performing the operation on the data by the hardware accelerator; and sending an indication to the computing device that the operation has been executed.11. The method of claim 10, wherein the signaling indicative of the operation comprises signaling associated with reducing a size of data from a first size to a second size by the computing device.12. The method of claim 11, further comprising sending additional signaling from the hardware accelerator to an additional hardware accelerator, the signaling indicative of performing a portion of the operation.13. The method of claim 12, further comprising: performing the portion of the operation in the additional hardware accelerator; performing an additional portion of the operation in the hardware accelerator; and combining a result of the performed portion of the operation and a result of the performed additional portion of the operation.14. 
An apparatus, comprising: a computing device comprising: a processing unit configured to perform an operation on a block of data; a memory array configured as a cache for the processing unit; a plurality of communication subsystems coupled to the computing device and to a memory device; and a plurality of hardware accelerators coupled to the communication subsystem, wherein the computing device is to: receive, at the processing unit that is coupled between a host device and the memory device, signaling indicative of an operation to be performed on data written to or read from the memory device; transmit, via the communication subsystem, to at least one of the plurality of hardware accelerators, additional signaling indicative of a command to perform at least a portion of the operation; and receive a result of performing the operation from the at least one of the plurality of hardware accelerators.15. The apparatus of claim 14, wherein the memory device comprises at least one of a double data rate (DDR) memory, a three-dimensional (3D) cross-point memory, or a NAND memory, or any combination thereof.16. The apparatus of claim 14, wherein at least one of the plurality of hardware accelerators is on-chip and is coupled to a static random access memory (SRAM).17. The apparatus of any one of claims 14-16, wherein the hardware accelerator is on-chip and is coupled to an arithmetic logic unit (ALU) configured to perform an arithmetic operation or a logical operation, or both.18. The apparatus of any one of claims 14-16, wherein the computing device is a reduced instruction set computer (RISC)-V.19. The apparatus of any one of claims 14-16, wherein the computing device is to transmit additional signaling indicative of the command that comprises signaling indicative of an address of a particular location in the memory device.20. 
The apparatus of claim 19, wherein the at least one of the plurality of hardware accelerators is configured to perform the at least a portion of the operation by accessing the memory device at the particular location.21. The apparatus of any one of claims 14-16, wherein the at least one of the plurality of hardware accelerators is configured to send further signaling indicative of a request for an additional one of the plurality of hardware accelerators to perform a sub-portion of the portion of the operation.22. The apparatus of claim 21, wherein the additional one of the plurality of hardware accelerators is configured to: perform the sub-portion; and send a result of performing the sub-portion to the at least one of the plurality of hardware accelerators.23. The apparatus of claim 22, wherein the computing device is configured to combine: a first result received from the at least one of the plurality of hardware accelerators; and a second result from the additional one of the plurality of hardware accelerators.
EXTENDED MEMORY COMMUNICATIONTechnical Field[0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to apparatuses, systems, and methods for an extended memory communication.Background[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic systems. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data (e.g., host data, error data, etc.) and includes random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), and thyristor random access memory (TRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), such as spin torque transfer random access memory (STT RAM), among others.[0003] Memory devices may be coupled to a host (e.g., a host computing device) to store data, commands, and/or instructions for use by the host while the computer or electronic system is operating. For example, data, commands, and/or instructions can be transferred between the host and the memory device(s) during operation of a computing or other electronic system.Brief Description of the Drawings[0004] Figure 1 is a functional block diagram in the form of a computing system including an apparatus including a first plurality of communication subsystems, a second plurality of communication subsystems, and a plurality of memory devices in accordance with a number of embodiments of the present disclosure. 
[0005] Figure 2 is yet another functional block diagram in the form of a computing system including an apparatus including a first plurality of communication subsystems, a second plurality of communication subsystems, and a plurality of memory devices in accordance with a number of embodiments of the present disclosure.[0006] Figure 3 is yet another functional block diagram in the form of a computing system including an apparatus including a first plurality of communication subsystems, a second plurality of communication subsystems, and a plurality of memory devices in accordance with a number of embodiments of the present disclosure.[0007] Figure 4 is yet another functional block diagram in the form of a computing system including an apparatus including a first plurality of communication subsystems, a second plurality of communication subsystems, and a plurality of memory devices in accordance with a number of embodiments of the present disclosure.[0008] Figure 5 is a functional block diagram in the form of an apparatus of a computing core including a number of ports in accordance with a number of embodiments of the present disclosure.[0009] Figure 6 is a flow diagram representing an example method corresponding to extended memory communication in accordance with a number of embodiments of the present disclosure.Detailed Description[0010] Systems, apparatuses, and methods related to extended memory communication subsystems for performing extended memory operations are described. An example method can include receiving, at a processing unit that is coupled between a host device and a non-volatile memory device, signaling indicative of a plurality of operations to be performed on data written to or read from the non-volatile memory device. The method can further include performing, at the processing unit, at least one operation of the plurality of operations in response to the signaling. 
The method can further include accessing a portion of a memory array in the non-volatile memory device. The method can further include transmitting additional signaling indicative of a command to perform one or more additional operations of the plurality of operations on the data written to or read from the non-volatile memory device. [0011] Extended memory communication can include providing signals and/or commands across extended memory. An extended memory interface can transfer instructions to perform operations specified by a single address and operand and may be performed by the computing device that includes a processing unit and a memory resource. The computing device can perform extended memory operations on data streamed through the computing device without receipt of intervening commands. The extended memory operations can include an operation in which data is ordered, reordered, removed, or discarded, a comma-separated value parsing operation, or both. In an example, a computing device is configured to receive a command to perform an operation that comprises performing an operation on data with the processing unit of the computing device and determine that an operand corresponding to the operation is stored in the memory resource. The computing device can further perform the operation using the operand stored in the memory resource.[0012] The computing device can perform hardware acceleration by sending instructions and/or commands to a number of hardware accelerators to perform the operation. In some examples, a portion of the operation can be sent to a first hardware accelerator and a second portion of the operation can be sent to a second hardware accelerator. In some examples, the operation can be sent to a hardware accelerator for completion and the hardware accelerator can send a portion of the operation to an additional hardware accelerator to complete a portion of the operation. 
In this way, results from more than one hardware accelerator can be sent to the computing device to combine the results, or a primary hardware accelerator can combine the results and send the completed result to the computing device.[0013] Hardware acceleration can be implemented in computing systems to perform certain tasks and/or functions in a manner that is more efficient (e.g., faster, more accurate, higher quality, etc.) in comparison to performing the task and/or function using a central processing unit (CPU) of the computing system. For example, by providing dedicated hardware (e.g., a hardware accelerator or hardware acceleration unit) that is configured to perform a certain task and/or function that can otherwise be performed using the CPU of the computing system, certain tasks and/or functions can be processed in a more efficient manner than in approaches in which the CPU is responsible for performance of such tasks and/or functions. This can further allow for processing resources that could otherwise be consumed by the CPU to be freed up, thereby further improving performance of the computing system.[0014] Some examples of hardware accelerators include sound processing units (e.g., sound cards), graphics processing units (GPUs or “graphics cards”), digital signal processing units, analog signal processing units, computer networking processing units (e.g., networks on a chip, TCP offload engines, I/O acceleration processing units, etc.), cryptography processing units (e.g., cryptographic accelerator units, which can provide hardware-based encryption and/or decryption), artificial intelligence processing units (e.g., vision processing units, neural network processing units, etc.), tensor processing units, physics processing units, regular expression processing units, and/or data compression acceleration units, among others. 
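The split-and-combine flow described above (one accelerator per portion of an operation, with the computing device combining partial results) can be sketched as follows. This is purely an illustrative sketch, not part of the disclosed apparatus: the function names, the use of summation as the operation, and the half-and-half split are all assumptions.

```python
# Illustrative only: two "accelerators" each perform a portion of an
# operation (here, summing a chunk), and the computing device combines
# the partial results, mirroring the flow described in the text.

def accelerator_a(chunk):
    # First hardware accelerator performs its portion of the operation.
    return sum(chunk)

def accelerator_b(chunk):
    # Second hardware accelerator performs the remaining portion.
    return sum(chunk)

def run_split_operation(data):
    """Split `data` between two accelerators and combine the results."""
    mid = len(data) // 2
    partial_a = accelerator_a(data[:mid])   # first portion dispatched
    partial_b = accelerator_b(data[mid:])   # second portion dispatched
    return partial_a + partial_b            # computing device combines

result = run_split_operation([1, 2, 3, 4, 5, 6])
# result == 21
```

Equally consistent with the text, a primary accelerator could perform the combination itself and return only the completed result to the computing device.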
Hardware accelerators can be provided as computer hardware in the form of a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a complex programmable logic device, and/or a system-on-chip, among others. It will be appreciated that the foregoing enumerated examples of hardware accelerators and specifically enumerated examples of computer hardware are neither limiting nor exhaustive, and other hardware accelerators and/or computer hardware are contemplated within the scope of the disclosure.[0015] In some approaches, hardware accelerators can be deployed in a computing system as discrete components that perform a specified task and/or function with no visibility to other hardware accelerators that can be deployed within the computing system. For example, in some approaches, a hardware accelerator can operate without knowledge of other hardware accelerators deployed within the computing system. Further, in some approaches, hardware accelerators can be dedicated to perform a limited set of specific tasks and/or functions. For example, a sound processing unit can be provided in a computing system with the sole purpose of performing hardware acceleration on signals related to auditory playback for the computing system. As another example, a GPU can be provided in a computing system for the sole purpose of performing hardware acceleration on signals related to visual display for the computing system.[0016] As is described below, the computing device can be a RISC-V application processor core, capable of supporting full-featured operating systems such as Linux. This particular core can be used in association with applications such as internet-of-things (IoT) nodes and gateways, storage, and/or networking. The core can be coupled to a number of ports, such as a memory port, a system port, a peripheral port, and/or a front port. 
As an example, the memory port can be in communication with a memory device, the system port can be in communication with an on-chip accelerator, the peripheral port can be in communication with an off-chip serial port, and/or the front port can be in communication with a host interface, as will be described further below in association with Figure 4.[0017] In this way, the first communication subsystems can be used to direct data from a particular port (e.g., a memory port of a computing device) through a first communication subsystem (e.g., a multiplexer that selects that particular memory port) and transfer it through a second communication subsystem (e.g., an interface such as an AXI interconnect interface) to a memory controller that transfers the data to a memory device (e.g., a DDR memory, a three-dimensional (3-D) cross-point memory, a NAND memory, etc.). In an example, the AXI interconnect interfaces can conform to the AMBA® AXI version 4 specifications from ARM®, including the AXI4-Lite control register interface subset.[0018] As used herein, an “extended memory operation” refers to a memory operation that can be specified by a single address (e.g., a memory address) and an operand, such as a 64-bit operand. An operand can be represented as a plurality of bits (e.g., a bit string or string of bits). Embodiments are not limited to operations specified by a 64-bit operand, however, and the operation can be specified by an operand that is larger (e.g., 128-bits, etc.) or smaller (e.g., 32-bits) than 64-bits. As described herein, the effective address space accessible with which to perform extended memory operations is the size of a memory device or file system accessible to a host computing system or storage controller. 
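The definition in paragraph [0018] (an extended memory operation specified by a single address and a 64-bit operand) can be sketched as a simple packing scheme. The 64-bit field widths match the example in the text, but the packed 128-bit layout itself is an illustrative assumption, not a disclosed encoding.

```python
# Sketch of "single address + operand" command encoding. Field widths
# follow the 64-bit operand example in the text; the concatenated layout
# is an assumption for illustration.

ADDR_BITS = 64
OPERAND_BITS = 64

def encode_command(address, operand):
    """Pack a 64-bit address and a 64-bit operand into one command word."""
    assert 0 <= address < 2**ADDR_BITS and 0 <= operand < 2**OPERAND_BITS
    return (address << OPERAND_BITS) | operand

def decode_command(word):
    """Recover (address, operand) from the packed command word."""
    return word >> OPERAND_BITS, word & (2**OPERAND_BITS - 1)

cmd = encode_command(0x1000, 42)
# decode_command(cmd) == (0x1000, 42)
```

As the text notes, the operand width could equally be 32 or 128 bits; only the constants would change.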
[0019] Extended memory operations can include instructions and/or operations that can be performed by a processing device (e.g., by a processing device such as a core 110, 210, 310, 410, or a core computing device specifically shown as 510 in Figure 5). Examples of a core can include a reduced instruction set computing device or other hardware processing device that can execute instructions to perform various computing tasks. In some embodiments, performing an extended memory operation can include retrieving data and/or instructions stored in a memory resource of the computing device, performing the operation within the computing device 110 (e.g., without transferring the data or instructions to circuitry external to the computing device), and storing the result of the extended memory operation in the memory resource of the computing device 110 or in secondary storage (e.g., in a memory device such as the memory device 116-1, 116-2, illustrated in Figure 1, herein). Signaling indicative of a plurality of operations to be performed on data written to or from a memory device can be sent to or from the computing devices 110, accelerators 114, etc.[0020] Non-limiting examples of extended memory operations can include floating point add accumulate, 32-bit complex operations, square root address (SQRT(addr)) operations, conversion operations (e.g., converting between floating-point and integer formats, and/or converting between floating point and universal number formats such as Type I, Type II, and/or Type III universal number formats, posit formats, etc.), normalizing data to a fixed format, absolute value operations, etc. 
In some embodiments, extended memory operations can include operations performed by the computing device that update in place (e.g., in which a result of an extended memory operation is stored at the address in which an operand used in performance of the extended memory operation is stored prior to performance of the extended memory operation), as well as operations in which previously stored data is used to determine new data (e.g., operations in which an operand stored at a particular address is used to generate new data that overwrites the particular address where the operand was stored).[0021] As a result, in some embodiments, performance of extended memory operations can mitigate or eliminate locking or mutex operations, because the extended memory operation(s) can be performed within the computing device, which can reduce contention between multiple threads of execution. Reducing or eliminating performance of locking or mutex operations on threads during performance of the extended memory operations can lead to increased performance of a computing system, for example, because extended memory operations can be performed in parallel within a same computing device or across two or more of the computing devices that are in communication with each other. In addition, in some embodiments, extended memory operations described herein can mitigate or eliminate locking or mutex operations when a result of the extended memory operation is transferred from the computing device that performed the operation to a host.[0022] Memory devices may be used to store important or critical data in a computing device and can transfer, via at least one extended memory interface, such data between the memory devices and a host associated with the computing device. 
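The update-in-place behavior described in paragraph [0020] can be sketched as follows. The dictionary model of the memory resource and the choice of add-accumulate (one of the operations listed in paragraph [0019]) are illustrative assumptions.

```python
# Sketch of an update-in-place extended memory operation: the operand is
# read at `addr`, the operation is applied, and the result overwrites the
# operand at the same address, all within the computing device.

memory = {0x2000: 10}  # toy model of the computing device's memory resource

def extended_add_accumulate(addr, value):
    """Add `value` to the operand stored at `addr`, writing back in place."""
    memory[addr] = memory[addr] + value   # result overwrites the operand
    return memory[addr]

extended_add_accumulate(0x2000, 5)
# memory[0x2000] == 15
```

Because the read-modify-write completes inside the computing device, no host-visible intermediate state exists, which is the basis for the reduced locking described in paragraph [0021].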
However, as the size and quantity of data stored by memory devices increases, transferring the data to and from the host can become time consuming and resource intensive. For example, when a host requests performance of memory operations using large blocks of data, an amount of time and/or an amount of resources consumed in obliging the request can increase in proportion to the size and/or quantity of data associated with the blocks of data.[0023] As storage capability of memory devices increases, these effects can become more pronounced as more and more data are able to be stored by the memory device and are therefore available for use in memory operations. In addition, because data may be processed (e.g., memory operations may be performed on the data), as the amount of data that is able to be stored in memory devices increases, the amount of data that may be processed can also increase. This can lead to increased processing time and/or increased processing resource consumption, which can be compounded in performance of certain types of memory operations. In order to alleviate these and other issues, embodiments herein can allow for extended memory operations to be performed using a memory device, one or more computing devices, and/or memory array(s) and a first plurality of communication subsystems (e.g., multiplexers) and a second plurality of subsystems (e.g., interfaces such as AXI interconnects) in order to transfer data more efficiently from a computing device to a memory device and/or from a computing device to a host, and vice versa. [0024] In some approaches, performing memory operations can require multiple clock cycles and/or multiple function calls to memory of a computing system such as a memory device and/or memory array. In contrast, embodiments herein can allow for performance of extended memory operations in which a memory operation is performed with a single function call or command. 
For example, in contrast to approaches in which at least one command and/or function call is utilized to load data to be operated upon and then at least one subsequent function call or command to store the data that has been operated upon is utilized, embodiments herein can allow for performance of memory operations using fewer function calls or commands in comparison to other approaches. Further, the computing devices of the computing system can receive requests to perform the memory operations via a first communication subsystem (e.g., a multiplexer, a control network-on-chip, etc.) and/or a second communication subsystem (e.g., an interface, an interconnect such as an AXI interconnect, etc.) and can receive blocks of data for executing the requested memory operations from the memory device via the first communication subsystem and the second communication subsystem. While the first and the second communication subsystem are described in tandem, embodiments are not so limited. As an example, the requests for data and/or receipt of blocks of data can be via the second communication subsystem alone.[0025] By reducing the number of function calls and/or commands utilized in performance of memory operations, an amount of time consumed in performing such operations and/or an amount of computing resources consumed in performance of such operations can be reduced in comparison to approaches in which multiple function calls and/or commands are required for performance of memory operations. Further, embodiments herein can reduce movement of data within a memory device and/or memory array because data may not need to be loaded into a specific location prior to performance of memory operations. This can reduce processing time in comparison to some approaches, especially in scenarios in which a large amount of data is subject to a memory operation. 
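The contrast drawn in paragraphs [0024]-[0025] (a load/operate/store sequence versus a single extended memory command) can be sketched as follows. The function names and the dictionary memory model are illustrative assumptions, not a disclosed interface.

```python
# Sketch contrasting the two approaches described in the text: a
# conventional load/operate/store sequence versus a single extended
# memory command performed at the device.

memory = {0x3000: 7}

# Conventional approach: separate calls to load and store, with the
# operation itself performed by the host in between.
def load(addr):
    return memory[addr]

def store(addr, value):
    memory[addr] = value

operand = load(0x3000)    # call 1: load the data to be operated upon
result = operand * 2      # host performs the operation
store(0x3000, result)     # call 2: store the operated-upon data

# Extended memory approach: one command carries the whole operation,
# which is performed at the device, in place.
def extended_op(addr, op):
    memory[addr] = op(memory[addr])
    return memory[addr]

extended_op(0x3000, lambda x: x + 1)  # single function call or command
# memory[0x3000] == 15
```

The single-command form also avoids moving the operand to the host and back, which corresponds to the reduced data movement described above.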
[0026] Further, extended memory operations described herein can allow for a much larger set of type fields in comparison to some approaches. For example, an instruction executed by a host to request performance of an operation using data in a memory device (e.g., a memory sub-system) can include a type, an address, and a data field. The instruction can be sent to at least one of a plurality of computing devices via a first communication subsystem (e.g., a multiplexer) and a second communication subsystem (e.g., an interface) and the data can be transferred from the memory device via the first and/or second communication subsystem. The type field can correspond to the particular operation being requested, the address can correspond to an address in which data to be used in performance of the operation is stored, and the data field can correspond to the data (e.g., an operand) to be used in performing the operation. In some approaches, type fields can be limited to different size reads and/or writes, as well as some simple integer accumulate operations. In contrast, embodiments herein can allow for a broader spectrum of type fields to be utilized because the effective address space that can be used when performing extended memory operations can correspond to a size of the memory device. By extending the address space available to perform operations, embodiments herein can therefore allow for a broader range of type fields and, therefore, a broader spectrum of memory operations can be performed than in approaches that do not allow for an effective address space that is the size of the memory device.[0027] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. 
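The instruction layout described in paragraph [0026] (a type field naming the requested operation, an address field, and a data field carrying the operand) can be sketched as follows. The concrete field names, the dispatch table, and the two example operations are illustrative assumptions.

```python
# Sketch of the type/address/data instruction described in the text.
# A larger effective address space permits a broader spectrum of type
# fields; here the type field keys a small dispatch table.

from collections import namedtuple

Instruction = namedtuple("Instruction", ["type", "address", "data"])

OPS = {
    # "add_accumulate" combines the stored value with the operand;
    # "store_int" simply converts and stores the operand. Both names
    # are hypothetical examples, not disclosed type encodings.
    "add_accumulate": lambda stored, operand: stored + operand,
    "store_int":      lambda stored, operand: int(operand),
}

def execute(inst, memory):
    """Apply the operation named by `inst.type`, using `inst.data` as operand."""
    memory[inst.address] = OPS[inst.type](memory.get(inst.address, 0), inst.data)
    return memory[inst.address]

mem = {0x10: 3}
execute(Instruction("add_accumulate", 0x10, 4), mem)
# mem[0x10] == 7
```

In this sketch the address field can span the whole memory model, reflecting the text's point that the effective address space corresponds to the size of the memory device.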
These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and structural changes may be made without departing from the scope of the present disclosure.[0028] As used herein, designators such as “X,” “Y,” “N,” “M,” “A,” “B,” “C,” “D,” etc., particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” can include both singular and plural referents, unless the context clearly dictates otherwise. In addition, “a number of,” “at least one,” and “one or more” (e.g., a number of memory banks) can refer to one or more memory banks, whereas a “plurality of” is intended to refer to more than one of such things. Furthermore, the words “can” and “may” are used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term “include,” and derivations thereof, means “including, but not limited to.” The terms “coupled” and “coupling” mean to be directly or indirectly connected physically or for access to and movement (transmission) of commands and/or data, as appropriate to the context. The terms “data” and “data values” are used interchangeably herein and can have the same meaning, as appropriate to the context.[0029] The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures may be identified by the use of similar digits. 
For example, 104 may reference element “04” in Figure 1, and a similar element may be referenced as 204 in Figure 2. A group or plurality of similar elements or components may generally be referred to herein with a single element number. For example, a plurality of reference elements 110-1, 110-2, 110-3, 110-4, 110-5 may be referred to generally as 110. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, the proportion and/or the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present disclosure and should not be taken in a limiting sense.[0030] Figure 1 is a functional block diagram in the form of a computing system 100 including an apparatus 104 including a plurality of computing devices 110, a first plurality of communication subsystems 108, a second plurality of communication subsystems 106, a plurality of hardware accelerators 114, and a plurality of memory devices 116, in accordance with a number of embodiments of the present disclosure. As used herein, an “apparatus” can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example. In the embodiment illustrated in Figure 1, memory devices 116-1... 116-N can include one or more memory modules (e.g., double data rate (DDR) memory, three-dimensional (3D) cross-point memory, NAND memory, single in-line memory modules, dual in-line memory modules, etc.). The memory devices 116-1, . . ., 116-N can include volatile memory and/or non-volatile memory. In a number of embodiments, memory devices 116-1, ... , 116-N can include a multi-chip device. A multi-chip device can include a number of different memory types and/or memory modules. 
For example, a memory system can include non-volatile or volatile memory on any type of module.[0031] The memory devices 116-1, . . ., 116-N can provide main memory for the computing system 100 or could be used as additional memory or storage throughout the computing system 100. Each memory device 116-1, . . ., 116-N can include one or more arrays of memory cells, e.g., volatile and/or non-volatile memory cells. The arrays can be flash arrays with a NAND architecture, for example. Embodiments are not limited to a particular type of memory device. For instance, the memory device can include RAM, ROM, DRAM, SDRAM, PCRAM, RRAM, and flash memory, among others.[0032] In embodiments in which the memory devices 116-1, . . ., 116-N include non-volatile memory, the memory devices 116-1, . . ., 116-N can be flash memory devices such as NAND or NOR flash memory devices. Embodiments are not so limited, however, and the memory devices 116-1, . . ., 116-N can include other non-volatile memory devices such as non-volatile random-access memory devices (e.g., NVRAM, ReRAM, FeRAM, MRAM, PCM), “emerging” memory devices such as 3-D Crosspoint (3D XP) memory devices, etc., or combinations thereof. A 3D XP array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, 3D XP non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased.[0033] As illustrated in Figure 1, the plurality of computing devices 110-1, 110-2, 110-3, 110-4, 110-5 (hereinafter referred to collectively as plurality of computing devices 110) can be coupled to an SRAM 109. The plurality of computing devices 110 can be coupled to the SRAM 109 through a bus matrix. 
Further, the plurality of computing devices 110 can be coupled to the first plurality of communication subsystems (e.g., multiplexers) 108-1, 108-2. The first plurality of communication subsystems 108 can include circuitry and/or logic configured to allocate and de-allocate resources to the computing devices 110 during performance of operations described herein. For example, the circuitry and/or logic can allocate and/or de-allocate resources to the computing devices 110 during performance of extended memory operations described herein.

[0034] The plurality of computing devices 110 can be coupled to a first (108-1) of the first plurality of communication subsystems 108 through the SRAM 109. The plurality of computing devices 110 can be directly coupled to the first (108-1) of the first plurality of communication subsystems 108 and/or to a second (108-2) of the first plurality of communication subsystems 108, as illustrated by arrows in Figure 1. In this way, each of the first plurality of communication subsystems 108 can select a particular computing device 110 for transferring data, and, vice versa, each of the computing devices 110 can transfer data through the first plurality of communication subsystems 108.

[0035] The first plurality of communication subsystems 108-1 can be coupled to a second plurality of communication subsystems (e.g., interfaces such as an interconnect interface) 106-1, 106-2, 106-3, 106-4, 106-5 (hereinafter referred to collectively as second plurality of communication subsystems 106). Each of the second plurality of communication subsystems 106 can be coupled to a corresponding one of a controller 112, an accelerator 114, and a host interface 120.
In one example, the second plurality of communication subsystems 106 can be coupled to the corresponding controller 112, accelerators 114, and/or host interface 120 via a number of AXI buses.

[0036] As is illustrated, a first (106-1) of the second plurality of communication subsystems 106 can be coupled to the controller (e.g., memory controller) 112. The controller 112 can be coupled to a number of memory devices 116-1, . . ., 116-N via a number of channels 107-1, . . ., 107-N. A second (106-2), third (106-3), and fourth (106-4) of the second plurality of communication subsystems 106 can each be coupled to a corresponding one of the plurality of hardware accelerators 114-1, 114-2, 114-3. The communication subsystem 108-1 can be coupled to the second plurality of communication subsystems 106-2, 106-3, 106-4 via respective buffers 119-1, 119-2, 119-3. The second plurality of communication subsystems 106-2, 106-3, 106-4 can be coupled to the plurality of hardware accelerators 114 via respective buffers 117-1, 117-2, 117-3. The hardware accelerators 114 can be used for performing a number of posit operations, and/or for communication with an internal SRAM on the FPGA.

[0037] A posit operation can refer to an operation performed using universal number (“unum”) formatted bit strings as operands and/or as inputs. As used herein, universal number formatted bit strings can provide an alternative to the IEEE floating point bit string standard. Several universal number formats exist (e.g., Type I universal numbers, Type II universal numbers, and Type III universal numbers). The Type III unum format is referred to herein as a “posit format” or, for simplicity, a “posit.” In contrast to floating-point bit strings, posits can, under certain conditions, allow for higher precision (e.g., a broader dynamic range, higher resolution, and/or higher accuracy) than floating-point numbers with the same bit width.
This can allow for operations performed by a computing system to be performed at a higher rate (e.g., faster) when using posits than with floating-point numbers, which, in turn, can improve the performance of the computing system by, for example, reducing a number of clock cycles used in performing operations, thereby reducing processing time and/or power consumed in performing such operations. In addition, the use of posits in computing systems can allow for higher accuracy and/or precision in computations than floating-point numbers, which can further improve the functioning of a computing system in comparison to some approaches (e.g., approaches which rely upon floating-point format bit strings).

[0038] Posits can be highly variable in precision and accuracy based on the total quantity of bits and/or the quantity of sets of integers or sets of bits included in the posit. In addition, posits can generate a wide dynamic range. The accuracy, precision, and/or the dynamic range of a posit can be greater than that of a float, or other numerical formats, under certain conditions, as described in more detail herein. The variable accuracy, precision, and/or dynamic range of a posit can be manipulated, for example, based on an application in which a posit will be used. In addition, posits can reduce or eliminate the overflow, underflow, NaN, and/or other corner cases that are associated with floats and other numerical formats. Further, the use of posits can allow for a numerical value (e.g., a number) to be represented using fewer bits in comparison to floats or other numerical formats.

[0039] A computing device 110 can send a command to perform a posit operation and/or additional operations. The computing device 110 can divide the posit operation into sub-operations, each to be sent to a hardware accelerator 114.
For example, a first computing device 110-1 can divide a posit operation into two sub-operations and a first of the two sub-operations can be sent to a first hardware accelerator 114-1 and a second of the two sub-operations can be sent to a second hardware accelerator 114-2. The results of the first and the second sub-operations can be sent to the first computing device 110-1 and the first computing device 110-1 can combine the results into a single result of the posit operation.

[0040] In one example, a computing device 110 can send a posit operation to a first hardware accelerator 114-1 and the first hardware accelerator 114-1 can send a portion of the posit operation to a second hardware accelerator 114-2. Upon receipt of the result of the portion of the posit operation from the second hardware accelerator 114-2, the first hardware accelerator 114-1 can generate a result for the posit operation, including the result from the portion of the posit operation. Likewise, any number of divisions of a posit operation can be sent from a computing device 110 to particular numbers of corresponding hardware accelerators 114 to perform the posit operation and the results can be combined for a final result of the posit operation. Likewise, multiple computing devices 110 can sub-divide a posit operation and send different portions to different hardware accelerators 114 to perform the sub-divided posit operations. As an example, a posit operation can be sub-divided by computing devices 110 and the sub-divided posit operations can be further sub-divided by each corresponding computing device 110 and sent to different hardware accelerators 114.

[0041] Further, an additional one (also not illustrated) of the second plurality of communication subsystems 106 can be used for transferring data off-chip through an off-chip serial port.
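The scatter/gather pattern described in paragraphs [0039]-[0040], in which a computing device divides an operation into sub-operations, dispatches them to hardware accelerators, and combines the partial results, can be sketched in software. This is a minimal simulation, not the hardware implementation: `accelerator_add` stands in for an accelerator, and plain addition stands in for a posit operation.

```python
from concurrent.futures import ThreadPoolExecutor

def accelerator_add(operands):
    """Stand-in for a hardware accelerator computing a partial result."""
    return sum(operands)

def divide_and_dispatch(operands, num_accelerators=2):
    """Split the operand list into sub-operations, dispatch each to an
    'accelerator', then combine the partial results into a single result,
    as the first computing device does in the example above."""
    chunk = max(1, (len(operands) + num_accelerators - 1) // num_accelerators)
    parts = [operands[i:i + chunk] for i in range(0, len(operands), chunk)]
    with ThreadPoolExecutor(max_workers=num_accelerators) as pool:
        partials = list(pool.map(accelerator_add, parts))
    # The computing device combines the sub-results into one final result.
    return sum(partials)
```

For instance, `divide_and_dispatch([1.0, 2.0, 3.0, 4.0])` sends `[1.0, 2.0]` and `[3.0, 4.0]` to the two simulated accelerators and combines the partial sums into `10.0`.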
The fifth (106-5) of the second plurality of communication subsystems 106 can be coupled to a host interface 120 and can communicate, via channels 103/105, with a host controller 101 of a host 102. While not illustrated, a communication subsystem (such as another of the second plurality of communication subsystems, not illustrated) can be coupled to logic circuitry. The logic circuitry can be on a same field programmable gate array (FPGA) as the computing devices 110, the first plurality of communication subsystems 108, the second plurality of communication subsystems 106, etc.

[0042] In one embodiment, the computing device 110 can process an operation queue of messages from a host 102. The operation queue can be processed by the computing device 110 by reading an input message with arguments and executing a desired function. Further, the computing device 110 can read and/or write data to at least one of the memory devices 116-1, . . ., 116-N in order to perform an operation. The computing device 110 can generate a work item message to be performed and generate a message to send to at least one of the hardware accelerators 114 indicating to perform a work item associated with the input message. The message can identify an operation or sub-operation to be performed in relation to the received input message and arguments, identify a hardware accelerator to be activated and a function to be performed, identify an input data location of the memory device 116, and identify an output data location of the memory device 116. As an example, the input data location can indicate a location of data in the memory device 116 to retrieve data from in order to perform the work item. The output data location can indicate a location in the memory device 116 to store the resultant output of the operation of the work item. The work item message can be sent to the corresponding hardware accelerator 114 queue (e.g., such as to the buffer 117 of a respective hardware accelerator 114).
As the results of the operation are generated or received, additional messages indicating additional operations to be performed can be generated and sent to hardware accelerators. The generation of messages and reception of results can continue until a final result of the initial operation brings the work item to completion. Upon completion of the work item, a completion message can be sent to the computing device 110 indicating the work item has been completed. A message can be sent to the host 102 indicating that the work item has been completed.

[0043] The hardware accelerator 114, upon receipt of a work item, can read the work item message, including corresponding data locations in the memory device 116. The hardware accelerator 114 can perform the requested accelerator operations contained within the message. In one embodiment, the hardware accelerator 114 can send a portion of the operation to an additional hardware accelerator 114 (e.g., hardware accelerator 114-1 can receive the message and can send a portion of the operation in the message to hardware accelerator 114-2 to be completed by hardware accelerator 114-2). The completed portion of the operation (executed by hardware accelerator 114-2) can be sent to the initial hardware accelerator (114-1) and the initial hardware accelerator (114-1) can combine the completed portion with other results to finalize completion of the operation in the work item message. Once fully complete, the hardware accelerator 114 can send a message to the computing device 110 indicating the work item has been completed.

[0044] In one embodiment, the host 102 can send a request for a computing device 110 to perform an operation. The computing device 110 can perform data initialization and write the data to a location in the memory device 116. As the computing device 110 generates 4K (or a multiple thereof) of data, the computing device 110 can create a work item to be completed by a hardware accelerator 114.
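The work-item message described in paragraphs [0042]-[0043] names an operation, an accelerator to activate, an input data location, and an output data location in the memory device. The sketch below models that message and the accelerator-side handling; all field names are assumptions, and a dict keyed by location stands in for the memory device.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    operation: str        # operation or sub-operation to perform
    accelerator_id: int   # hardware accelerator to activate
    input_location: int   # memory-device location of the input data
    output_location: int  # memory-device location for the result

def process_work_item(item, memory):
    """Simulated accelerator-side handling: read operands from the input
    location, perform the requested operation, store the result at the
    output location, and return a completion message."""
    operands = memory[item.input_location]
    if item.operation == "sum":
        memory[item.output_location] = sum(operands)
    return {"accelerator": item.accelerator_id, "status": "complete"}

# A computing device would queue this to the accelerator's buffer 117;
# here the 'memory device' is just a dict keyed by location.
memory = {0x100: [1, 2, 3], 0x200: None}
done = process_work_item(WorkItem("sum", 1, 0x100, 0x200), memory)
```

After processing, the result (`6`) sits at the output location `0x200` and the completion message is returned to the issuing computing device.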
The hardware accelerator 114 can then further process the data. When the hardware accelerator 114 completes the work item, the hardware accelerator can send a message to the computing device 110 that the work item is complete. The computing device 110 can either further process the data, send the data to another hardware accelerator 114, or leave the data in the memory device 116 until further processing is needed.

[0045] The host 102 can map the data into its own address space. The host 102 can map a file (e.g., a Linux file) into the computing device 110 processing unit address space. The computing device 110 has a map between its addresses and the locations within the memory device 116. When a hardware accelerator 114 work item is created, the address passed to the hardware accelerator 114 can be the logical block address of the memory device 116. The host 102 can be responsible for mapping the address between the file system and the 64-bit address space of the computing device 110. The computing device 110 can be responsible for mapping its addresses into logical block locations of the memory device 116. In this way, the hardware accelerators 114 are responsible for transferring data from one logical data location of the memory device 116 to another.

[0046] The host 102 can be a host system such as a personal laptop computer, a desktop computer, a digital camera, a smart phone, a memory card reader, and/or internet-of-things enabled device, among various other types of hosts, and can include a memory access device, e.g., a processor (or processing device). One of ordinary skill in the art will appreciate that “a processor” can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc. The host 102 can include a system motherboard and/or backplane and can include a number of processing resources (e.g., one or more processors, microprocessors, or some other type of controlling circuitry).
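The two-level address mapping of paragraph [0045], where the host maps between the file system and the computing device's 64-bit address space, and the computing device maps its addresses into logical block locations of the memory device, can be sketched as below. The block size and device base address are assumptions chosen only for illustration.

```python
# Assumed parameters: a 4 KiB logical block and an arbitrary base for the
# computing device's address space (both hypothetical values).
BLOCK_SIZE = 4096
DEVICE_BASE = 0x1000_0000

def host_to_device_address(file_offset):
    """Host responsibility: file-system offset -> computing-device address."""
    return DEVICE_BASE + file_offset

def device_address_to_lba(device_address):
    """Computing-device responsibility: device address -> (logical block
    address, offset within the block) of the memory device."""
    offset = device_address - DEVICE_BASE
    return offset // BLOCK_SIZE, offset % BLOCK_SIZE

# Byte 17 of the fourth 4 KiB block of a mapped file:
lba, within = device_address_to_lba(host_to_device_address(3 * 4096 + 17))
```

Here `lba` is `3` and `within` is `17`, i.e., the logical block address that would be passed to a hardware accelerator when the work item is created.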
In some embodiments, the host can include the host controller 101, which can be configured to control at least some operations of the host 102 by, for example, generating and transferring commands to the apparatus 104 to cause performance of operations such as extended memory operations. The host controller 101 can include circuitry (e.g., hardware) that can be configured to control at least some operations of the host 102. For example, the host controller 101 can be an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other combination of circuitry and/or logic configured to control at least some operations of the host 102.

[0047] The system 100 can include separate integrated circuits, or the host 102, the first plurality of communication subsystems 108, the second plurality of communication subsystems 106, the controller 112, the on-chip accelerators 114, the host interface 120, and/or the memory devices 116-1, . . ., 116-N can be on the same integrated circuit. The system 100 can be, for instance, a server system and/or a high performance computing (HPC) system and/or a portion thereof. Although the example shown in Figure 1 illustrates a system having a Von Neumann architecture, embodiments of the present disclosure can be implemented in non-Von Neumann architectures, which may not include one or more components (e.g., CPU, ALU, etc.) often associated with a Von Neumann architecture.

[0048] The controller 112 can be configured to request a block of data from one or more of the memory devices 116-1, . . ., 116-N and cause the cores 110-1, . . ., 110-N, which may be referred to in the alternative as “computing devices,” herein, to perform an operation (e.g., an extended memory operation) on the block of data. The operation may be performed to evaluate a function that can be specified by a single address and one or more operands associated with the block of data.
The controller 112 can be further configured to cause a result of the extended memory operation to be stored in one or more of the computing devices 110-1, . . ., 110-N through the second 106 and/or the first 108 communication subsystems and/or to be transferred to a channel (e.g., communication paths 103 and/or 105) and/or the host 102.

[0049] In some embodiments, the second plurality of communication subsystems 106 can request a remote command, start a DMA command, send a read/write location, and/or send a start function execution command to one of the plurality of computing devices 110. In some embodiments, the second plurality of communication subsystems 106 can request that a block of data be copied from a buffer of a computing device 110 to a buffer of a memory controller 112 or memory device 116. Vice versa, one of the second plurality of communication subsystems 106 can request that a block of data be copied to the buffer of the computing device 110 from the buffer of the memory controller 112 or memory device 116. The second plurality of communication subsystems 106 can request that a block of data be copied to a computing device 110 from a buffer of the host 102 or, vice versa, request that a block of data be copied from a computing device 110 to a host 102. The second plurality of communication subsystems 106 can request that a block of data be copied to a buffer of the host 102 from a buffer of the memory controller 112 or memory device 116. Vice versa, the second plurality of communication subsystems 106 can request that a block of data be copied from a buffer of the host 102 to a buffer of the memory controller 112 or memory device 116. Further, in some embodiments, the second plurality of communication subsystems 106 can request that a command from a host be executed on a computing device 110. The second plurality of communication subsystems 106 can request that a command from a computing device 110 be executed on an additional computing device 110.
The second plurality of communication subsystems 106 can request that a command from a memory controller 112 be executed on a computing device 110. In some embodiments, as described in more detail in connection with Figure 3, herein, the second plurality of communication subsystems 106 can include at least a portion of a controller (not illustrated).

[0050] In some embodiments, the second plurality of communication subsystems 106 can transfer a block of data (e.g., a direct memory access (DMA) block of data) from a computing device 110 to a media device 116 (via the memory controller 112) or, vice versa, can transfer a block of data to a computing device 110 from a media device 116. The second plurality of communication subsystems 106 can transfer a block of data (e.g., a DMA block) from a computing device 110 to a host 102 or, vice versa, to a computing device 110 from a host 102. Further, the second plurality of communication subsystems 106 can transfer a block of data (e.g., a DMA block) from a host 102 to a media device 116 or, vice versa, to a host 102 from a media device 116. In some embodiments, the second plurality of communication subsystems 106 can receive an output (e.g., data on which an extended memory operation has been performed) from the computing devices 110-1, . . ., 110-N and transfer the output from the computing devices 110-1, . . ., 110-N to a controller 115 of the apparatus 104 and/or the host 102, and vice versa. For example, the second plurality of communication subsystems 106 may be configured to receive data that has been subjected to an extended memory operation by the computing devices 110-1, . . ., 110-N and transfer the data that corresponds to the result of the extended memory operation to a controller 115 and/or the host 102. In some embodiments, the second plurality of communication subsystems 106 can include at least a portion of the controller 115.
For example, the second plurality of communication subsystems 106 can include the circuitry that comprises the controller 115, or a portion thereof.

[0051] The memory controller 112 can be a “standard” or “dumb” memory controller. For example, the memory controller 112 can be configured to perform simple operations such as copy, write, read, error correct, etc. for the memory devices 116-1, . . ., 116-N. However, in some embodiments, the memory controller 112 does not perform processing (e.g., operations to manipulate data) on data associated with the memory devices 116-1, . . ., 116-N. For example, the memory controller 112 can cause a read and/or write operation to be performed to read or write data from or to the memory devices 116-1, . . ., 116-N via the communication paths 107-1, . . ., 107-N, but the memory controller 112 may not perform processing on the data read from or written to the memory devices 116-1, . . ., 116-N. In some embodiments, the memory controller 112 can be a non-volatile memory controller, although embodiments are not so limited.

[0052] The embodiment of Figure 1 can include additional circuitry that is not illustrated so as not to obscure embodiments of the present disclosure. For example, the apparatus 104 can include address circuitry to latch address signals provided over I/O connections through I/O circuitry. Address signals can be received and decoded by a row decoder and a column decoder to access the memory devices 116-1, . . ., 116-N. It will be appreciated by those skilled in the art that the number of address input connections can depend on the density and architecture of the memory devices 116-1, . . ., 116-N.

[0053] In some embodiments, extended memory operations can be performed using the computing system 100 shown in Figure 1 by selectively storing or mapping data (e.g., a file) into a computing device 110. The data can be selectively stored in an address space of the computing memory.
In some embodiments, the data can be selectively stored or mapped in the computing device 110 in response to a command received from the host 102. In embodiments in which the command is received from the host 102, the command can be transferred to the computing device 110 via an interface (e.g., communication paths 103 and/or 105) associated with the host 102 and via the first and second plurality of communication subsystems 108 and 106, respectively. The interface(s) 103/105, the first plurality of communication subsystems 108, and the second plurality of communication subsystems 106 can be peripheral component interconnect express (PCIe) buses, double data rate (DDR) interfaces, interconnect interfaces (such as AXI interconnect interfaces), multiplexers (muxes), or other suitable interfaces or buses. Embodiments are not so limited, however.

[0054] In a non-limiting example in which the data (e.g., data to be used in performance of an extended memory operation) is mapped into the computing device 110, the host controller 101 can transfer a command to the computing device 110 to initiate performance of an extended memory operation using the data mapped into the computing device 110. In some embodiments, the host controller 101 can look up an address (e.g., a physical address) corresponding to the data mapped into the computing device 110 and determine, based on the address, which computing device (e.g., the computing device 110-1) the address (and hence, the data) is mapped to. The command can then be transferred to the computing device (e.g., the computing device 110-1) that contains the address (and hence, the data).

[0055] In some embodiments, the data can be a 64-bit operand, although embodiments are not limited to operands having a specific size or length.
In an embodiment in which the data is a 64-bit operand, once the host controller 101 transfers the command to initiate performance of the extended memory operation to the correct computing device (e.g., the computing device 110-1) based on the address at which the data is stored, the computing device (e.g., the computing device 110-1) can perform the extended memory operation using the data.

[0056] In some embodiments, the computing devices 110 can be separately addressable across a contiguous address space, which can facilitate performance of extended memory operations as described herein. That is, an address at which data is stored, or to which data is mapped, can be unique for all the computing devices 110 such that when the host controller 101 looks up the address, the address corresponds to a location in a particular computing device (e.g., the computing device 110-1).

[0057] For example, a first computing device 110-1 can have a first set of addresses associated therewith, a second computing device 110-2 can have a second set of addresses associated therewith, a third computing device 110-3 can have a third set of addresses associated therewith, through the n-th computing device (e.g., the computing device 110-N), which can have an n-th set of addresses associated therewith. That is, the first computing device 110-1 can have a set of addresses 0000000 to 0999999, the second computing device 110-2 can have a set of addresses 1000000 to 1999999, the third computing device 110-3 can have a set of addresses 2000000 to 2999999, etc.
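Because the example ranges above are contiguous and non-overlapping, the computing device that owns an address follows directly from integer division; this is how a host controller could resolve an address to a device without a lookup table. The range size mirrors the example ranges in the text.

```python
RANGE_SIZE = 1_000_000  # each computing device's share, per the example above

def owning_device(address):
    """Return the 1-based computing-device number (110-1, 110-2, ...) whose
    contiguous, non-overlapping address range contains `address`."""
    return address // RANGE_SIZE + 1
```

For example, address 2500000 falls in the range 2000000 to 2999999, so `owning_device(2_500_000)` returns `3`, i.e., computing device 110-3.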
It will be appreciated that these address numbers are merely illustrative, non-limiting, and can be dependent on the architecture and/or size (e.g., storage capacity) of the computing devices 110.

[0058] As a non-limiting example in which the extended memory operation comprises a floating-point-add-accumulate operation (FLOATING-POINT-ADD-ACCUMULATE), the computing devices 110 can treat the destination address as a floating-point number, add the floating-point number to the argument stored at the address of the computing device 110, and store the result back in the original address. For example, when the host controller 101 (or an apparatus controller 115, not shown) initiates performance of a floating-point-add-accumulate extended memory operation, the address of the computing device 110 that the host looks up (e.g., the address in the computing device to which the data is mapped) can be treated as a floating-point number and the data stored in the address can be treated as an operand for performance of the extended memory operation. Responsive to receipt of the command to initiate the extended memory operation, the computing device 110 to which the data (e.g., the operand in this example) is mapped can perform an addition operation to add the data to the address (e.g., the numerical value of the address) and store the result of the addition back in the original address of the computing device 110.

[0059] As described above, performance of such extended memory operations can, in some embodiments, require only a single command (e.g., request command) to be transferred from the host 102 (e.g., from the host controller 101) to the apparatus 104 or from the controller 115 to the computing device(s) 110.
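The floating-point-add-accumulate example of paragraph [0058] can be expressed directly: the numerical value of the destination address is one operand, the data stored at that address is the other, and the sum is written back to the same address. A dict stands in for the computing device's memory in this sketch.

```python
def fp_add_accumulate(memory, address):
    """Treat the destination address as a floating-point number, add it to
    the operand stored at that address, and store the result back in the
    original address, as in the example above."""
    memory[address] = float(address) + memory[address]
    return memory[address]

device_memory = {100: 2.5}       # operand 2.5 mapped at address 100
result = fp_add_accumulate(device_memory, 100)  # 100.0 + 2.5
```

After the operation, address 100 holds `102.5`; no separate operand load or result store traverses the host interface, which is the point of the single-command flow described in paragraph [0059].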
In contrast to some previous approaches, this can reduce the amount of time consumed in performance of operations, for example, the time needed for multiple commands to traverse the interface(s) 103, 105 and/or for data, such as operands, to be moved from one address to another within the computing device(s) 110.

[0060] In addition, performance of extended memory operations in accordance with the disclosure can further reduce an amount of processing power or processing time since the data mapped into the computing device 110 in which the extended memory operation is performed can be utilized as an operand for the extended memory operation and/or the address to which the data is mapped can be used as an operand for the extended memory operation, in contrast to approaches in which the operands must be retrieved and loaded from different locations prior to performance of operations. That is, at least because embodiments herein allow for loading of the operand to be skipped, performance of the computing system 100 may be improved in comparison to approaches that load the operands and subsequently store a result of an operation performed between the operands.

[0061] Further, in some embodiments, because the extended memory operation can be performed within a computing device 110 using the address and the data stored in the address and, in some embodiments, because the result of the extended memory operation can be stored back in the original address, locking or mutex operations may be relaxed or not required during performance of the extended memory operation. Reducing or eliminating performance of locking or mutex operations on threads during performance of the extended memory operations can lead to increased performance of the computing system 100 because extended memory operations can be performed in parallel within a same computing device 110 or across two or more of the computing devices 110.
[0062] In some embodiments, valid mappings of data in the computing devices 110 can include a base address, a segment size, and/or a length. The base address can correspond to an address in the computing device 110 in which the data mapping is stored. The segment size can correspond to an amount of data (e.g., in bytes) that the computing system 100 can process, and the length can correspond to a quantity of bits corresponding to the data. It is noted that, in some embodiments, the data stored in the computing device(s) 110 can be uncacheable on the host 102. For example, the extended memory operations can be performed entirely within the computing devices 110 without encumbering or otherwise transferring the data to or from the host 102 during performance of the extended memory operations.

[0063] In a non-limiting example in which the base address is 4096, the segment size is 1024, and the length is 16,386, a mapped address, 7234, may be in a third segment, which can correspond to a third computing device (e.g., the computing device 110-3) among the plurality of computing devices 110. In this example, the host 102 and/or the first 108 and second 106 communication subsystems can forward a command (e.g., a request) to perform an extended memory operation to the third computing device 110-3. The third computing device 110-3 can determine if data is stored in the mapped address in a memory of the third computing device 110-3. If data is stored in the mapped address (e.g., the address in the third computing device 110-3), the third computing device 110-3 can perform a requested extended memory operation using that data and can store a result of the extended memory operation back into the address in which the data was originally stored.
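The segment arithmetic in the example of paragraph [0063] can be checked directly: with base address 4096 and segment size 1024, the segment of a mapped address follows from integer division. Whether segments are counted from zero or from one, and how a segment number maps onto a computing-device number, is not fixed by the text, so the zero-based convention below is an assumption.

```python
BASE_ADDRESS = 4096   # values taken from the example in the text
SEGMENT_SIZE = 1024
LENGTH = 16_386

def segment_index(mapped_address, base=BASE_ADDRESS, size=SEGMENT_SIZE):
    """Zero-based index of the segment containing `mapped_address`."""
    return (mapped_address - base) // size

seg = segment_index(7234)  # (7234 - 4096) // 1024
```

For the example's mapped address 7234, the division yields index `3`; the text routes this request to the third computing device 110-3, so the example's counting convention differs by one from the zero-based index used here.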
[0064] In some embodiments, the computing device 110 that contains the data that is requested for performance of an extended memory operation can be determined by the host controller 101 and/or the first 108 and/or second 106 communication subsystems. For example, a portion of a total address space available to all the computing devices 110 can be allocated to each respective computing device. Accordingly, the host controller 101 and/or the first 108 and/or second 106 communication subsystems can be provided with information corresponding to which portions of the total address space correspond to which computing devices 110 and can therefore direct the relevant computing devices 110 to perform extended memory operations. In some embodiments, the host controller 101 and/or the second 106 communication subsystems can store addresses (or address ranges) that correspond to the respective computing devices 110 in a data structure, such as a table, and direct performance of the extended memory operations to the computing devices 110 based on the addresses stored in the data structure.

[0065] Embodiments are not so limited, however, and in some embodiments, the host controller 101 and/or the second communication subsystems 106 can determine a size (e.g., an amount of data) of the memory resource(s) and, based on the size of the memory resource(s) associated with each computing device 110 and the total address space available to all the computing devices 110, determine which computing device 110 stores data to be used in performance of an extended memory operation.
In embodiments in which the host controller 101 and/or the second communication subsystems 106 determine the computing device 110 that stores the data to be used in performance of an extended memory operation based on the total address space available to all the computing devices 110 and the amount of memory resource(s) available to each computing device 110, it can be possible to perform extended memory operations across multiple non-overlapping portions of the computing device memory resource(s).

[0066] Continuing with the above example, if there is not data in the requested address, the third computing device 110-3 can request the data as described in more detail in connection with Figures 2-5, herein, and perform the extended memory operation once the data is loaded into the address of the third computing device 110-3. In some embodiments, once the extended memory operation is completed by the computing device (e.g., the third computing device 110-3 in this example), the host 102 can be notified and/or a result of the extended memory operation can be transferred to the memory devices 116 and/or the host 102.

[0067] In some embodiments, the memory controller 112 can be configured to retrieve blocks of data from a memory device(s) 116-1, . . ., 116-N coupled to the apparatus 104 in response to a request from a controller of the apparatus 104 or a host 102. The memory controller 112 can subsequently cause the blocks of data to be transferred to the computing devices 110-1, . . ., 110-N and/or the apparatus controller. Similarly, the memory controller 112 can be configured to receive blocks of data from the computing devices 110 and/or the controller 115.
The memory controller 112 can subsequently cause the blocks of data to be transferred to a memory device 116 coupled to the storage controller 104.[0068] The blocks of data can be approximately 4 kilobytes in size (although embodiments are not limited to this particular size) and can be processed in a streaming manner by the computing devices 110-1, . . ., 110-N in response to one or more commands generated by the controller 115 and/or a host and sent via the second communication subsystems 106. In some embodiments, the blocks of data can be 32-bit, 64-bit, 128-bit, etc. words or chunks of data, and/or the blocks of data can correspond to operands to be used in performance of an extended memory operation.[0069] For example, as described in more detail in connection with Figures 2-5, herein, because the computing devices 110 can perform an extended memory operation on (e.g., process) a second block of data in response to completion of performance of an extended memory operation on a preceding block of data, the blocks of data can be continuously streamed through the computing devices 110 while the blocks of data are being processed by the computing devices 110. In some embodiments, the blocks of data can be processed in a streaming fashion through the computing devices 110 in the absence of an intervening command from the controller and/or the host 102. That is, in some embodiments, the controller 115 (or host 102) can issue a command to cause the computing devices 110 to process blocks of data received thereto, and blocks of data that are subsequently received by the computing devices 110 can be processed in the absence of an additional command from the controller.[0070] In some embodiments, processing the blocks of data can include performing an extended memory operation using the blocks of data. For example, the computing devices 110-1, . .
., 110-N can, in response to commands from the controller via the second plurality of communication subsystems 106, perform extended memory operations using the blocks of data to evaluate one or more functions, remove unwanted data, extract relevant data, or otherwise use the blocks of data in connection with performance of an extended memory operation.[0071] In a non-limiting example in which the data (e.g., data to be used in performance of an extended memory operation) is mapped into one or more of the computing devices 110, the controller can transfer a command to the computing device 110 to initiate performance of an extended memory operation using the data mapped into the computing device(s) 110. In some embodiments, the controller 115 can look up an address (e.g., a physical address) corresponding to the data mapped into the computing device(s) 110 and determine, based on the address, which computing device (e.g., the computing device 110-1) the address (and hence, the data) is mapped to. The command can then be transferred to the computing device (e.g., the computing device 110-1) that contains the address (and hence, the data). In some embodiments, the command can be transferred to the computing device (e.g., the computing device 110-1) via the second communication subsystem 106.[0072] The controller 115 (or a host) can be further configured to send commands to the computing devices 110 to allocate and/or de-allocate resources available to the computing devices 110 for use in performing extended memory operations using the blocks of data. In some embodiments, allocating and/or de-allocating resources available to the computing devices 110 can include selectively enabling some of the computing devices 110 while selectively disabling others.
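The selective enablement just described can be sketched as follows, assuming a simple round-robin assignment of blocks to only the enabled computing devices. The `dispatch` helper and the device labels are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch: the controller enables only a subset of computing
# devices for a job and assigns streamed blocks to them round-robin,
# preserving the order in which operations are issued.

def dispatch(blocks, devices, enabled):
    """Return an ordered schedule of (device, block) pairs over the
    enabled subset of devices; disabled devices receive no work."""
    active = [d for d in devices if d in enabled]
    return [(active[i % len(active)], block) for i, block in enumerate(blocks)]

# Only two of the three computing devices are enabled for this job.
schedule = dispatch(
    blocks=["b0", "b1", "b2", "b3"],
    devices=["110-1", "110-2", "110-3"],
    enabled={"110-1", "110-3"},
)
```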
For example, if fewer than the total number of computing devices 110 are required to process the blocks of data, the controller 115 can send a command to enable only those computing devices 110 that are to be used for processing the blocks of data. [0073] The controller 115 can, in some embodiments, be further configured to send commands to synchronize performance of operations, such as extended memory operations, performed by the computing devices 110. For example, the controller 115 (and/or a host) can send a command to a first computing device 110-1 to cause the first computing device 110-1 to perform a first extended memory operation, and the controller 115 (or the host) can send a command to a second computing device 110-2 to cause the second computing device 110-2 to perform a second extended memory operation. Synchronization of performance of operations, such as extended memory operations, performed by the computing devices 110 by the controller 115 can further include causing the computing devices 110 to perform particular operations at a particular time or in a particular order.[0074] As described above, data that results from performance of an extended memory operation can be stored in the original address in the computing device 110 in which the data was stored prior to performance of the extended memory operation. In some embodiments, however, blocks of data that result from performance of the extended memory operation can be converted into logical records subsequent to performance of the extended memory operation. The logical records can comprise data records that are independent of their physical locations.
For example, the logical records may be data records that point to an address (e.g., a location) in at least one of the computing devices 110 where physical data corresponding to performance of the extended memory operation is stored.[0075] In some embodiments, the result of the extended memory operation can be stored in an address of a computing device memory that is the same as the address in which the data was stored prior to performance of the extended memory operation. Embodiments are not so limited, however, and the result of the extended memory operation can instead be stored in an address of the computing device memory that is different from the address in which the data was stored prior to performance of the extended memory operation. In some embodiments, the logical records can point to these address locations such that the result(s) of the extended memory operation can be accessed from the computing devices 110 and transferred to circuitry external to the computing devices 110 (e.g., to a host). [0076] In some embodiments, the controller 115 can receive and/or send blocks of data directly to and from the memory controller 112.
This can allow the controller 115 to transfer blocks of data that are not processed (e.g., blocks of data that are not used in performance of extended memory operations) by the computing devices 110 to and from the memory controller 112.[0077] For example, if the controller 115 receives unprocessed blocks of data from a host 102 coupled to the storage controller 104 that are to be stored by memory device(s) 116 coupled to the storage controller 104, the controller 115 can cause the unprocessed blocks of data to be transferred to the memory controller 112, which can, in turn, cause the unprocessed blocks of data to be transferred to memory device(s) coupled to the storage controller 104.[0078] Similarly, if the host requests an unprocessed (e.g., a full) block of data (e.g., a block of data that is not processed by the computing devices 110), the memory controller 112 can cause unprocessed blocks of data to be transferred to the controller 115, which can subsequently transfer the unprocessed blocks of data to the host.[0079] Figure 2 is a functional block diagram in the form of a computing system 200 including an apparatus 204 including a first plurality of communication subsystems 208, a second plurality of communication subsystems 206, and a plurality of memory devices 216 in accordance with a number of embodiments of the present disclosure. As used herein, an “apparatus” can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example. In the embodiment illustrated in Figure 2, memory devices 216-1, . . ., 216-N can include one or more memory modules (e.g., double data rate (DDR) memory, three-dimensional (3D) cross-point memory, NAND memory, single in-line memory modules, dual in-line memory modules, etc.). The memory devices 216-1, . . ., 216-N can include volatile memory and/or non-volatile memory.
In a number of embodiments, memory devices 216-1, . . ., 216-N can include a multi-chip device. A multi-chip device can include a number of different memory types and/or memory modules. For example, a memory system can include non-volatile or volatile memory on any type of module. [0080] As illustrated in Figure 2, and in contrast to Figure 1, a plurality of computing devices 210-1, 210-2, 210-3, 210-4, 210-5 (hereinafter referred to collectively as plurality of computing devices 210) can be coupled to a first 208-1 of the first plurality of communication subsystems 208, which is coupled to the plurality of hardware accelerators 214 through a second 206-2 of the second plurality of communication subsystems 206. In one embodiment, the first plurality of communication subsystems 208 can be a plurality of multiplexers and the second plurality of communication subsystems 206 can be a plurality of AXI interconnects. Further, the first communication subsystem 208-1 is coupled directly to a buffer 219, which is coupled to the second communication subsystem 206-2. The second 206-2 of the second plurality of communication subsystems 206 is coupled directly to an additional buffer 217. The additional buffer 217 is coupled to a second 208-2 of the first plurality of communication subsystems 208. The second 208-2 of the first plurality of communication subsystems 208 can be coupled to each of the plurality of hardware accelerators 214-1, 214-2, 214-3. The hardware accelerators 214 can be on a same field programmable gate array (FPGA) as the computing devices 210, first plurality of communication subsystems 208, second plurality of communication subsystems 206, etc.
The hardware accelerators 214 can be used for performing a number of posit operations, and/or for communication with an internal SRAM on the FPGA.[0081] The first plurality of communication subsystems 208 can include circuitry and/or logic configured to allocate and de-allocate resources to the computing devices 210 during performance of operations described herein. For example, the circuitry and/or logic can allocate and/or de-allocate resources to the computing devices 210 during performance of extended memory operations described herein. While the examples described above include a particular number of multiplexers within a particular arrangement, examples are not so limited. For example, a multiplexer can be positioned between the buffer 219 and the second communication subsystem 206-2, between the second communication subsystem 206-2 and the buffer 217, etc. A third 208-3 of the first plurality of communication subsystems 208 can be coupled to a third of the second plurality of communication subsystems 206-3. The third communication subsystem 206-3 can be coupled to a host interface 220. In one example, the third communication subsystem 206-3 can be coupled to the host interface 220 via a number of AXI buses.[0082] As is illustrated, a first (206-1) of the second plurality of communication subsystems 206 can be coupled to the controller (e.g., memory controller) 212. The controller 212 can be coupled to a number of memory devices 216-1, . . ., 216-N via a number of channels 207-1, . . ., 207-N.[0083] Figure 3 is a functional block diagram in the form of a computing system 300 including an apparatus 304 including a plurality of communication subsystems 306, 308 and a plurality of memory devices 316 in accordance with a number of embodiments of the present disclosure.
As used herein, an “apparatus” can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example. In the embodiment illustrated in Figure 3, memory devices 316-1, . . ., 316-N can include one or more memory modules (e.g., double data rate (DDR) memory, three-dimensional (3D) cross-point memory, NAND memory, single in-line memory modules, dual in-line memory modules, etc.). The memory devices 316-1, . . ., 316-N can include volatile memory and/or non-volatile memory. In a number of embodiments, memory devices 316-1, . . ., 316-N can include a multi-chip device. A multi-chip device can include a number of different memory types and/or memory modules. For example, a memory system can include non-volatile or volatile memory on any type of module.[0084] As illustrated in Figure 3, the apparatus 304 can include a computing device (e.g., computing core). In some embodiments, the apparatus 304 can be an FPGA. As illustrated in Figure 3, the plurality of computing devices 310 can include ports 311 that can each be coupled to the plurality of communication subsystems 306 (as an example, without being coupled via an additional set of communication subsystems, such as communication subsystems 108 and 208 (which may be multiplexers) illustrated in Figures 1 and 2, respectively). The computing device 310 can be coupled to the plurality of communication subsystems 306 via corresponding port connections including a memory port (“MemPort”) 311-1, system port (“SystemPort”) 311-2, peripheral port (“PeriphPort”) 311-3, and front port (“FrontPort”) 311-4. [0085] The memory port 311-1 can be directly coupled to a communication subsystem 306-1 specifically designated to receive data from a memory port and transfer the data to a memory controller 312.
The system port 311-2 can be directly coupled to a communication subsystem 308 that is further coupled to a plurality of buffers 319-1, 319-2, 319-3 (hereinafter referred to collectively as buffers 319). Each of the plurality of buffers 319 can be coupled to a respective one of a plurality of communication subsystems 306-2, 306-3, 306-4. The plurality of communication subsystems 306-2, 306-3, 306-4 can be coupled to an additional plurality of buffers 317-1, 317-2, 317-3. The plurality of buffers 317 are each coupled to a respective one of a plurality of hardware accelerators 314-1, 314-2, 314-3. The plurality of hardware accelerators 314 are coupled to logic 313. The plurality of communication subsystems 306-2, 306-3, 306-4 are each specifically designated to receive data from the system port 311-2 and transfer the data to a respective accelerator (e.g., an on-chip accelerator) 314, which can then transfer data to additional logic circuitry 313.[0086] The peripheral port 311-3 can be directly coupled to a communication subsystem 306-5 specifically designated to receive data from the peripheral port 311-3 and transfer the data to a serial port 318. The front port 311-4 can be directly coupled to a communication subsystem 306-6 specifically designated to receive data from the front port 311-4 and transfer the data to a host interface 320, and subsequently to a host 302 via channels 303 and/or 305. In this embodiment, the hardware accelerators 314 may be coupled to the computing device 310 via a multiplexer. In contrast, a multiplexer may not be used to couple the controller 312, the serial port 318, and/or the host interface 320 to the computing device 310; rather, the ports and the communication subsystems are directly connected for data transfer.[0087] In some embodiments, the communication subsystems 306 can facilitate visibility between respective address spaces of the computing device 310.
For example, the computing device 310 can, responsive to receipt of data and/or a file, store the data in a memory resource of the computing device 310. The computing device can associate an address (e.g., a physical address) corresponding to a location in the memory resource of the computing device 310 in which the data is stored. In addition, the computing device 310 can parse (e.g., break) the address associated with the data into logical blocks. [0088] In some embodiments, the zeroth logical block associated with the data can be transferred to a processing device (e.g., a reduced instruction set computing (RISC) device). A particular computing device (e.g., computing device 110, 210, 310) can be configured to recognize that a particular set of logical addresses are accessible to that computing device (e.g., 210-2), while other computing devices (e.g., computing devices 210-3, 210-4, etc.) can be configured to recognize that different sets of logical addresses are accessible to those computing devices.
Stated alternatively, a first computing device (e.g., the computing device 210-2) can have access to a first set of logical addresses associated with that computing device (210-2), and a second computing device (e.g., the computing device 210-3) can have access to a second set of logical addresses associated therewith, etc.[0089] If data corresponding to the second set of logical addresses (e.g., the logical addresses accessible by the second computing device 210-3) is requested at the first computing device (e.g., the computing device 210-2), the communication subsystems 306 can facilitate communication between the first computing device (e.g., the computing device 210-2) and the second computing device (e.g., the computing device 210-3) to allow the first computing device (e.g., the computing device 210-2) to access the data corresponding to the second set of logical addresses (e.g., the set of logical addresses accessible by the second computing device 210-3). That is, the communication subsystem 308 can facilitate communication between the computing device 310 (e.g., 210-1) and additional computing devices (e.g., computing devices 210-2, 210-3, 210-4) to allow address spaces of the computing devices to be visible to one another. [0090] In some embodiments, communication between the computing devices 110, 210, 310 to facilitate address visibility can include receiving, by an event queue of the first computing device (e.g., the computing device 210-1), a message requesting access to the data corresponding to the second set of logical addresses, loading the requested data into a memory resource of the first computing device, and transferring the requested data to a message buffer. Once the data has been buffered by the message buffer, the data can be transferred to the second computing device (e.g., the computing device 210-2) via the communication subsystem 308.
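The address-visibility exchange described above can be sketched as follows: when a request at one computing device targets an address owned by another, the request is placed on the owner's event queue, the owner loads the data, and the data is returned through a message buffer. The class, the `request_from` helper, and all labels are illustrative assumptions, not the disclosure's implementation.

```python
# Hypothetical sketch of cross-device address visibility via an event queue
# and message buffer. Each device "owns" a set of addresses; requests for
# non-local addresses are forwarded to the owning device.
from collections import deque

class ComputingDevice:
    def __init__(self, name, memory):
        self.name = name
        self.memory = memory        # address -> data owned by this device
        self.event_queue = deque()  # incoming access requests

    def owns(self, address):
        return address in self.memory

    def request_from(self, other, address):
        """Ask another device for data at an address it owns."""
        other.event_queue.append((self.name, address))   # message to owner
        requester, addr = other.event_queue.popleft()    # owner handles it
        message_buffer = other.memory[addr]              # load and buffer
        return message_buffer                            # transfer back

dev_a = ComputingDevice("210-2", {0x10: "a"})
dev_b = ComputingDevice("210-3", {0x20: "b"})

# 0x20 is not in dev_a's address set, so the request is forwarded to dev_b.
value = dev_a.memory[0x20] if dev_a.owns(0x20) else dev_a.request_from(dev_b, 0x20)
```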
[0091] For example, during performance of an extended memory operation, the controller 115, 215, 315 and/or a first computing device (e.g., the computing device 210-1) can determine that the address specified by a host command (e.g., a command to initiate performance of an extended memory operation generated by a host such as the host 102 illustrated in Figure 1) corresponds to a location in a memory resource of a second computing device (e.g., the computing device 210-2) among the plurality of computing devices (110, 210). In this case, a computing device command can be generated and sent from the controller 115, 215, 315 and/or the first computing device (210-1) to the second computing device (210-2) to initiate performance of the extended memory operation using an operand stored in the memory resource of the second computing device (210-2) at the address specified by the computing device command.[0092] In response to receipt of the computing device command, the second computing device (210-2) can perform the extended memory operation using the operand stored in the memory resource of the second computing device (210-2) at the address specified by the computing device command. This can reduce command traffic between the host and the storage controller and/or the computing devices (210, 310), because the host need not generate additional commands to cause performance of the extended memory operation, which can increase overall performance of a computing system by, for example, reducing a time associated with transfer of commands to and from the host.[0093] In some embodiments, the controller 115, 215, 315 can determine that performing the extended memory operation can include performing multiple sub-operations. For example, an extended memory operation may be parsed or broken into two or more sub-operations that can be performed as part of performing the overall extended memory operation.
In this case, the controller 115, 215, 315 and/or the communication subsystems (106, 108, 206, 208, 308) can utilize the above described address visibility to facilitate performance of the sub-operations by various computing devices 110, 210, 310. In response to completion of the sub-operations, the controller 115, 215, 315 can cause the results of the sub-operations to be coalesced into a single result that corresponds to a result of the extended memory operation. [0094] In other embodiments, an application requesting data that is stored in the computing devices 110, 210, 310 can know (e.g., can be provided with information corresponding to) which computing devices 110, 210, 310 include the data requested. In this example, the application can request the data from the relevant computing device 110, 210, 310 and/or the address may be loaded into multiple computing devices 110, 210, 310 and accessed by the application requesting the data via the communication subsystems 108, 106, 208, 206, 308.[0095] The controller 115, 215, 315 can be discrete circuitry that is physically separate from the communication subsystems 108, 106, 208, 206, 308 and can be provided as one or more integrated circuits that allow communication between the computing devices 110, 210, 310, the memory controller 112, 212, 312, and/or the controller 115, 215, 315.
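The sub-operation flow described in paragraph [0093] can be sketched as follows, assuming a simple sum-reduction as the extended memory operation: the operand data is partitioned across devices, each partition is processed as a sub-operation, and the partial results are coalesced into one result. The helper name and the choice of reduction are illustrative.

```python
# Hypothetical sketch: split an extended memory operation into sub-operations
# run on separate computing devices, then coalesce the partial results.

def run_extended_op(data, num_devices):
    """Partition the operand data across devices, run a sub-operation
    (here, a partial sum) on each partition, then coalesce."""
    chunk = -(-len(data) // num_devices)  # ceiling division
    partials = [sum(data[i:i + chunk]) for i in range(0, len(data), chunk)]
    return sum(partials)  # coalesce into the single extended-operation result

result = run_extended_op(list(range(10)), num_devices=3)
```

Coalescing here is itself a sum because the sub-operations are partial sums; other extended memory operations would coalesce differently (e.g., concatenation for a filter).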
Non-limiting examples of communication subsystems 108, 106, 208, 206, 308 can include a XBAR or other communications subsystem that allows for interconnection and/or interoperability of the controller 115, 215, 315, the computing devices 110, 210, 310, and/or the memory controller 112, 212, 312.[0096] As described above, responsive to receipt of a command generated by the controller 115, 215, 315, the communication subsystems 108, 106, 208, 206, 308, and/or a host (e.g., the host 102 illustrated in Figure 1), performance of extended memory operations using data stored in the computing devices 110, 210, 310 and/or from blocks of data streamed through the computing devices 110, 210, 310 can be realized.[0097] Figure 4 is a functional block diagram in the form of a computing system 400 including an apparatus 404 including a first plurality of communication subsystems 406, a second communication subsystem 408, and a plurality of memory devices 416 in accordance with a number of embodiments of the present disclosure.[0098] As illustrated in Figure 4, the apparatus 404 can include a computing device (e.g., computing core). In some embodiments, the apparatus 404 can be an FPGA. As illustrated in Figure 4, and similarly in Figure 3, the plurality of computing devices 410 can include ports 411 that can each be coupled to the plurality of communication subsystems 406 (as an example, without being coupled via an additional set of communication subsystems, such as communication subsystems 108 and 208 (which may be multiplexers) illustrated in Figures 1 and 2, respectively).
The computing device 410 can be coupled to the plurality of communication subsystems 406 via corresponding port connections including a memory port (“MemPort”) 411-1, system port (“SystemPort”) 411-2, peripheral port (“PeriphPort”) 411-3, and front port (“FrontPort”) 411-4.[0099] The memory port 411-1 can be directly coupled to a communication subsystem 406-1 specifically designated to receive data from a memory port and transfer the data to a memory controller 412. In contrast to Figure 3, Figure 4 illustrates the system port 411-2 being directly coupled to a buffer 419 that is coupled directly to the second communication subsystem 406-2. The second communication subsystem 406-2 is likewise coupled to an additional buffer 417. The additional buffer 417 is coupled to an additional communication subsystem 408. As is shown similarly in Figure 2, Figure 4 illustrates a communication subsystem 408 that is coupled to each of a plurality of hardware accelerators 414-1, 414-2, 414-3, respectively.[00100] Further, the peripheral port 411-3 can be directly coupled to a communication subsystem 406-5 specifically designated to receive data from the peripheral port 411-3 and transfer the data to a serial port 418. The front port 411-4 can be directly coupled to a communication subsystem 406-6 specifically designated to receive data from the front port 411-4 and transfer the data to a host interface 420, and subsequently to a host 402 via channels 403 and/or 405. In this embodiment, the hardware accelerators 414 may be coupled to the computing device 410 via a multiplexer.
In contrast, a multiplexer may not be used to couple the controller 412, the serial port 418, and/or the host interface 420 to the computing device 410; rather, the ports and the communication subsystems are directly connected for data transfer.[00101] Figure 5 is a functional block diagram in the form of a computing core 510 including a number of ports 511-1, 511-2, 511-3, 511-4 in accordance with a number of embodiments of the present disclosure. The computing core 510 can include a memory management unit (MMU) 520, a physical memory protection (PMP) unit 522, and a cache 524.[00102] The MMU 520 refers to a computer hardware component used for memory and caching operations associated with a processor. The MMU 520 can be responsible for memory management and be integrated into the processor, or, in some examples, can be on a separate integrated circuit (IC) chip. The MMU 520 can be used for hardware memory management, which can include overseeing and regulating the processor’s use of random access memory (RAM) and cache memory. The MMU 520 can be used for operating system (OS) memory management, which can ensure the availability of adequate memory resources for the objects and data structures of each running program. The MMU 520 can be used for application memory management, which can allocate each individual program’s required or used memory, and then recycle freed-up memory space when the operation concludes or the space becomes available. [00103] In one embodiment, the PMP unit 522 can be used to restrict access to memory and isolate processes from each other. The PMP unit 522 can be used to set memory access privileges (read, write, execute) for specified memory regions. The PMP unit 522 can support 8 regions with a minimum region size of 4 bytes. In some examples, the PMP unit 522 may only be programmed in M-mode. The PMP unit 522 may enforce permissions on U-mode accesses. However, locked regions may additionally enforce their permissions on M-mode.
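The region-based permission check performed by a PMP unit like the one described above can be sketched as follows. This is a behavioral model only: the region table encoding below is illustrative and is not the RISC-V PMP CSR format; it simply shows regions with read/write/execute bits and a deny-by-default rule.

```python
# Hypothetical sketch of a PMP-style check: each region grants a subset of
# read ("r"), write ("w"), and execute ("x") permissions for an address
# range; an access with no matching region is denied.

PMP_REGIONS = [
    # (start, end, permissions); end exclusive, sizes multiples of 4 bytes
    (0x0000, 0x1000, {"r", "x"}),   # code region: read + execute
    (0x1000, 0x2000, {"r", "w"}),   # data region: read + write
]

def pmp_allows(address, access):
    """Return True if some PMP region covers the address and grants the
    requested access type ('r', 'w', or 'x')."""
    for start, end, perms in PMP_REGIONS:
        if start <= address < end:
            return access in perms
    return False  # no matching region: deny the access
```

Under this model, a write to the data region succeeds, a write to the code region is refused, and any access outside the configured regions is refused.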
The cache 524 can be an SRAM cache, a 3D cross-point cache, etc. The cache 524 can include 8 KB, 16 KB, 32 KB, etc., and can include error correction coding (ECC).[00104] The computing core 510 can also include a plurality of ports including a memory port 511-1, a system port 511-2, a peripheral port 511-3, and a front port 511-4. The memory port 511-1 can be directly coupled to a communication subsystem (as illustrated in Figure 3) specifically designated to receive data from a memory port 511-1. The system port 511-2 can be directly coupled to a communication subsystem specifically designated to receive data from the system port 511-2. The data through the system port 511-2 can be transferred to an accelerator (e.g., an on-chip accelerator). The peripheral port 511-3 can be directly coupled to a communication subsystem specifically designated to receive data from the peripheral port 511-3, and this data can be eventually transferred to a serial port. The front port 511-4 can be directly coupled to a communication subsystem specifically designated to receive data from the front port 511-4, and this data can be eventually transferred to a host interface, and subsequently to a host. [00105] The computing core 510 can be a full-Linux-capable, cache-coherent 64-bit RISC-V processor. In some examples, the memory port 511-1, the system port 511-2, and the peripheral port 511-3 can be outgoing ports and the front port 511-4 can be an incoming port. An example of computing core 510 can include a U54-MC computing core. The computing core 510 can include an instruction memory system, an instruction fetch unit, an execution pipeline unit, a data memory system, and support for global, software, and timer interrupts. The instruction memory system can include a 16 Kibibyte (KiB) 2-way set-associative instruction cache. The access latency of all blocks in the instruction memory system can be one clock cycle.
The instruction cache may not be kept coherent with the rest of the platform memory system. Writes to the instruction memory may be synchronized with the instruction fetch stream by executing a FENCE.I instruction. The instruction cache can have a line size of 64 bytes, and a cache line fill can trigger a burst access outside the computing core 510.[00106] The instruction fetch unit can include branch prediction hardware to improve performance of the processor core. The branch predictor can include a 28-entry branch target buffer (BTB), which can predict a target of taken branches, a 512-entry branch history table (BHT), which can predict the direction of conditional branches, and a 6-entry return-address stack (RAS), which can predict a target of procedure returns. The branch predictor may have one-cycle latency, so that correctly predicted control-flow instructions result in no penalty. An incorrect prediction of control-flow instructions may incur a three-cycle penalty.[00107] The execution pipeline unit can be a single-issue, in-order pipeline. The pipeline can include five stages: instruction fetch, instruction decode and register fetch, execute, data memory access, and register writeback. The pipeline can have a peak execution rate of one instruction per clock cycle, and may be fully bypassed so that most instructions have a one-cycle result latency. The pipeline may interlock on read-after-write and write-after-write hazards, so instructions may be scheduled to avoid stalls.[00108] The data memory system can include a DTIM interface, which can support up to 8 KiB. The access latency from a core to its own DTIM may be two clock cycles for full words and three clock cycles for smaller quantities. Memory requests from one core to any other core’s DTIM may not be as performant as memory requests from a core to its own DTIM.
Misaligned accesses are not supported in hardware and may result in a trap to allow software emulation.[00109] In some embodiments, the computing core 510 can include a floating-point unit (FPU) which can provide full hardware support for the IEEE 754-2008 floating-point standard for 32-bit single-precision and 64-bit double-precision arithmetic. The FPU can include a fully pipelined fused-multiply-add unit and an iterative divide and square-root unit, magnitude comparators, and float-to-integer conversion units, with full hardware support for subnormals and IEEE default values.[00110] Figure 6 is a flow diagram representing an example method 628 corresponding to an extended memory interface in accordance with a number of embodiments of the present disclosure. At block 630, the method 628 can include receiving, at a processing unit that is coupled between a host device and a non-volatile memory device, signaling indicative of a plurality of operations to be performed on data written to or read from the non-volatile memory device. The plurality of operations can include extended memory operations as described above.[00111] At block 632, the method 628 can include performing, at the processing unit, at least one operation of the plurality of operations in response to the signaling. A computing device (such as computing device 110, 210, 310, 410 in Figures 1-4, respectively) can include the processing unit that performs the at least one operation. The operation can be performed using a block of data in response to receipt of the block of data, to reduce a size of the data from a first size to a second size, by at least one of the plurality of computing devices. The performance of the operation can be caused by a controller. The controller can be analogous to the controller 115, 215, 315 illustrated in Figures 1-3, herein. In some embodiments, performing the operation can include performing an extended memory operation, as described herein.
The method can further include performing, by the particular computing device, the operation in the absence of receipt of a host command from a host coupleable to the controller. In response to completion of performance of the operation, the method 628 can include sending a notification to a host coupleable to the controller. [00112] At block 634, the method 628 can include accessing, via a controller at the processing unit or non-volatile memory device, a portion of a memory array in the non-volatile memory device. The non-volatile memory device can be accessed by a memory controller and the memory controller can send the accessed data to a computing device, a hardware accelerator, etc. in order to perform one of the plurality of operations. The method 628 can further include causing, using an additional controller (e.g., memory controller), the blocks of data to be transferred from the memory device to a plurality of communication subsystems. The method 628 can further include allocating, via the plurality of communication subsystems, resources corresponding to respective computing devices among the plurality of computing devices to perform the operation on the block of data.[00114] At block 636, the method 628 can include transmitting, to a hardware accelerator, additional signaling indicative of a command to perform one or more additional operations of the plurality of operations on the data written to or read from the non-volatile memory device. 
For example, signaling indicative of a first operation can be sent to a first hardware accelerator, signaling indicative of a second operation can be sent to a second hardware accelerator, etc.[00115] In some embodiments, the command to initiate performance of the operation can include an address corresponding to a location in the memory array of the particular computing device and the method 628 can include storing a result of the operation in the address corresponding to the location in the particular computing device. For example, the method 628 can include storing a result of the operation in the address corresponding to the memory location in the particular computing device in which the operand corresponding to performance of the operation was stored prior to performance of the extended memory operation. That is, in some embodiments, a result of the operation can be stored in the same address location of the computing device in which the data that was used as an operand for the operation was stored prior to performance of the operation.[00116] In some embodiments, the method 628 can include determining, by the controller, that the operand corresponding to performance of the operation is not stored by the particular computing device. In response to such a determination, the method 628 can further include determining, by the controller, that the operand corresponding to performance of the operation is stored in a memory device coupled to the plurality of computing devices. The method 628 can further include retrieving the operand corresponding to performance of the operation from the memory device, causing the operand corresponding to performance of the operation to be stored in at least one computing device among the plurality of computing devices, and/or causing performance of the operation using the at least one computing device. 
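Paragraphs [00115]-[00116] describe two behaviors that compose naturally: if the operand is not held locally it is retrieved from the memory device, and the result is stored back at the same address that held the operand. A minimal Python sketch of that control flow, with dicts standing in for memories (all names are illustrative, not from the disclosure):

```python
# Illustrative sketch of [00115]-[00116]: locate the operand, retrieve it from
# the memory device if the computing device does not hold it, perform the
# operation, and store the result at the operand's own address.

def run_extended_op(compute_mem, backing_mem, address, op):
    """compute_mem models a computing device's memory; backing_mem models the
    shared memory device; op maps operand -> result."""
    if address not in compute_mem:                   # operand not stored locally
        compute_mem[address] = backing_mem[address]  # retrieve from memory device
    result = op(compute_mem[address])
    compute_mem[address] = result                    # overwrite the operand's location
    return result

backing = {0x10: 7}
local = {}
print(run_extended_op(local, backing, 0x10, lambda x: x * x))
print(local[0x10])  # the result now occupies the operand's former address
```

Storing the result where the operand lived means the command needs to carry only one address, which matches the single-address command described in [00115].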
The memory device can be analogous to the memory devices 116 illustrated in Figure 1.[00117] The method 628 can, in some embodiments, further include determining that at least one sub-operation is to be performed as part of the operation, sending a command to a computing device different than the particular computing device to cause performance of the sub-operation, and/or performing, using the computing device different than the particular computing device, the sub-operation as part of performance of the operation. For example, in some embodiments, a determination that the operation is to be broken into multiple sub-operations can be made and the controller can cause different computing devices to perform different sub-operations as part of performing the operation. In some embodiments, the controller can, in concert with the first and second pluralities of communication subsystems, such as 108, 106, 208, 206, 308, 306, and 408, 406 illustrated in Figures 1-4, herein, assign sub-operations to two or more of the computing devices as part of performance of the operation and/or to two or more of the hardware accelerators.[00118] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. 
Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.[00119] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
Methods are provided for fabricating DEMOS devices having varied channel lengths and substantially similar threshold voltages. A threshold voltage is selected for first and second devices. First and second well regions are formed. First and second drain extension regions are formed within the well regions. First and second back gate regions are formed within the well regions according to the selected threshold voltage. First and second gate structures are formed over the first and second well regions having varied channel lengths. A first source region is formed in the first back gate region and a first drain region is formed in the first drain extension region. A second source region is formed in the second back gate region and a second drain region is formed in the second drain extension region.
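The idea in the abstract can be put numerically: as elaborated later in the description, the channel length decomposes into the drain extension length X, the gap length G, and the back gate length S, and only S (together with the back gate doping) is held fixed across devices. A minimal Python sketch of that bookkeeping, with purely illustrative dimensions:

```python
# Illustrative sketch (not from the disclosure): channel length as the sum of
# drain extension length X, gap length G, and back gate length S. Holding S
# fixed while growing G yields a longer channel with a substantially similar
# threshold voltage. All dimensions below are made-up example values in um.

def channel_length(x, g, s):
    return x + g + s

S_FIXED = 0.5  # back gate length chosen for the target threshold voltage

short_device = channel_length(x=0.3, g=0.2, s=S_FIXED)  # shorter channel
long_device = channel_length(x=0.5, g=0.8, s=S_FIXED)   # longer channel, same S
print(short_device, long_device)
```

Because both devices share the same S and back gate doping, their threshold voltages track each other even though their channel lengths differ.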
1. A method of manufacturing a drain-extended semiconductor device, the method comprising:
forming a first well region within a first region of a semiconductor body designed for a device having a first channel length;
forming a second well region within a second region of the semiconductor body designed for a device having a second channel length;
forming a back gate well region in each of the first and second regions according to a common threshold voltage, wherein the back gate well regions formed in the first and second regions have equal back gate lengths and doping concentrations;
forming a first drain extension region within the first region;
forming a second drain extension region within the second region;
forming a first gate structure in the first region according to the first channel length;
forming a second gate structure in the second region according to the second channel length;
forming a first drain region within the first drain extension region;
forming a second drain region within the second drain extension region;
forming a first source region within the back gate well region within the first region; and
forming a second source region within the associated back gate well region within the second region.
2. A method of manufacturing a symmetrical drain-extended semiconductor device, the method comprising:
forming a first well region and a second well region in a semiconductor body;
forming a first symmetrical drain extension region in the first well region according to a first channel length;
forming a second symmetrical drain extension region in the second well region according to a second channel length;
forming a first back gate region between the first symmetrical drain extension regions in the first well region according to a threshold voltage;
forming a second back gate region between the second symmetrical drain extension regions in the second well region according to the threshold voltage;
forming a first gate structure over the first well region, the first gate structure defining 
a first channel region having the first channel length; and
forming a second gate structure over the second well region, the second gate structure defining a second channel region having the second channel length.
3. The method according to claim 2, further comprising forming a first source/drain region in the first symmetrical drain extension region; and forming a second source/drain region in the second symmetrical drain extension region.
4. The method according to claim 2 or 3, wherein the first channel length is larger than the second channel length.
5. The method according to claim 2 or 3, wherein the first and second well regions formed have p-type conductivity and the first and second back gate regions formed have p-type conductivity.
6. A method of manufacturing a drain-extended semiconductor device, the method comprising:
selecting a threshold voltage and a channel length;
forming a well region in a semiconductor body;
forming a drain extension region in the well region;
selecting a back gate doping concentration and length according to the selected threshold voltage;
forming a back gate region in the well region according to the selected back gate doping concentration and length providing the selected threshold voltage;
forming a gate structure above the well region defining the channel length;
forming a drain region within the drain extension region; and
forming a source region in the back gate region.
7. The method of claim 6, further comprising forming the drain extension region according to the channel length and independently of the threshold voltage.
8. The method according to claim 6 or 7, further comprising:
selecting a second channel length;
forming a second well region in the semiconductor body;
forming a second drain extension region in the second well region;
forming a second back gate region in the second well region according to the selected back gate doping concentration and length providing the selected threshold voltage;
forming a second gate structure above the second 
well region having the second channel length;
forming a second drain region within the second drain extension region; and
forming a second source region in the second back gate region.
9. The method of claim 8, further comprising:
selecting a third channel length;
forming a third well region in the semiconductor body;
forming a third drain extension region within the third well region;
forming a third back gate region in the third well region according to the selected back gate doping concentration and length providing the selected threshold voltage;
forming a third gate structure above the third well region having the third channel length;
forming a third drain region within the third drain extension region; and
forming a third source region in the third back gate region.
Method for Improving Performance of HVMOS Devices

Technical Field
【0001】The present invention relates generally to semiconductor devices and, more particularly, to uniform threshold voltages for drain-extended MOS transistors of various channel lengths and methods of manufacturing the same.

Background
【0002】Many integrated circuit devices include digital circuits composed of metal-oxide-semiconductor (MOS) transistor devices that are constructed using optimized complementary metal-oxide-semiconductor (CMOS) manufacturing processes to form high-density, high-speed N-channel and P-channel MOS transistors. Such high-density circuits are commonly used in modern consumer electronics products, such as wireless communication devices and portable computers, in which the digital circuits are powered by batteries.【0003】Many devices require MOS devices operable for both low-voltage and high-voltage applications. For example, logic operations typically use low-voltage MOS devices (for example, at a voltage of about 1.8 V), while power supply operations typically require high-voltage MOS devices (for example, at a voltage greater than 6 V). MOS devices for low-voltage and high-voltage applications can be, and often are, implemented on a single die or integrated circuit to save space and manufacturing cost.【0004】One type of MOS transistor device used in semiconductor devices is the N- or P-channel drain-extended metal-oxide-semiconductor (DEMOS) transistor device. DEMOS devices are commonly used in applications such as power conversion circuits. A DEMOS device uses a drain extension region, which substantially increases the operating voltage of the device. Some examples of DEMOS devices include the laterally diffused MOS (LDMOS) transistor, the reduced surface field (RESURF) transistor, and the like. 
DEMOS devices advantageously combine short-channel operation with high current-handling capability, a relatively low drain-source on-resistance (Rdson), and the ability to withstand relatively high drain-source voltages without breakdown, where the design of DEMOS devices usually involves a tradeoff between breakdown voltage (BVdss) and Rdson. In addition to their performance advantages, the manufacture of DEMOS devices is relatively easy to integrate into a CMOS process flow, thereby facilitating the use of logic, low-power analog, or other circuits constructed on the same integrated circuit (IC).【0005】One type of DEMOS transistor device commonly used in high-voltage applications is the high-voltage MOS (HVMOS) transistor device. In addition to the drain extension region, an HVMOS device includes a thicker dielectric layer and a back gate region. HVMOS devices can be manufactured together with low-voltage CMOS devices, and the N and P wells of the low-voltage CMOS devices can be used as back gate regions and/or drain extension regions. This can save space and cost during manufacturing, but it can also cause the HVMOS devices to have varying channel lengths. The threshold voltage of an HVMOS device is usually a function of the channel length, so this also causes the HVMOS devices to have varying threshold voltages. Such varying threshold voltages can make it difficult to implement memory operations such as programming and reading.

Summary of the Invention
【0006】Aspects of the invention facilitate manufacture of drain-extended semiconductor devices. A fixed back gate length, commonly referred to as POLY (polysilicon) overlap, is used so that devices with varying channel lengths have substantially similar threshold voltage values. The gap length value is the distance between the back gate region and the drain extension region, and can be increased to obtain a larger channel length. 
The threshold voltage can then be selected at the minimum channel length value, or an approximate minimum channel length value, and that threshold voltage can also be obtained for other, larger channel lengths.【0007】The present invention provides methods of manufacturing DEMOS devices having varying channel lengths and substantially similar threshold voltages. A threshold voltage is selected for first and second devices. First and second well regions are formed. First and second drain extension regions are formed in the well regions. First and second back gate regions are formed in the well regions according to the selected threshold voltage. First and second gate structures are formed over the first and second well regions having varying channel lengths. A first source region is formed in the first back gate region, and a first drain region is formed in the first drain extension region. A second source region is formed in the second back gate region, and a second drain region is formed in the second drain extension region. Other systems and methods are disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS
【0008】FIGS. 1A and 1B are cross-sectional views of conventional HVMOS transistor devices with varying channel lengths and varying threshold voltages.【0009】FIGS. 2A and 2B depict first and second asymmetric HVMOS transistor devices having varying channel lengths but substantially similar threshold voltages according to an aspect of the present invention.【0010】FIGS. 3A and 3B depict first and second symmetric HVMOS transistor devices having varying channel lengths but similar threshold voltages in accordance with an aspect of the present invention.【0011】FIG. 4 is a flowchart illustrating a method of manufacturing HVMOS transistor devices having varying channel lengths but substantially similar threshold voltages according to an aspect of the present invention.【0012】FIG. 
5 is a flowchart describing a method of manufacturing symmetric HVMOS transistor devices having varying channel lengths but substantially similar threshold voltages according to an aspect of the present invention.

Detailed Description
【0013】Aspects of the invention include methods of manufacturing drain-extended MOS (DEMOS) transistor devices with varying channel lengths and similar threshold voltages. A fixed back gate length, also referred to as POLY overlap, is used for devices with varying channel lengths so that they have substantially similar threshold voltage values. The gap length value is the distance between the back gate region and the drain extension region, and can be increased to obtain a larger channel length while keeping the back gate length constant. The threshold voltage can then be selected at the minimum channel length value, or an approximate minimum channel length value, and that threshold voltage can also be obtained for other, larger channel lengths.【0014】FIGS. 1A and 1B are cross-sectional views of conventional high-voltage DEMOS (HVMOS) transistor devices with varying channel lengths and varying threshold voltages. FIG. 1A illustrates a first device having a channel length L1. A p-well region 104 is formed and/or exists on the semiconductor body or substrate 102. The p-well region 104 typically has a relatively low doping concentration. The p-well region may be an epitaxial layer or another layer having p-type conductivity.【0015】The drain extension region 106 is formed in the p-well region 104 and has the opposite conductivity type. In this example, the drain extension region 106 has n-type conductivity. A back gate region 108 is also formed in the p-well region 104. The back gate region 108 has the same conductivity type as the p-well region 104, but generally has a higher doping concentration. 
In this example, the back gate region 108 is of p-type conductivity.【0016】Isolation structures 110, such as shallow trench isolation (STI) structures, LOCOS, etc., exist to isolate individual transistor devices. Generally, these isolation structures are formed before the p-well 104 or the drain extension region 106 is formed.【0017】The source region 112 is formed in the back gate region 108. The source region 112 has conductivity opposite to that of the p-well region 104, in this example n-type conductivity. The drain region 114 is formed in the drain extension region 106. The drain region 114 also has conductivity opposite to that of the p-well region 104, in this example n-type conductivity. The drain region 114 has the same conductivity type as the drain extension region 106, but has a higher doping concentration.【0018】The gate structure includes a gate dielectric layer 116, sidewall spacers 120, and a gate 118. The gate structure is formed above the p-well region 104. Generally, the gate structure is formed before the source region 112 and the drain region 114 are formed. Typically, a gate dielectric layer 116 is formed on the p-well region 104, and a gate layer 118 is formed on the gate dielectric layer 116. Subsequently, the gate dielectric layer 116 and the gate layer 118 are patterned, and sidewall spacers 120 are formed.【0019】As recognized by the inventors, the channel length L1 and the resulting threshold voltage of the first device depend on the drain extension length X1, the gap region length G1, and the back gate length S1 (also referred to as POLY overlap). The drain extension length X1 is the length from one side of the drain extension region 106 to a first side of the gate 118, wherein the first side is above the drain extension region 106. The gap region length G1 is the length from that side of the drain extension region 106 to a side of the back gate region 108. 
The back gate length S1 is the length from that side of the back gate region to a second side of the gate 118, wherein the second side is located above the back gate region 108.【0020】FIG. 1B illustrates a second HVMOS device having a channel length L2, which is longer than the channel length L1 of the first device. As a result, the threshold voltage of the second device is changed relative to the threshold voltage of the first device. The second device is constructed and formed in a manner similar to the first device of FIG. 1A. Therefore, some repeated description is omitted; more details can be obtained by referring to the discussion of FIG. 1A above.【0021】A p-well region 104 is formed and/or exists on the semiconductor body or substrate 102. The p-well region 104 typically has a relatively low doping concentration. The drain extension region 106 is formed in the p-well region 104 and has the opposite conductivity type. In this example, the drain extension region 106 has n-type conductivity. A back gate region 108 is also formed in the p-well region 104. The back gate region 108 has the same conductivity type as the p-well region 104, but generally has a higher doping concentration. In this example, the back gate region 108 is of p-type conductivity.【0022】Isolation structures 110, such as shallow trench isolation (STI) structures, LOCOS (local oxidation of silicon) structures, etc., exist to isolate individual transistor devices. The source region 112 is formed in the back gate region 108. The source region 112 has conductivity opposite to that of the p-well region 104, in this example n-type conductivity. The drain region 114 is formed in the drain extension region 106. The drain region 114 also has conductivity opposite to that of the p-well region 104, in this example n-type conductivity. 
The drain region 114 has the same conductivity type as the drain extension region 106, but has a higher doping concentration.【0023】The gate structure includes a gate dielectric layer 116, sidewall spacers 120, and a gate 118. The gate structure is formed above the p-well region 104. Generally, the gate structure is formed before the source region 112 and the drain region 114 are formed.【0024】As recognized by the inventors, the channel length L2 and the resulting threshold voltage of the second device depend on the drain extension length X2, the gap region length G2, and the back gate length S2 (also referred to as POLY overlap). The drain extension length X2 is the length from one side of the drain extension region 106 to a first side of the gate 118, where the first side is above the drain extension region 106. The gap region length G2 is the length from that side of the drain extension region 106 to a side of the back gate region 108. The back gate length S2 is the length from that side of the back gate region to a second side of the gate 118, wherein the second side is located above the back gate region 108.【0025】A significant drop in threshold voltage occurs in symmetrical and asymmetrical DEMOS devices, such as the first and second devices of FIGS. 1A and 1B. The decrease in the threshold voltage is a function of the channel length; the threshold voltage of a long-channel drain-extended device is therefore higher than the threshold voltage of a short-channel device. This may be due at least in part to restricted diffusion of dopant from the back gate region or well.【0026】The inventors recognized that the channel length L2 is composed of the drain extension length X2, the gap region length G2, and the back gate length S2. With the same doping type and concentration, increasing the three above-mentioned lengths X2, G2, and S2 will cause the threshold voltage of the second device to increase. 
However, the inventors noticed that the back gate length S2 has a more significant influence on the threshold voltage than the drain extension length X2 and the gap region length G2. The gap region is more lightly doped than the back gate region, so its impact on the threshold voltage is minimal. Thus, aspects of the present invention include manufacturing symmetrical and asymmetrical DEMOS transistor devices that, by maintaining similar or substantially similar back gate lengths, have varying channel lengths but substantially similar threshold voltages. In addition, shorter minimum channel lengths can be used for various DEMOS devices by using substantially similar back gate lengths.【0027】It should be noted that FIGS. 1A and 1B depict NMOS devices, but conventional PMOS devices also exhibit the problems pointed out above.【0028】FIGS. 2A and 2B depict first and second asymmetric HVMOS transistor devices having varying channel lengths but substantially similar threshold voltages according to an aspect of the invention. A method of forming such devices is provided below. The first HVMOS transistor device is depicted in FIG. 2A. The first device has a channel length L1, which in this example is approximately the minimum channel length Lmin.【0029】A well region 204 having a first conductivity type is formed and/or exists on a semiconductor body or substrate 202. The well region 204 typically has a relatively low doping concentration. The well region may also be an epitaxial layer or another layer having the first conductivity type, n-type or p-type.【0030】A drain extension region 206 is formed in the well region 204 and has a second conductivity type opposite to the conductivity type of the well region 204. A back gate region 208 is also formed in the well region 204 and has the same conductivity type as the well region 204, but generally has a higher doping concentration. 
The back gate region 208 has a back gate length S1 and a doping concentration selected according to the desired and/or selected threshold voltage of the device.【0031】An isolation structure 210 exists to isolate individual transistor devices. The isolation structure 210 may be a local oxidation of silicon (LOCOS) structure, a shallow trench isolation (STI) region, or another suitable integrated circuit isolation scheme. Generally, these isolation structures are formed before the well region 204 or the drain extension region 206 is formed.【0032】A source region 212 is formed in the back gate region 208. The source region 212 has a conductivity opposite to that of the well region 204, that is, the second conductivity type. The drain region 214 is formed in the drain extension region 206. The drain region 214 also has conductivity opposite to that of the well region 204. The drain region 214 has the same conductivity type as the drain extension region 206, but has a higher doping concentration.【0033】The gate structure includes a gate dielectric layer 216, sidewall spacers 220, and a gate 218, and is formed over the well region 204. Generally, the gate structure is formed before the source region 212 and the drain region 214 are formed. Typically, a gate dielectric layer 216 is formed on the well region 204 and a gate layer 218, such as polysilicon, is formed on the gate dielectric layer 216. Subsequently, the gate dielectric layer 216 and the gate layer 218 are patterned, and sidewall spacers 220 are formed.【0034】As recognized by the inventors, the threshold voltage of the first device depends substantially on the back gate region, particularly the back gate length S1 and the doping concentration of the back gate region. The drain extension length X1 is the length from one side of the drain extension region 206 to a first side of the gate 218, where the first side is above the drain extension region 206. 
The gap region length G1 is the length from that side of the drain extension region 206 to one side of the back gate region 208. The back gate length S1 is the length from this side of the back gate region to a second side of the gate 218, wherein the second side is located above the back gate region 208.【0035】The second HVMOS transistor device is depicted in FIG. 2B. The second device has a channel length L2, which in this example is greater than the channel length L1 of the device of FIG. 2A. The second device is similar to the first device, so some descriptions are omitted here; for other details, refer to the description of FIG. 2A above.【0036】A well region 204 having a first conductivity type is formed and/or exists on a semiconductor body or substrate 202. The well region 204 typically has a relatively low doping concentration. The drain extension region 206 is formed in the well region 204 and has a second conductivity type opposite to the conductivity type of the well region 204.【0037】The back gate region 208 is formed in the well region 204 and has the same conductivity type as the well region 204, but generally has a higher doping concentration. The back gate region 208 has a selected back gate length S2 and a doping concentration substantially equal to the doping concentration of the first device. Therefore, the threshold voltage of the second HVMOS device is approximately equal to the threshold voltage of the first device of FIG. 2A.【0038】An isolation structure 210 exists to isolate individual transistor devices. The isolation structure 210 may be a local oxidation of silicon (LOCOS) structure, a shallow trench isolation (STI) region, or another suitable integrated circuit isolation scheme. Generally, these isolation structures are formed before the well region 204 or the drain extension region 206 is formed.【0039】A source region 212 is formed in the back gate region 208. 
The source region 212 has a conductivity opposite to that of the well region 204, that is, the second conductivity type. The drain region 214 is formed in the drain extension region 206. The drain region 214 also has conductivity opposite to that of the well region 204. The drain region 214 has the same conductivity type as the drain extension region 206, but has a higher doping concentration.【0040】The gate structure includes a gate dielectric layer 216, sidewall spacers 220, and a gate 218, and is formed over the well region 204. Generally, the gate structure is formed before the source region 212 and the drain region 214 are formed. Typically, a gate dielectric layer 216 is formed on the well region 204 and a gate layer 218 is formed on the gate dielectric layer 216. Subsequently, the gate dielectric layer 216 and the gate layer 218 are patterned, and sidewall spacers 220 are formed.【0041】As recognized by the inventors, the threshold voltage of the second device depends substantially on the back gate region, particularly the back gate length S2 (POLY overlap) and the doping concentration of the back gate region 208. In this example, the back gate length S2 and the doping concentration are approximately equal to the back gate length S1 and the doping concentration of the first HVMOS transistor device. The drain extension length X2 is the length from one side of the drain extension region 206 to a first side of the gate 218, where the first side is above the drain extension region 206. The drain extension length X2 is greater than the length X1 of FIG. 2A, but this length increase does not significantly affect or change the threshold voltage. The gap region length G2 is the length from one side of the drain extension region 206 to one side of the back gate region 208. The gap region length G2 is also greater than the gap region length of FIG. 
2A, but this length increase does not significantly affect or change the threshold voltage of the second HVMOS transistor device. As mentioned earlier, the notch region is lightly doped relative to the back gate region (of length S2) and thus has a smaller effect on the threshold voltage. Generally, G2 is selected to be larger in order to increase the channel length without changing the threshold voltage.【0042】Therefore, the threshold voltage of the second device is substantially equal to the threshold voltage of the first device in FIG. 2A, although the channel length L2 of the former is greater than the channel length L1 of the latter.【0043】It should be noted that the back gate lengths S1 and S2 shown in FIGS. 2A and 2B are equal at the time of formation, but they may change after diffusion and/or other processing. These changes are not shown in FIGS. 2A and 2B to facilitate a better understanding of the invention.【0044】Further, it should be understood that aspects of the present invention include DEMOS devices and are not limited to HVMOS devices.【0045】FIGS. 3A and 3B depict first and second symmetric HVMOS transistor devices having varying channel lengths but similar threshold voltages in accordance with an aspect of the present invention. Symmetric transistor devices have source and drain regions that are indistinguishable from each other. Methods of forming these devices are provided below. A first symmetric HVMOS transistor device is described in FIG. 3A. The first device has a channel length L1, which in this example is approximately the minimum channel length Lmin.【0046】A well region 304 having a first type of conductivity is formed and/or exists on a semiconductor body or substrate 302. The well region 304 typically has a relatively low doping concentration.
The well region may also be an epitaxial layer or another layer having the first type of conductivity, n-type or p-type.【0047】First and second drain extension regions 306 and 308 are formed in the well region 304. The first and second drain extension regions 306 and 308 are symmetrical and have a second conductivity type that is opposite to the conductivity type of the well region 304. A back gate region 322 is also formed in the well region 304 between the first and second drain extension regions 306 and 308. The back gate region has the same conductivity type as the well region 304, but typically has a higher doping concentration. The back gate region 322 has a back gate length S1 and a doping concentration selected according to an expected and/or selected threshold voltage of the device.【0048】The isolation structure 310 exists to isolate individual transistor devices. The isolation structure 310 may be a local oxidation structure (LOCOS), a shallow trench isolation region (STI), or other suitable integrated circuit isolation schemes. Generally, these isolation structures are formed before the well region 304 or the drain extension regions 306 and 308 are formed.【0049】A first source/drain region 314 is formed in the first drain extension region 306. The first source/drain region 314 has a second type of conductivity that is opposite to the conductivity type of the well region 304. A second source/drain region 312 is formed in the second drain extension region 308. The second source/drain region 312 has a second type of conductivity that is opposite to the conductivity type of the well region 304. The first source/drain region 314 and the second source/drain region 312 are symmetrical.【0050】The gate structure includes a gate dielectric layer 316, a sidewall 320, and a gate 318, and the gate structure is formed over the well region 304. Generally, the gate structure is formed before the source region 312 and the drain region 314 are formed.
Typically, a gate dielectric layer 316 is formed on the well region 304 and a gate layer 318 is formed on the gate dielectric layer 316. Subsequently, the gate dielectric layer 316 and the gate layer 318 are patterned, and a sidewall isolation region 320 is formed.【0051】According to the inventor of the present invention, the threshold voltage of the first device basically depends on the back gate region, particularly the back gate length S1 and the doping concentration of the back gate region. The drain extension length X1 is a length from a side of the drain extension region 306 to a first side of the gate 320, wherein the first side is above the drain extension region 306. The notch region length G1 is a length from one side of the drain extension region 306 to one side of the back gate region 322. The back gate length S1 is a length from the first side of the back gate region 322 to the second side of the back gate region 322. The total channel length L1 is equal to 2 * X1 + 2 * G1 + S1.【0052】A second symmetric HVMOS transistor device is described in FIG. 3B. The second device has a channel length L2, which in this example is greater than the channel length L1 of FIG. 3A. This second device is similar to the first device, so some descriptions are omitted here. For more details, please refer to the description of FIG. 3A above.【0053】A well region 304 having a first type of conductivity is formed and/or exists on a semiconductor body or substrate 302. The well region 304 typically has a relatively low doping concentration.【0054】First and second drain extension regions 306 and 308 are formed in the well region 304. The first and second drain extension regions 306 and 308 are symmetrical and have a second conductivity type that is opposite to the conductivity type of the well region 304. A back gate region 322 is also formed in the well region 304 between the first and second drain extension regions 306 and 308.
The back gate region has the same conductivity type as the well region 304, but typically has a higher doping concentration. The back gate region 322 has a back gate length S2 and a doping concentration selected according to an expected and/or selected threshold voltage of the device.【0055】The isolation structure 310 exists to isolate individual transistor devices. The isolation structure 310 may be a local oxidation structure (LOCOS), a shallow trench isolation region (STI), or other suitable integrated circuit isolation schemes. Generally, these isolation structures are formed before the well region 304 or the drain extension regions 306 and 308 are formed.【0056】A first source/drain region 314 is formed in the first drain extension region 306. The first source/drain region 314 has a second type of conductivity that is opposite to the conductivity type of the well region 304. A second source/drain region 312 is formed in the second drain extension region 308. The second source/drain region 312 has a second type of conductivity that is opposite to the conductivity type of the well region 304. The first source/drain region 314 and the second source/drain region 312 are symmetrical.【0057】The gate structure includes a gate dielectric layer 316, a sidewall 320, and a gate 318, and the gate structure is formed over the well region 304. Generally, the gate structure is formed before the source region 312 and the drain region 314 are formed.【0058】According to the inventor of the present invention, the threshold voltage of the second device basically depends on the back gate region 322, especially the back gate length S2 and the doping concentration of the back gate region. The drain extension length X2 is a length from one side of the drain extension region 306 to the first side of the gate 320, wherein the first side is above the drain extension region 306.
The notch region length G2 is a length from one side of the drain extension region 306 to one side of the back gate region 322. The back gate length S2 is a length from the first side of the back gate region 322 to the second side of the back gate region 322. The total channel length L2 is equal to 2 * X2 + 2 * G2 + S2.【0059】Therefore, the threshold voltage of the second device is substantially equal to the threshold voltage of the first device in FIG. 3A, although the channel length L2 of the former is greater than the channel length L1 of the latter.【0060】The first and second devices are merely examples provided to facilitate a better understanding of aspects of the present invention. In addition, it should be noted that the back gate lengths S1 and S2 shown in FIGS. 3A and 3B are equal at the time of formation, but may change somewhat in length after diffusion and/or other processing. These changes are not shown in FIGS. 3A and 3B to facilitate a better understanding of the invention.【0061】Further, it should be understood that aspects of the present invention include DEMOS devices and are not limited to HVMOS devices.【0062】FIG. 4 is a flowchart illustrating a method 400 of manufacturing DEMOS or HVMOS transistor devices with varying channel lengths but similar threshold voltages, according to an aspect of the invention. Refer to FIGS. 2A and 2B shown above for more details. The method 400 forms first and second asymmetric HVMOS transistor devices with varying channel lengths but similar threshold voltages.【0063】Meanwhile, to simplify the description, the method 400 is described as being performed sequentially. It should be understood and appreciated that the invention is not limited to the illustrated order, as some aspects of the invention may occur in a different order and/or concurrently with other aspects introduced and described herein. Moreover, according to one aspect of the invention, not all illustrated features are required to implement a method.【0064】The method 400 begins at block 402, where a semiconductor substrate or body is provided. The semiconductor body is made of a semiconductor material such as silicon. The semiconductor substrate or body is typically a wafer and may be doped or undoped.【0065】At block 404, an isolation structure is formed on the substrate. The isolation structure is used to electrically isolate individual transistors on the device. The isolation structure may be a local oxidation structure (LOCOS), a shallow trench isolation region (STI), or other suitable integrated circuit isolation schemes. A LOCOS structure is formed by first depositing an oxide film and a nitride film, which are then patterned and etched to expose the areas of the substrate requiring an isolation structure. After that, the substrate is oxidized to form the isolation structure. An STI structure is formed by first etching a trench in the substrate, which is then filled with an insulating material such as silicon dioxide, silicon nitride, or the like.【0066】At block 406, a well region composed of first and second well regions is formed within the semiconductor body. In one example, n-type or p-type dopants are respectively implanted into the semiconductor body to form n-well and p-well regions. In another example, the semiconductor body has already been appropriately doped with the desired dopant and concentration and can itself be used as a well region. These well regions have a first conductivity type, such as n-type or p-type. In one example, the p-type well is formed as an epitaxial layer with a concentration of about 5E14 to about 11E15 per cubic centimeter. Well regions can also be formed in accordance with the present invention using other suitable processes.【0067】At block 408, a first drain extension region is formed in the first well region according to the first channel length L1.
The first drain extension region has a second conductivity type opposite to the first conductivity type, and partially defines the first drain extension length X1. At block 410, a second drain extension region is formed in the second well region according to the second channel length L2, which may vary relative to the length L1. The second drain extension region partially defines a second drain extension length X2.【0068】The drain extension regions are formed by implanting a selected dopant at a relatively low dose and low energy. The first and second drain extension regions are formed with selected doses and energies to produce an expected doping concentration that is less than that of the subsequently formed source and drain regions, so that the drain extension regions deplete as the drain voltage increases.【0069】At block 412, a first back gate region is formed according to the first channel length L1 and a selected threshold voltage. The first back gate region is formed with a back gate length S1 and a doping concentration that produce the selected threshold voltage. In one example, the back gate region is formed by implanting boron at a dose of about 0.5E12 to about 1.0E13 and an energy of about 30 to about 90 keV. The back gate region may also be formed using other suitable processes.【0070】The first back gate region defines a back gate length S1 and a notch region length G1. G1 is a distance between one side of the first back gate region and the first drain extension region. At block 414, a second back gate region is formed according to the second channel length and the selected threshold voltage. The formed first and second back gate regions each have a length and a doping concentration that produce the selected threshold voltage.
The second back gate region also defines a second back gate length S2 and a second notch region length G2, where G2 is a distance between one side of the second back gate region and one side of the second drain extension region. In some examples, the first back gate length S1 and the second back gate length S2 are almost equal when formed, as are the doping concentrations or doses used in their formation. In other examples, the first back gate length S1 and the second back gate length S2 may be varied and/or the doping concentrations may be varied to obtain a selected threshold voltage. In addition, it should also be understood that in other aspects of the invention, the first back gate length S1 and the second back gate length S2 may be varied and/or the doping concentrations may be varied to obtain varying threshold voltages.【0071】It should be understood that the length of the notch region can be increased without substantially affecting the threshold voltage. Generally, the first notch region length G1 and the second notch region length G2 are selected according to the first and second channel lengths, respectively.【0072】At block 416, a first gate structure is formed over the first well region and includes a gate dielectric layer, a gate electrode layer, and sidewall isolation regions. The first gate structure defines a first channel length L1 and is also used to define a first notch region length G1 and a first drain extension length X1. At block 418, a second gate structure is formed over the second well region and also includes a gate dielectric layer, a gate electrode layer, and a plurality of sidewall isolation regions. The second gate structure differs in length from the first gate structure and defines a second channel length L2.
In addition, the gate electrode of the second gate structure is also used to define a second notch region length G2 and a second drain extension length X2.【0073】At block 420, a first source region is formed in the first back gate region and a first drain region is formed in the first drain extension region. At block 422, a second source region is formed in the second back gate region and a second drain region is formed in the second drain extension region.【0074】Other processes may also be performed, such as thermal processes. For example, a thermal anneal can be performed to activate the implanted dopants in the source/drain regions. In one example, a suitable anneal may be performed at a temperature of about 1050 degrees Celsius to about 1100 degrees Celsius for a duration of about 300 to about 600 minutes. In addition, silicide regions can be formed on the gate structure and on the source/drain regions. For example, a suitable silicide region may be composed of cobalt (Co), titanium (Ti), or the like. Generally, a silicide region is formed by masking and depositing a silicide metal (such as Co, Ti, etc.) on the first gate layer. The silicide process is then performed, causing the silicide material to react with the underlying material (such as silicon), thereby forming a silicide region. In addition, a thermal process or anneal is usually performed. The silicide region typically provides lower contact resistance to the first gate layer.【0075】Subsequently, an interlayer dielectric layer or other insulating layer may be formed and contacts may be selectively formed therein. After that, other layers, including a protective layer and metallization layers, may be formed to complete the manufacturing of the device.【0076】After fabrication, the resulting back gate length (poly overlap) can be changed relative to its original length when formed.
In addition, the resulting back gate lengths can vary from each other or be approximately equal to each other. Diffusion and/or other manufacturing processes may cause a slight change in the back gate length compared to the implanted length. However, even if there is a change, the electrical characteristics of the two regions can be maintained. In addition, it should be noted that the initial lengths at the time of formation can be selected to produce similar back gate lengths when fabrication is complete.【0077】Although the above method is described with respect to the first and second devices, the method also includes forming a plurality of devices in a region having a first channel length and forming a plurality of devices in other regions having a second channel length. In addition, it should be understood that the method 400 can be extended to multiple devices with varying channel lengths but with fixed or constant back gate lengths (also commonly referred to as poly overlap). For example, the method 400 may be used to form third devices having different channel lengths but the same back gate length.【0078】FIG. 5 is a flowchart describing a method 500 of manufacturing symmetric HVMOS transistor devices having varying channel lengths and substantially similar threshold voltages in accordance with an aspect of the present invention. Refer to FIGS. 3A and 3B shown above for more details. The method 500 forms first and second symmetrical HVMOS transistor devices with varying channel lengths but similar threshold voltages.【0079】Meanwhile, to simplify the description, the method 500 is described as being performed sequentially. It should be understood and appreciated that the invention is not limited to the illustrated order, as some aspects of the invention may occur in a different order and/or concurrently with other aspects introduced and described herein.
Moreover, in accordance with an aspect of the invention, not all illustrated features are required to implement a method.【0080】The method 500 begins at block 502, where a semiconductor substrate or body is provided. The semiconductor body is made of a semiconductor material such as silicon. The semiconductor substrate or body is typically a wafer and may be doped or undoped.【0081】At block 504, an isolation structure is formed on the substrate. The isolation structure is used to electrically isolate individual transistors on the device. The isolation structure may be a local oxidation structure (LOCOS), a shallow trench isolation region (STI), or other suitable integrated circuit isolation schemes.【0082】At block 506, a well region composed of first and second well regions is formed within the semiconductor body. These well regions have a first conductivity type, such as n-type or p-type.【0083】At block 508, first symmetrical drain extension regions are formed in the first well region according to the first channel length L1. The first symmetrical drain extension regions have a second conductivity type opposite to the first conductivity type and define a first drain extension length X1. At block 510, second symmetrical drain extension regions are formed in the second well region according to a second channel length L2 that may vary relative to the length L1. The second symmetrical drain extension regions also have the second conductivity type and define a second drain extension length X2.【0084】At block 512, a first back gate region is formed between the first symmetrical drain extension regions according to the first channel length L1 and a selected threshold voltage. The first back gate region is formed to have a length and a doping concentration that produce the selected threshold voltage.
The first back gate region defines a back gate length S1 and a notch region length G1, where G1 is a distance between one side of the first back gate region and the first drain extension region. At block 514, a second back gate region is formed between the second symmetrical drain extension regions according to the second channel length and the selected threshold voltage. The formed first and second back gate regions each have a length and a doping concentration that produce the selected threshold voltage. The second back gate region also defines a second back gate length S2 and a second notch region length G2, where G2 is a distance between one side of the second back gate region and one side of the second drain extension region.【0085】At block 516, a first gate structure is formed over the first well region and includes a gate dielectric layer, a gate electrode layer, and sidewall isolation regions. The first gate structure partially covers the first symmetrical drain extension regions and the first back gate region and defines a first channel length L1. At block 518, a second gate structure is formed over the second well region and also includes a gate dielectric layer, a gate electrode layer, and sidewall isolation regions. The second gate structure differs in length from the first gate structure and defines a second channel length L2. In addition, the second gate structure partially covers the second symmetrical drain extension regions and the second back gate region.【0086】At block 520, first source/drain regions are formed within the first symmetrical drain extension regions. At block 522, second source/drain regions are formed within the second symmetrical drain extension regions.【0087】The first symmetrical device formed has a first channel length L1 composed of 2 * X1 + 2 * G1 + S1, and the second symmetrical device formed has a second channel length L2 composed of 2 * X2 + 2 * G2 + S2.
However, since the first and second back gate regions have similar lengths (S2 = S1) and similar doping concentrations, the first and second devices have approximately the same threshold voltage.【0088】Other processes may also be performed, such as thermal processes. For example, a rapid thermal anneal may be performed to activate the implanted dopants in the source/drain regions. In addition, silicide regions can be formed on the gate structure and on the source/drain regions. For example, a suitable silicide region may be composed of cobalt (Co), titanium (Ti), or the like. Generally, a silicide region is formed by masking and depositing a silicide metal (such as Co, Ti, etc.) on the first gate layer. The silicide process is then performed, causing the silicide material to react with the underlying material (such as silicon), thereby forming a silicide region. In addition, a thermal process or anneal is usually performed. The silicide region typically provides lower contact resistance to the first gate layer.【0089】Subsequently, an interlayer dielectric layer or other insulating layer may be formed and contacts may be selectively formed therein. After that, other layers, including a protective layer and metallization layers, may be formed to complete the manufacturing of the device.【0090】Although the above method is described with respect to the first and second devices, the method also includes forming a plurality of devices in a region having a first channel length and forming a plurality of devices in other regions having a second channel length. Further, it should be understood that the method 500 can be extended to multiple devices with varying channel lengths but with fixed or constant back gate lengths (also commonly referred to as poly overlap).
For example, the method 500 may be used to form third devices having different channel lengths but the same back gate length.【0091】Those skilled in the art to which the present invention relates will appreciate that various modifications may be made to the described embodiments, and that many other embodiments are possible, within the scope of the claimed invention.
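The channel-length relationship used throughout the description above, L = 2*X + 2*G + S for the symmetric devices, can be sketched in a few lines. This is an illustrative aid only, not part of the patent disclosure; the function name and the numeric lengths are hypothetical example values chosen to show that two devices sharing the same back gate length S (and doping) can differ in channel length by varying only X and G.

```python
# Illustrative sketch (hypothetical values, not from the disclosure):
# total channel length of a symmetric device, L = 2*X + 2*G + S.

def symmetric_channel_length(x: float, g: float, s: float) -> float:
    """Total channel length L = 2*X + 2*G + S (all lengths in micrometers)."""
    return 2 * x + 2 * g + s

# First device: near-minimal geometry (example values).
l1 = symmetric_channel_length(x=0.5, g=0.25, s=1.0)   # L1 = 2.5 um

# Second device: larger drain extension X and notch G, same back gate length S.
l2 = symmetric_channel_length(x=1.0, g=0.75, s=1.0)   # L2 = 4.5 um

# Longer channel, yet S (and hence the threshold voltage) is unchanged.
assert l2 > l1
```

Since the threshold voltage depends essentially on the back gate length S and its doping, holding S fixed while growing X and G is what lets the two devices share a threshold voltage despite different channel lengths.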
A communications system (10) includes a physical layer hardware unit (220) and a processing unit (110). The physical layer hardware unit (220) is adapted to communicate data over a communications channel (40) in accordance with assigned transmission parameters. The physical layer hardware unit (220) is adapted to receive an incoming signal over the communications channel (40) and sample the incoming signal to generate a digital received signal. The processing unit (110) is adapted to execute a standard mode driver (240) in a standard mode of operation and a privileged mode driver (250) in a privileged mode of operation. The standard mode driver (240) includes program instructions adapted to extract control codes (280) from the digital received signal and configure the physical layer hardware unit (220) with assigned transmission parameters based on the control codes (280). The privileged mode driver (250) includes program instructions adapted to independently extract secure control codes (310) from the digital received signal, determine an operational characteristic of the physical layer hardware unit (220), and signal a security violation in response to the operational characteristic being inconsistent with the secure control codes (310).
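The oversight scheme summarized in the abstract above can be modeled compactly. The sketch below is purely illustrative and not the actual driver implementation: the dictionaries standing in for the received signal and the physical layer hardware, and all function names, are hypothetical stand-ins for the components numbered in the abstract.

```python
# Illustrative model (hypothetical, not the actual driver code): the standard
# mode driver extracts control codes and configures the PHY; the privileged
# mode driver independently extracts secure control codes and flags a
# violation if the PHY's operational characteristics disagree with them.

def standard_mode_configure(received_signal: dict, phy: dict) -> None:
    # Standard mode driver: extract control codes and apply them to the PHY.
    phy["assigned_params"] = dict(received_signal["control_codes"])

def privileged_mode_check(received_signal: dict, phy: dict) -> bool:
    # Privileged mode driver: independently extract the secure control codes
    # and compare them to the PHY's actual state. True means a violation.
    secure_codes = received_signal["secure_control_codes"]
    return phy["assigned_params"] != secure_codes

phy = {}
signal = {"control_codes": {"freq_mhz": 935.2, "slot": 3, "power": 5},
          "secure_control_codes": {"freq_mhz": 935.2, "slot": 3, "power": 5}}

standard_mode_configure(signal, phy)
assert privileged_mode_check(signal, phy) is False  # consistent: no violation

# A compromised standard driver assigning an unauthorized frequency is caught:
phy["assigned_params"]["freq_mhz"] = 960.0
assert privileged_mode_check(signal, phy) is True   # violation signaled
```

The key design point this mirrors is that the privileged driver does not trust the standard driver's output: it re-derives the secure control codes from the same received signal and checks the hardware against them.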
CLAIMS
1. A communications system (10), comprising: a physical layer hardware unit (220) adapted to communicate data over a communications channel (40) in accordance with assigned transmission parameters, the physical layer hardware unit (220) being adapted to receive an incoming signal over the communications channel (40) and sample the incoming signal to generate a digital received signal; and a processing unit (110) adapted to execute a standard mode driver (240) in a standard mode of operation and a privileged mode driver (250) in a privileged mode of operation, wherein the standard mode driver (240) includes program instructions adapted to extract control codes (280) from the digital received signal and configure the physical layer hardware unit (220) with assigned transmission parameters based on the control codes (280), and the privileged mode driver (250) includes program instructions adapted to independently extract secure control codes (310) from the digital received signal, determine an operational characteristic of the physical layer hardware unit (220), and signal a security violation in response to the operational characteristic being inconsistent with the secure control codes (310). 2. The system (10) of claim 1, wherein the privileged mode driver (250) includes program instructions adapted to compare the control codes (280) generated by the standard mode driver (240) to the secure control codes (310) and signal the security violation in response to the control codes (280) being different than the secure control codes (310). 3.
The system (10) of claim 1, wherein the physical layer hardware unit (220) includes a radio (230) configured in accordance with the assigned transmission parameters, and the privileged mode driver (250) includes program instructions to identify an operating state of the radio (230), compare the operating state of the radio (230) to the secure control codes (310), and signal the security violation in response to the operating state being inconsistent with the secure control codes (310). 4. The system (10) of claim 1, wherein the standard mode driver (240) includes program instructions adapted to extract encrypted data (260) from the digital received signal and decrypt the encrypted data (260) to generate decrypted data (270) including the control codes (280), and the privileged mode driver (250) includes program instructions adapted to receive the encrypted data (260), decrypt the encrypted data (260) to generate secure decrypted data (300), and extract the secure control codes (310) from the secure decrypted data (300). 5. The system (10) of claim 1, wherein the privileged mode driver (250) includes program instructions adapted to prohibit further operation of at least one of the standard mode driver (240) and the processing unit (110) in response to identifying the security violation. 6. 
A method for identifying security violations in a transceiver (50), comprising: receiving a digital signal over a communications channel (40) in a standard processing mode of a processing unit (110); extracting control codes (280) from the digital received signal in the standard processing mode; configuring assigned transmission parameters of a physical layer hardware unit (220) in the transceiver (50) in the standard processing mode based on the control codes (280); transitioning the processing unit (110) into a privileged processing mode; extracting secure control codes (310) from the digital received signal in the privileged processing mode; determining an operational characteristic of the physical layer hardware unit (220) in the transceiver (50) in the privileged processing mode; comparing the operational characteristic to the secure control codes (310) in the privileged processing mode; and signaling a security violation in response to the operational characteristic being inconsistent with the secure control codes (310). 7. The method of claim 6, wherein determining the operational characteristic of the physical layer hardware unit (220) comprises determining the control codes (280) extracted from the digital received signal in the standard processing mode. 8. The method of claim 6, wherein the physical layer hardware unit (220) includes a radio (230) configured in accordance with the assigned transmission parameters, and determining the operational characteristic of the physical layer hardware unit (220) comprises identifying an operating state of the radio (230). 9.
The method of claim 6, further comprising: extracting encrypted data (260) from the digital received signal in the standard processing mode; decrypting the encrypted data (260) to generate decrypted data (270) including the control codes (280); receiving the encrypted data (260) in the privileged processing mode; decrypting the encrypted data (260) in the privileged processing mode to generate secure decrypted data (300); and extracting the secure control codes (310) from the secure decrypted data (300) in the privileged processing mode. 10. The method of claim 6, further comprising prohibiting further operation of the processing unit (110) in response to identifying the security violation.
PRIVILEGED MODE OVERSIGHT OF CONTROL PARAMETERS

TECHNICAL FIELD

This invention relates generally to modem communications and, more particularly, to a software modem with privileged mode oversight of control parameters.

BACKGROUND ART

In recent years cellular telephones have become increasingly popular. A cellular telephone is one example of what is referred to as a "mobile station" or "mobile terminal." A mobile station can take on various forms other than a cellular telephone, including a computer (e.g., a notebook computer) with mobile communication capabilities. Telecommunications services are provided between a cellular telecommunications network and a mobile station over an air interface, e.g., over radio frequencies. Typically, each subscriber having a mobile station is assigned a unique International Mobile Subscriber Identity (IMSI). At any moment, an active mobile station may be in communication over the air interface with one or more base stations. The base stations are, in turn, managed by base station controllers, also known as radio network controllers. A base station controller together with its base stations comprises a base station system. The base station controllers of a base station system are connected via control nodes to a core telecommunications network, such as the publicly switched telephone network (PSTN). One type of standardized mobile telecommunications scheme is the Global System for Mobile communications (GSM). GSM includes standards that specify functions and interfaces for various types of services. GSM systems may be used for transmitting both voice and data signals. A particular base station may be shared among multiple mobile stations. Because the radio spectrum is a limited resource, the bandwidth is divided using a combination of Time-Division and Frequency-Division Multiple Access (TDMA/FDMA). FDMA involves dividing the maximum frequency bandwidth (e.g., 25 MHz) into 124 carrier frequencies spaced 200 kHz apart.
A particular base station may be assigned one or more carrier frequencies. Each carrier frequency is, in turn, divided into time slots. During an active session between the base station and the mobile station, the base station assigns the mobile unit a frequency, a power level, and a time slot for upstream transmissions from the mobile station to the base station. The base station also communicates a particular frequency and time slot for downstream transmissions from the base station destined for the mobile station. The fundamental unit of time defined in GSM is referred to as a burst period, which lasts 15/26 ms (or approx. 0.577 ms). Eight burst periods are grouped into a TDMA frame (120/26 ms, or approx. 4.615 ms), which is the basic unit for the definition of logical channels. One physical channel is defined as one burst period per frame. Individual channels are defined by the number and position of their corresponding burst periods. GSM frames, each frame having 8 burst periods, are grouped into superframes (e.g., groups of 51 frames) that include both traffic (i.e., voice or data signals) and control information. The control information is conveyed over common channels defined in the superframe structure. Common channels can be accessed both by idle mode and dedicated mode mobile stations. The common channels are used by idle mode mobile stations to exchange signaling information for changing to dedicated mode in response to incoming or outgoing calls. Mobile stations already in the dedicated mode monitor the surrounding base stations for handover and other information.
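As a rough illustration, the framing arithmetic above (burst period, TDMA frame, and FDMA carrier spacing) can be checked in a few lines. The constants are taken directly from the text; the sketch makes no claim about a real modem implementation:

```python
from fractions import Fraction

# GSM framing figures quoted in the text above (illustrative check only).
CARRIER_SPACING_KHZ = 200            # FDMA carriers spaced 200 kHz apart
NUM_CARRIERS = 124                   # carriers within the ~25 MHz allocation
BURST_PERIOD_MS = Fraction(15, 26)   # one burst period: 15/26 ms (~0.577 ms)
BURSTS_PER_FRAME = 8                 # eight burst periods per TDMA frame
FRAMES_PER_SUPERFRAME = 51           # frames grouped into 51-frame superframes

frame_ms = BURST_PERIOD_MS * BURSTS_PER_FRAME          # 120/26 ms (~4.615 ms)
superframe_ms = frame_ms * FRAMES_PER_SUPERFRAME

print(float(BURST_PERIOD_MS))                # ~0.577 ms per burst
print(float(frame_ms))                       # ~4.615 ms per TDMA frame
print(NUM_CARRIERS * CARRIER_SPACING_KHZ)    # 24800 kHz occupied of the 25 MHz band
```

The exact rational arithmetic (15/26, 120/26) matches the approximate figures given in the text.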
The common channels include: a Broadcast Control Channel (BCCH) used to continually broadcast information including the base station identity, frequency allocations, and frequency-hopping sequences; a Frequency Correction Channel (FCCH) and Synchronization Channel (SCH) used to synchronize the mobile station to the time slot structure of a cell by defining the boundaries of burst periods and the time slot numbering (i.e., every cell in a GSM network broadcasts exactly one FCCH and one SCH, which are, by definition, sent on time slot number 0 within a TDMA frame); a Random Access Channel (RACH) used by the mobile station to request access to the network; a Paging Channel (PCH) used to alert the mobile station of an incoming call; and an Access Grant Channel (AGCH) used to allocate a Stand-alone Dedicated Control Channel (SDCCH) to a mobile station for signaling (i.e., to obtain a dedicated channel) following a request on the RACH. For security reasons, GSM data is transmitted in an encrypted form. Because a wireless medium can be accessed by anyone, authentication is a significant element of a mobile network. Authentication involves both the mobile station and the base station. A Subscriber Identification Module (SIM) card is installed in each mobile station. Each subscriber is assigned a secret key. One copy of the secret key is stored in the SIM card, and another copy is stored in a protected database on the communications network that may be accessed by the base station. During an authentication event, the base station generates a random number that it sends to the mobile station. The mobile station uses the random number, in conjunction with the secret key and a ciphering algorithm (e.g., A3), to generate a signed response that is sent back to the base station. If the signed response sent by the mobile station matches the one calculated by the network, the subscriber is authenticated.
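The challenge-response exchange described above can be sketched as follows. This is a hedged illustration only: HMAC-SHA256 stands in for the A3 ciphering algorithm, the 4-byte response length mirrors GSM's 32-bit signed response, and all function names are invented for the example.

```python
import hashlib
import hmac
import os

def signed_response(secret_key: bytes, challenge: bytes) -> bytes:
    """Compute a signed response from the secret key and the random challenge.
    HMAC-SHA256 is a stand-in here for the A3 algorithm; truncation to 4 bytes
    mirrors the 32-bit response used in GSM."""
    return hmac.new(secret_key, challenge, hashlib.sha256).digest()[:4]

def authenticate(network_key: bytes, sim_key: bytes) -> bool:
    """One authentication event: the base station issues a random challenge,
    both sides compute a signed response, and the network compares them."""
    challenge = os.urandom(16)                              # base station's random number
    sres_mobile = signed_response(sim_key, challenge)       # computed in the SIM
    sres_network = signed_response(network_key, challenge)  # computed by the network
    return hmac.compare_digest(sres_mobile, sres_network)

key = os.urandom(16)
print(authenticate(key, key))             # True: matching copies of the secret key
print(authenticate(key, os.urandom(16)))  # mismatched keys: authentication fails
```

Note that the challenge travels in the clear while the secret key never does; only the signed response crosses the air interface.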
The base station encrypts data transmitted to the mobile station using the secret key. Similarly, the mobile station encrypts data it transmits to the base station using the secret key. After a transmission received by the mobile station is decrypted, various control information, including the assigned power level, frequency, and time slot for a particular mobile station may be determined by the mobile station. Generally, communication systems are described in terms of layers. The first layer, responsible for the actual transmission of a data carrying signal across the transmission medium, is referred to as the physical layer (PHY). The physical layer groups digital data and generates a modulated waveform based on the data in accordance with the particular transmission scheme. In GSM, the physical layer generates the transmission waveform and transmits during the assigned transmit time slot of the mobile station. Similarly, the receiving portion of the physical layer identifies data destined for the mobile station during the assigned receipt time slot. The second layer, referred to as a protocol layer, processes digital data received by the physical layer to identify information contained therein. For example, in a GSM system, decryption of the data is a protocol layer function. Notice that changes in the operating parameters of the physical layer are identified only after decryption and processing by the protocol layer. Although this particular interdependency does not generally cause a problem in a purely hardware implementation, it may cause a problem when all or portions of the protocol layer are implemented in software. Certain computer systems, especially portable notebook computers, may be equipped with wireless modems. One trend in modem technology involves the use of software modems that implement some of the real-time functions of traditional hardware modems using software routines. 
Because the hardware complexity of a software modem is less than that of a hardware counterpart, it is generally less expensive as well as more flexible. For example, the protocol layer decryption and processing may be implemented partially or entirely with software. Software systems, such as PC systems, run interface control software in operating system environments as software drivers. These drivers are responsible for communicating with the hardware devices and operate at a privileged level in the operating system. Other software applications are precluded from affecting the drivers. However, because drivers are not protected from other drivers, a variety of problems can occur that might affect the operation of a driver, such as by corrupting its operation. These effects may be caused accidentally, or may be caused by purposeful hacking. A corrupted (or co-opted) driver might cause additional problems outside the computer, such as causing a phone line or wireless channel to be used, operating an external peripheral, or deleting important data. Because the operating parameters of the physical layer, which control the operation of the transmitter of the mobile station, are controlled by the protocol layer using software, it may be possible for a computer program or virus to take control of the mobile station and cause it to accidentally or purposefully transmit outside of its assigned time slot. A wireless communications network, such as a cellular network, relies on a shared infrastructure. A mobile station must adhere to the 'rules of the road' or it may cause interference on the network. If certain functions of the mobile station are controlled in software, a programmer may determine how the GSM control frames are decoded and how the transmitter module is triggered. A virus may then be written and spread over the network to infiltrate the software-based mobile stations.
Then, on a particular time and date, the virus could take direct control of the mobile station and transmit continuously or intermittently, inundating the base stations and other mobile units with random frequencies and full power. Such a virus could enable and disable itself at random times to avoid detection, robbing the air-time supplier of some or all of his available bandwidth, and may even cause a complete shutdown of the network. Such an attack may take only a few affected devices (i.e., as few as one) per cell to disable the cell completely. The security problems associated with mobile stations operating in a shared infrastructure may be segregated into three levels of severity: tamper-proof, non-tamper-proof, and class break. First, a hardware/firmware implementation (such as a cell phone) is the hardest with which to tamper, because each device must be acquired individually and modified (i.e., tamper-proof). On the other hand, a software-based solution is easier to tamper with, as a hacker can concentrate on a software-only debugger environment (i.e., non-tamper-proof). Finally, a system whose vulnerability to tampering is similar on all systems, and which allows the tampering to be distributed to a large number of systems of the same type, is susceptible to a 'class break.' A software wireless modem is susceptible not only to a class break, but it is also among those devices whose code may be accessed from the same layer as IP (Internet Protocol) or another portable code access mechanism. Many software wireless modems may be integrated into computers coupled to networks or the Internet. Such an arrangement increases the susceptibility of the software to being tampered with and controlled. Communication devices implementing other communications protocols using software may also be susceptible to some of the problems identified above, but to differing degrees and levels of consequence.
For example, software drivers for communication devices using copper subscriber lines, such as voice band modems (V.90), asymmetric digital subscriber line (DSL) modems, home phone line networks (HomePNA), etc., may be attacked, resulting in the subscriber line being disabled or improperly used. For example, a group of infected software modems may be used in a denial-of-service attack to continuously place calls to a predetermined number and overwhelm the destination. The software modem could also be used to prevent outgoing or incoming calls on the subscriber line or to disrupt HomePNA traffic. Other wireless communication devices implemented in software, such as wireless network devices, could also be commandeered to disrupt traffic on the wireless network. The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.

DISCLOSURE OF INVENTION

One aspect of the present invention is seen in a communications system including a physical layer hardware unit and a processing unit. The physical layer hardware unit is adapted to communicate data over a communications channel in accordance with assigned transmission parameters. The physical layer hardware unit is adapted to receive an incoming signal over the communications channel and sample the incoming signal to generate a digital received signal. The processing unit is adapted to execute a standard mode driver in a standard mode of operation and a privileged mode driver in a privileged mode of operation. The standard mode driver includes program instructions adapted to extract control codes from the digital received signal and configure the physical layer hardware unit's assigned transmission parameters based on the control codes.
The privileged mode driver includes program instructions adapted to independently extract secure control codes from the digital received signal, determine an operational characteristic of the physical layer hardware unit, and signal a security violation in response to the operational characteristic being inconsistent with the secure control codes. Another aspect of the present invention is seen in a method for identifying security violations in a transceiver. The method includes receiving digital data over a communications channel in a standard processing mode of a processing unit; extracting control codes from the digital received signal in the standard processing mode; configuring assigned transmission parameters of a physical layer hardware unit in the transceiver in the standard processing mode based on the control codes; transitioning the processing unit into a privileged processing mode; extracting secure control codes from the digital received signal in the privileged processing mode; determining an operational characteristic of the physical layer hardware unit in the transceiver in the privileged processing mode; comparing the operational characteristic to the secure control codes in the privileged processing mode; and signaling a security violation in response to the operational characteristic being inconsistent with the secure control codes.
BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:

Figure 1 is a simplified block diagram of a communications system in accordance with one illustrative embodiment of the present invention;

Figure 2 is a simplified block diagram of an exemplary computer that embodies a user station in the communications system of Figure 1; and

Figure 3 is a simplified functional block diagram illustrating the interactions between the standard mode driver and the privileged mode driver in the computer of Figure 2 in one particular embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

MODE(S) FOR CARRYING OUT THE INVENTION

Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another.
Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. Referring to Figure 1, a block diagram of a communications system 10 is provided. The communications system 10 includes a user station 20 in communication with a central station 30 over a communication channel 40. In the illustrated embodiment, the user station 20 is a mobile computing device using a software modem 50 to communicate in accordance with a wireless communication protocol, such as GSM. The central station 30 may be a shared base station capable of serving a plurality of subscribers. Although the invention is described as it may be implemented in a wireless environment, its application is not so limited. The teachings herein may be applied to other communication environments using software implemented communication protocols (e.g., V.90, ADSL, HomePNA, wireless LAN, etc.). The user station 20 may comprise a variety of computing devices, such as a desktop computer, a notebook computer, a personal digital assistant (PDA), etc. For purposes of illustration, the user station 20 is described as it may be implemented using a notebook computer. The software modem 50 may be installed as an internal resource. As will be appreciated by those of ordinary skill in the art, the software modem 50 includes a physical layer (PHY) 70 implemented in hardware and a protocol layer 80 implemented in software. For purposes of illustration, the functions of the software modem 50 are described as they might be implemented for a GSM communication protocol, although other protocols may be used. The PHY layer 70 converts digital transmit signals into an analog transmit waveform and converts an incoming analog received waveform into digital received signals.
For transmit signals, the output of the protocol layer 80 is the transmit "on-air" information modulated about a zero Hz carrier (i.e., a carrierless signal). The PHY layer 70 mixes (i.e., mixing may also be referred to as upconverting) the carrierless transmit signal generated by the protocol layer 80 in accordance with assigned time slot, frequency, and power level assignments communicated to the user station 20 by the central station 30 to generate the actual analog waveform transmitted by the PHY layer 70. The central station 30 also communicates time slot and frequency assignments to the user station 20 for incoming data. The incoming analog receive waveform is sampled and downconverted based on the assigned time slot and frequency parameters to recreate a carrierless (i.e., modulated about zero Hz) receive waveform. The protocol layer 80 receives the carrierless receive waveform from the PHY layer 70 and performs baseband processing, decryption, and decoding to regenerate the received data. Collectively, the time slot, frequency, and power level (i.e., for transmit data only) assignments are referred to as control codes. The particular algorithms used for implementing the software modem 50 are described by the particular industry standards (e.g., GSM standards) and are well known to those of ordinary skill in the art, so for clarity and ease of illustration they are not detailed herein, except as they are modified in accordance with the present invention. In the communications system 10 of the instant invention, the central station 30 transmits data in accordance with traditional GSM techniques. The data received by the protocol layer 80 is encrypted. As described in greater detail below, the protocol layer 80 functions are divided into privileged mode functions and standard mode functions.
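The mixing (upconversion) step described above can be illustrated with a toy calculation: a carrierless, zero-Hz-centered baseband segment is multiplied by a cosine at the assigned carrier frequency. The numbers here are arbitrary illustrations, not GSM parameters, and the function name is invented for the example.

```python
import math

def upconvert(baseband, carrier_hz, sample_rate_hz):
    """Mix a zero-Hz-centered baseband sample stream up to the assigned carrier
    by multiplying each sample by a cosine at the carrier frequency."""
    return [s * math.cos(2 * math.pi * carrier_hz * n / sample_rate_hz)
            for n, s in enumerate(baseband)]

baseband = [1.0, 1.0, 1.0, 1.0]   # trivial constant baseband segment
passband = upconvert(baseband, carrier_hz=2_000.0, sample_rate_hz=8_000.0)
print([round(x, 3) for x in passband])  # approximately [1, 0, -1, 0]
```

With the carrier at a quarter of the sample rate, the constant baseband becomes an oscillation at the carrier frequency, which is the essence of moving a carrierless signal onto its assigned frequency.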
The standard mode functions include decoding and decrypting the received data, extracting the control codes and user data, and sending the control codes to the PHY layer 70. The privileged mode functions include comparing the actual operational characteristics of the PHY layer 70 with the assignments contained in the control codes to identify improper operation of the software modem 50 (i.e., due to co-opting of the modem 50 by a software virus). If the privileged mode driver 250 determines that the PHY layer 70 is being operated inconsistently with its control code assignments, further operation of the software modem 50 and/or the user station 20 is inhibited. Turning now to Figure 2, a block diagram of the user station 20 embodied in a computer 100 is provided. The computer 100 includes a processor complex 110. For clarity and ease of understanding, not all of the elements making up the processor complex 110 are described in detail. Such details are well known to those of ordinary skill in the art, and may vary based on the particular computer vendor and microprocessor type. Typically, the processor complex 110 includes a microprocessor, cache memories, system memory, a system bus, a graphics controller, and other devices, depending on the specific implementation. The processor complex 110 has two modes of operation, a standard mode and a privileged mode. An exemplary privileged mode of operation, well known to those of ordinary skill in the art, is the System Management Mode (SMM). Entry into the SMM is initiated through a system management interrupt (SMI). In response to an SMI, the processor complex 110 executes SMM code previously loaded (i.e., during the initialization of the computer 100 and loading of the BIOS code) into a protected portion of the system memory not visible to any other processes (e.g., applications or drivers).
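The oversight comparison described above can be sketched in a few lines of Python. The `ControlCodes` record and `privileged_check` function are assumptions invented for illustration; they stand in for querying the PHY layer 70 and for the comparison performed in the privileged mode, not for any real driver interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlCodes:
    """Illustrative stand-in for the time slot, frequency, and power level
    assignments that the text collectively calls control codes."""
    time_slot: int
    frequency_khz: int
    power_level: int

def privileged_check(secure_codes: ControlCodes,
                     radio_state: ControlCodes) -> bool:
    """Return True if the radio's actual operating state matches the
    independently decrypted secure control codes; False signals a violation."""
    return radio_state == secure_codes

assigned = ControlCodes(time_slot=3, frequency_khz=935_200, power_level=5)
tampered = ControlCodes(time_slot=3, frequency_khz=935_200, power_level=33)

print(privileged_check(assigned, assigned))  # True: operation is consistent
print(privileged_check(assigned, tampered))  # False: violation, inhibit the modem
```

The point of the scheme is that the privileged side derives `secure_codes` from the encrypted data on its own, so a co-opted standard mode driver cannot alter both the radio state and the reference it is checked against.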
The memory locations used to perform the functions of the processor complex 110 during the SMM event are also not apparent to any other process. Although the illustrative embodiment is described as it may be implemented using SMM as a privileged mode, the invention is not so limited, and a different type of privileged mode may be used. In general, a privileged mode is defined as a mode of operation not visible to other processes, such as applications or drivers, executing on the computer 100. SMM is simply one illustrative privileged mode currently available. Other privileged contexts include the use of a separate processing entity, such as a cryptoprocessor, independent from the main system microprocessor. The functions of privileged mode software are executed by the cryptoprocessor and are thus secure from tampering by other software applications executing on the main system microprocessor. Still another privileged context is possible using a main system microprocessor having a secure architecture extension. In such an implementation, the cryptoprocessor is integrated into the main system microprocessor and controlled with secure commands. The processor complex 110 is coupled to a peripheral bus 120, such as a peripheral component interconnect (PCI) bus. Typically, a bridge unit (i.e., north bridge) in the processor complex 110 couples the system bus to the peripheral bus 120. A south bridge 150 is coupled to the peripheral bus 120. The south bridge 150 interfaces with a low pin count (LPC) bus 160 that hosts a system basic input output system (BIOS) memory 170, a universal serial bus (USB) 180 adapted to interface with a variety of peripherals (e.g., keyboard, mouse, printer, scanner) (not shown), an enhanced integrated drive electronics (EIDE) bus 190 for interfacing with a hard disk drive 200 and a CD-ROM drive (not shown), and an integrated packet bus (IPB) 210. The IPB bus 210 hosts the hardware portion of the software modem 50.
In the illustrated embodiment, the software modem 50 is hosted on an advanced communications riser (ACR) card 215. Specifications for the ACR card 215 and the IPB bus 210 are available from the ACR Special Interest Group (ACRSIG.ORG). The software modem 50 includes a PHY hardware unit 220 and a radio 230. In the illustrated embodiment, the radio 230 is adapted to transmit and receive GSM signals. Collectively, the PHY hardware unit 220 and the radio 230 form the PHY layer 70 (see Figure 1). The processor complex 110 executes program instructions encoded in a standard mode driver 240 and a privileged mode driver 250. The privileged mode driver 250 is loaded into the SMM space of the processor complex 110 during initialization of the computer 100. The privileged mode driver 250 may be stored in a secure location, such as the system BIOS 170, a secure memory device on the ACR card 215, a secure memory device in the computer 100, etc. An exemplary technique for storing a secure driver is described in U.S. Patent Application No. 09/901,176 (Attorney Docket No. 2000.053400/DIR, Client Docket No. TT4040), in the names of Terry L. Cole, David W. Smith, Rodney Schmidt, Geoffrey S. Strongin, Brian C. Barnes, and Michael Barclay, entitled "PERIPHERAL DEVICE WITH SECURE DRIVER." Collectively, the processor complex 110 and the drivers 240, 250 implement the functions of the protocol layer 80 (see Figure 1). Turning now to Figure 3, a simplified functional block diagram illustrating the interactions between the standard mode driver 240 and the privileged mode driver 250 in one particular embodiment of the present invention is shown. For incoming data received by the software modem 50, the standard mode driver 240 demodulates the carrierless waveform to reconstruct encrypted data 260 received by the PHY hardware 220. The process for reconstructing the encrypted data 260 is well known to those of ordinary skill in the art, and is defined in industry GSM standards.
For clarity and ease of illustration, the details of the reconstruction process are not included herein. After reconstructing the encrypted data 260, the standard mode driver 240 decrypts the encrypted data 260 using the industry standard decryption techniques defined by the GSM standards to generate decrypted data 270. The standard mode driver 240 decodes the decrypted data 270 and extracts control codes 280 and/or user data 290. The standard mode driver 240 passes the control codes to the PHY hardware 220. In turn, the PHY hardware 220 configures the radio 230 based on the assigned time slot, frequency, and power level information contained in the control codes 280. Periodically, the privileged mode driver 250 is invoked (e.g., using an SMI). The processor complex 110 transitions to privileged mode (i.e., SMM) in response to the SMI and executes the privileged mode driver 250. The privileged mode driver 250 operates on the encrypted data 260 to independently generate secure decrypted data 300 and secure control codes 310. Various techniques exist for passing the encrypted data 260 to the privileged mode driver 250. In one embodiment, the standard mode driver 240 passes a pointer indicating the memory location of the encrypted data 260. In another embodiment, a portion of the system memory is designated as a shared mailbox for privileged mode activities. Applications operating in the standard mode, such as the standard mode driver 240, may place data in a designated inbox of the shared memory space, and applications running in the privileged mode, such as the privileged mode driver 250, may place data in a designated outbox of the shared memory space. The outbox may be designated as read-only for standard mode applications. An exemplary computer system having a shared mailbox for passing data between standard mode and privileged mode applications is described in U.S. Patent Application Serial No. 09/853,447 (Attorney Docket No. 2000.038700/LHI, Client Docket No.
TT3760), in the names of Dale E. Gulick and Geoffrey S. Strongin, entitled "INTEGRATED CIRCUIT FOR SECURITY AND MANAGEABILITY." The privileged mode driver 250 accesses the PHY hardware 220 to determine the operational characteristics of the radio 230. If the control codes 280 passed by the standard mode driver 240 have not been altered, the operational characteristics of the radio 230 will be consistent with the secure control codes 310. If the operational characteristics of the radio 230 are not consistent with the secure control codes 310, the privileged mode driver 250 may take a variety of protective actions. For example, the privileged mode driver 250 may inhibit operation of the software modem 50 by disabling the standard mode driver 240 or by entirely disabling the computer 100 by initiating an unrecoverable error condition. The particular technique for invoking the privileged mode driver 250 and the frequency at which it is invoked may vary. For example, the standard mode driver 240 may call the privileged mode driver 250 at a predetermined frequency (e.g., every N frames, up to and including every frame). In an alternative embodiment, the privileged mode driver 250 may be invoked periodically by another process independent of the standard mode driver 240. For example, the operating system under which the computer 100 operates may include a timer that is used to periodically initiate an SMI to invoke the privileged mode driver 250. In another embodiment, security hardware including a secure timer may be included in the computer 100 for periodically invoking the privileged mode driver 250. For example, a restart timer 155, resident on the south bridge 150, may be used to periodically invoke the privileged mode driver 250 after a predetermined amount of time has elapsed. The particular operation of the restart timer 155 is described in greater detail in U.S. Patent Application Serial No. 09/853,447, incorporated above.
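The shared-mailbox mechanism described earlier (a standard-mode inbox and a privileged-mode outbox that is read-only to standard mode) can be sketched as below. The class and method names are invented for the example; a real implementation would live in protected system memory with hardware-enforced access control, not in a Python object.

```python
class MailboxError(PermissionError):
    """Raised when standard mode attempts a privileged-only operation."""

class SharedMailbox:
    """Toy model of the shared mailbox: standard mode writes to the inbox,
    privileged mode writes to the outbox, and the outbox is read-only
    for standard mode."""

    def __init__(self):
        self._inbox = []   # written by standard mode, drained by privileged mode
        self._outbox = []  # written by privileged mode only

    def post_inbox(self, data: bytes) -> None:
        """Standard-mode side: hand data (e.g., encrypted frames) to privileged mode."""
        self._inbox.append(data)

    def drain_inbox(self) -> list:
        """Privileged-mode side: consume everything posted so far."""
        items, self._inbox = self._inbox, []
        return items

    def post_outbox(self, data: bytes, privileged: bool) -> None:
        """Only the privileged mode may write results back to the outbox."""
        if not privileged:
            raise MailboxError("outbox is read-only for standard mode")
        self._outbox.append(data)

    def read_outbox(self) -> list:
        return list(self._outbox)

mb = SharedMailbox()
mb.post_inbox(b"encrypted frame")       # standard driver hands off data
print(mb.drain_inbox())                 # privileged driver consumes it
mb.post_outbox(b"status: ok", privileged=True)
print(mb.read_outbox())
```

The one-way write permissions mirror the text's point: standard-mode software can supply inputs but cannot forge the privileged side's outputs.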
Once the privileged mode driver 250 is invoked, it uses the secure control codes 310 extracted from the secure decrypted data 300 to determine the expected operational state of the PHY hardware 220 and radio 230. There are various techniques by which the privileged mode driver 250 might identify a security violation. For example, if the standard mode driver 240 has failed to provide updated encrypted data 260 to the privileged mode driver 250, a violation may be triggered. Synchronization signals embedded by the central station 30 in generating the encrypted data 260 for transmission may be extracted from the secure decrypted data 300 to determine such a failure. One technique for checking the operational characteristics of the software modem 50 includes comparing the secure control codes 310 to the actual control codes 280 sent to the PHY hardware 220. The standard mode driver 240 might be required to write its control codes 280 to the shared mailbox, for example. The privileged mode driver 250 might also query the PHY hardware 220 to determine the control codes 280 sent by the standard mode driver 240. Another technique includes querying the PHY hardware 220 to determine the actual operating state of the radio 230 (e.g., transmitting, silent, frequency, power level, etc.). If this operating state is not consistent with the secure control codes 310, a violation may be triggered. The specific technique used to compare the actual operating state to the expected operating state may vary, depending on the particular implementation. A combination of the techniques may also be used. For example, on some iterations the control codes 280 generated by the standard mode driver 240 may be evaluated. On other iterations, the PHY layer 220 may be queried, and on still other iterations, the actual operating state of the radio 230 may be determined. Certain techniques may be less computationally taxing, and a combination of techniques may provide increased efficiency.
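The rotation among checking techniques described above might look like the following sketch. All names, thresholds, and the three-way rotation are assumptions for illustration, not details taken from the patent.

```python
def stale_data_check(frames_since_update: int, limit: int = 3) -> bool:
    """Violation if the standard driver stopped supplying fresh encrypted data."""
    return frames_since_update > limit

def control_code_check(secure_codes: dict, reported_codes: dict) -> bool:
    """Violation if the codes the standard driver reported differ from the
    independently derived secure control codes."""
    return secure_codes != reported_codes

def pick_check(iteration: int) -> str:
    """Rotate among techniques so each invocation stays cheap while all
    attack avenues are eventually covered."""
    return ("compare_codes", "query_phy", "query_radio")[iteration % 3]

print([pick_check(i) for i in range(4)])
# ['compare_codes', 'query_phy', 'query_radio', 'compare_codes']
print(stale_data_check(frames_since_update=5))        # True: data went stale
print(control_code_check({"slot": 3}, {"slot": 7}))   # True: codes were altered
```

Rotating checks is one way to balance the text's observation that some techniques are less computationally taxing than others while still covering the different attack patterns.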
A combination of techniques may also be used to identify different types of potential attacks. For example, the attacks might include co-opting of the standard mode driver 240 to alter the control codes 280; blocking the control codes 280 sent by the standard mode driver 240 and substituting other control codes; preventing the standard mode driver 240 from passing updated encrypted data 260 to the privileged mode driver 250, etc. A combination of techniques, in lieu of one particular technique, may be more effective in identifying violations. For data being transmitted by the software modem 50, the standard mode driver 240 handles all the data processing functions, including encoding, interleaving, burst assembly, encryption, and baseband processing to generate the carrier-less transmit waveform. The standard mode driver 240 passes the transmit waveform to the PHY hardware 220 and radio 230 for upconverting in accordance with the assigned time slot, frequency, and power level previously defined by the control codes 280. By overseeing the operational characteristics of the software modem 50, attempts at surreptitious control of the modem 50 may be identified and stopped relatively quickly. As such, the potential for wide scale disruption of the communications network is reduced. The security of the software modem 50 is increased without sacrificing the flexibility and adaptability features inherent in its software implementation. The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. 
It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
Semiconductor die stacks, and associated methods and systems are disclosed. The semiconductor die stack may include a first die with a memory array and a second die with CMOS circuitry configured to access the memory array. The first die may not have circuitry for accessing the memory array. Further, the first and second dies may be bonded to function as a single memory device, and front surfaces of the first and second dies are conjoined to form electrical connections therebetween. The second die may include a portion uncovered by the first die, where bond pads of the semiconductor die stack are located. The first die may provide a space for bond wires to connect to the bond pads without interfering with another die attached above the semiconductor die stack. Multiple semiconductor die stacks may be stacked on top of and in line with each other.
CLAIMS
What is claimed is:
1. A semiconductor die assembly, comprising: a first semiconductor die including an array of memory cells, exclusive of circuitry configured to access the array of memory cells; and a second semiconductor die including complementary metal-oxide-semiconductor (CMOS) circuitry configured to access the array of memory cells of the first semiconductor die, wherein: the first semiconductor die includes one or more first conductive components and a set of bond pads on a front side of the first semiconductor die, the first conductive components coupled with the array of memory cells; the second semiconductor die includes one or more second conductive components on a front side of the second semiconductor die, the second conductive components coupled with the CMOS circuitry; the second semiconductor die is arranged over the first semiconductor die such that each of the second conductive components is directly bonded to a corresponding one of the first conductive components; and an edge of the first semiconductor die extends past a corresponding edge of the second semiconductor die such that a portion of the front side of the first semiconductor die is exposed, the portion including the set of bond pads.
2. The semiconductor die assembly of claim 1, wherein a first dielectric material surrounding each of the first conductive components is directly bonded to a second dielectric material surrounding each of the second conductive components.
3. The semiconductor die assembly of claim 1, wherein the first semiconductor die is exclusive of a semiconductor substrate.
4. The semiconductor die assembly of claim 1, wherein the CMOS circuitry accesses the array of memory cells through the second conductive components directly bonded to the first conductive components.
5. 
The semiconductor die assembly of claim 1, wherein the second semiconductor die includes a thickness greater than a height to which bond wires attached to the set of bond pads rise above the front side of the first semiconductor die.
6. The semiconductor die assembly of claim 1, wherein the first semiconductor die includes a first footprint greater than a second footprint of the second semiconductor die.
7. The semiconductor die assembly of claim 1, further comprising: a support substrate, to which a back side of the first semiconductor die is attached, the support substrate including a plurality of substrate bond pads; and a plurality of bond wires coupling individual bond pads of the set with corresponding substrate bond pads of the plurality.
8. A semiconductor die assembly, comprising: a first pair of dies including a first die attached to a second die, wherein front surfaces of the first and second dies are conjoined, and the front surface of the second die includes a first extended portion uncovered by the first die, the first extended portion including a first set of bond pads; a second pair of dies carried by the first pair of dies, the second pair including a third die attached to a fourth die, wherein front surfaces of the third and fourth dies are conjoined, and the front surface of the fourth die includes a second extended portion uncovered by the third die, the second extended portion including a second set of bond pads; a support substrate, to which a back side of the second die of the first pair is attached, the support substrate including a plurality of substrate bond pads; a plurality of first bond wires coupling individual bond pads of the first set with corresponding substrate bond pads of the plurality; and a plurality of second bond wires coupling individual bond pads of the second set with corresponding substrate bond pads of the plurality.
9. 
The semiconductor die assembly of claim 8, wherein: the front surfaces of the first and second dies each include a plurality of conductive components, and individual conductive components of the first die are conjoined with corresponding conductive components of the second die; and peripheral circuitry of the first die is configured to access an array of memory cells of the second die through one or more conjoined conductive components of the plurality.
10. The semiconductor die assembly of claim 8, wherein: the second die includes a first array of memory cells, exclusive of circuitry configured to access the first array of memory cells; and the fourth die includes a second array of memory cells, exclusive of circuitry configured to access the second array of memory cells.
11. The semiconductor die assembly of claim 8, wherein: the first die includes first peripheral circuitry configured to access a first array of memory cells of the second die; and the third die includes second peripheral circuitry configured to access a second array of memory cells of the fourth die.
12. The semiconductor die assembly of claim 8, wherein a footprint of the second pair overlaps the first set of bond pads, and a thickness of the first die is configured to provide a gap for the plurality of first bond wires to be separate from a back side of the fourth die by a distance.
13. The semiconductor die assembly of claim 12, wherein the gap includes a thickness of an adhesive located between the back side of the fourth die and a back side of the first die.
14. The semiconductor die assembly of claim 12, wherein the gap is configured to allow a wire-bonding head to reach the bond pads of the first set without touching the back side of the fourth die.
15. 
A method comprising: providing a first die including an array of memory cells, exclusive of circuitry configured to access the array of memory cells; providing a second die including complementary metal-oxide-semiconductor (CMOS) circuitry configured to access the array of memory cells of the first die; conjoining the first and second dies to form a first pair of dies, wherein: a front surface of the first die is in direct contact with a front surface of the second die, wherein the front surface of the first die includes a first extended portion uncovered by the second die, the first extended portion including a first set of bond pads; and the front surfaces of the first and second dies each include a plurality of conductive components, each of the conductive components of the first die directly bonded to a corresponding one of the conductive components of the second die, and a first dielectric material surrounding each of the conductive components of the first die directly bonded to a second dielectric material surrounding the corresponding one of the conductive components of the second die; and attaching the first pair of dies to a support substrate including a plurality of substrate bond pads.
16. 
The method of claim 15, wherein conjoining the first and second dies includes: arranging a first semiconductor wafer including the first die over a second semiconductor wafer including the second die such that each of the conductive components of the first die is aligned to the corresponding one of the conductive components of the second die; bonding the first semiconductor wafer to the second semiconductor wafer to directly bond each of the conductive components of the first die to the corresponding one of the conductive components of the second die; and removing a portion of the second semiconductor wafer adjacent to the second die, the portion corresponding to the first extended portion of the first die, after bonding the first semiconductor wafer to the second semiconductor wafer.
17. The method of claim 16, wherein removing the portion of the second semiconductor wafer includes: severing the portion from the second die by using an etching process, a dicing process, or both; and removing the severed portion from the second semiconductor wafer.
18. The method of claim 15, wherein conjoining the first and second dies includes: arranging the second die over the first die such that each of the conductive components of the first die is aligned to the corresponding one of the conductive components of the second die; and bonding the second die to the first die to directly bond each of the conductive components of the first die to the corresponding one of the conductive components of the second die.
19. 
The method of claim 15, further comprising: forming a plurality of first bond wires to couple individual bond pads of the first set with corresponding substrate bond pads of the plurality; and attaching, after forming the plurality of first bond wires, a second pair of dies to the first pair of dies, the second pair including a third die conjoined with a fourth die, wherein: front surfaces of the third and fourth dies are in direct contact with each other, and the front surface of the third die includes a second extended portion uncovered by the fourth die, the second extended portion including a second set of bond pads; and the front surfaces of the third and fourth dies each include a plurality of conductive components, each of the conductive components of the third die directly bonded to a corresponding one of the conductive components of the fourth die, and a third dielectric material surrounding each of the conductive components of the third die directly bonded to a fourth dielectric material surrounding the corresponding one of the conductive components of the fourth die.
20. 
The method of claim 15, further comprising: attaching a second pair of dies to the first pair of dies, the second pair including a third die conjoined with a fourth die, wherein: front surfaces of the third and fourth dies are in direct contact with each other, and the front surface of the third die includes a second extended portion uncovered by the fourth die, the second extended portion including a second set of bond pads; and the front surfaces of the third and fourth dies each include a plurality of conductive components, each of the conductive components of the third die directly bonded to a corresponding one of the conductive components of the fourth die, and a third dielectric material surrounding each of the conductive components of the third die directly bonded to a fourth dielectric material surrounding the corresponding one of the conductive components of the fourth die; and forming, after attaching the second pair of dies to the first pair of dies, a plurality of first bond wires to couple individual bond pads of the first set with corresponding substrate bond pads of the plurality.
SEMICONDUCTOR DIE STACKS AND ASSOCIATED SYSTEMS AND METHODS

TECHNICAL FIELD

[0001] The present disclosure generally relates to semiconductor device assemblies, and more particularly relates to semiconductor die stacks and associated systems and methods.

BACKGROUND

[0002] Semiconductor packages typically include one or more semiconductor dies (e.g., memory chips, microprocessor chips, imager chips) mounted on a substrate and encased in a protective covering. The semiconductor die may include functional features, such as memory cells, processor circuits, or imager devices, as well as bond pads electrically connected to the functional features. The bond pads can be electrically connected to corresponding conductive structures of the substrate, which may be coupled to terminals outside the protective covering such that the semiconductor die can be connected to higher-level circuitry.

[0003] In some semiconductor packages, two or more semiconductor dies are stacked on top of each other to reduce the footprint of the semiconductor packages. The semiconductor dies in the stack may be arranged in a pattern resembling stair-steps (which may be referred to as "shingle stacking") such that a portion of each semiconductor die remains freely accessible - e.g., to form bond wires to one or more bond pads located in that portion. Such an arrangement, however, tends to increase the footprint of the semiconductor packages. In some cases, the semiconductor dies may be stacked in a zig-zag pattern to increase the space above the bond pads with respect to a semiconductor die overlying the bond pads. Moreover, the semiconductor dies may include through-substrate vias (TSVs) to facilitate stacking of the semiconductor dies, but at an increased cost compared to wire-bonding techniques.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Many aspects of the present technology can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale. 
Instead, emphasis is placed on illustrating clearly the overall features and the principles of the present technology.

[0005] Figure 1A is a cross-sectional diagram of a semiconductor die.

[0006] Figures 1B-1D are cross-sectional diagrams of semiconductor die pairs and a semiconductor die assembly in accordance with embodiments of the present technology.

[0007] Figures 2A and 2B are three-dimensional diagrams of a semiconductor die stack and a semiconductor die assembly in accordance with embodiments of the present technology.

[0008] Figure 3 illustrates various plan view diagrams of semiconductor die stacks in accordance with embodiments of the present technology.

[0009] Figure 4 illustrates example process steps of making semiconductor die stacks in accordance with embodiments of the present technology.

[0010] Figure 5 is a block diagram schematically illustrating a system including a semiconductor device assembly configured in accordance with an embodiment of the present technology.

[0011] Figure 6 is a flowchart of a method of making a semiconductor die pair in accordance with embodiments of the present technology.

DETAILED DESCRIPTION

[0012] Specific details of several embodiments of semiconductor die stacks, and associated systems and methods are described below. The term "semiconductor device or die" generally refers to a solid-state device that includes one or more semiconductor materials. Examples of semiconductor devices include logic devices, memory devices, controllers, or microprocessors (e.g., central processing unit (CPU), graphics processing unit (GPU)), among others. Such semiconductor devices may include integrated circuits or components, data storage elements, processing components, and/or other features manufactured on semiconductor substrates. Further, the term "semiconductor device or die" can refer to a finished device or to an assembly or other structure at various stages of processing before becoming a finished functional device. 
Depending upon the context in which it is used, the term "substrate" can refer to a wafer-level substrate or to a singulated, die-level substrate. Also, a substrate may include a semiconductor wafer, a package support substrate, an interposer, a semiconductor device or die, or the like. A person having ordinary skill in the relevant art will recognize that suitable steps of the methods described herein can be performed at the wafer level or at the die level.

[0013] Further, unless the context indicates otherwise, structures disclosed herein can be formed using conventional semiconductor-manufacturing techniques. Materials can be deposited, for example, using chemical vapor deposition (CVD), physical vapor deposition (PVD), atomic layer deposition (ALD), spin coating, plating, and/or other suitable techniques. Similarly, materials can be removed, for example, using plasma etching, wet etching, chemical-mechanical planarization, or other suitable techniques. Some of the techniques may be combined with photolithography processes. A person skilled in the relevant art will also understand that the technology may have additional embodiments, and that the technology may be practiced without several of the details of the embodiments described herein with reference to Figures 1A-1D, 2A, 2B, and 3 through 6.

[0014] Certain semiconductor devices, e.g., a memory device, may include an area with an array of memory cells (which may also be referred to as an array, a memory array, an array region, an array portion, or the like) and another area with peripheral circuitry (which may also be referred to as a periphery, a peripheral region, a peripheral portion, or the like). The array of memory cells may include various types of memory cells, such as dynamic random-access memory (DRAM) cells, phase change memory (PCM) cells, flash memory cells (e.g., NAND cells, NOR cells), among others. 
The peripheral circuitry can be configured to perform various functions for the semiconductor device, including accessing the memory cells of the array. In some cases, the peripheral region may be referred to as a CMOS region (CMOS, CMOS portion, CMOS area, etc.) in view of complementary metal-oxide-semiconductor (CMOS) transistors included in the peripheral circuitry. Additionally, or alternatively, the peripheral region may be referred to as a logic region owing to the nature of digital logic functions that the peripheral circuitry performs. As such, the memory device may be regarded to have an array region and a CMOS region (or a peripheral/logic region), among others.

[0015] In general, a die size of a memory device may be primarily determined by the area of the array region and the area of the CMOS region. Accordingly, research and development efforts have been focused on reducing both areas - e.g., vertically stacking memory cells (e.g., as in the 3-dimensional (3D) NAND memory technology) to reduce the area of the array region, or CMOS transistor scaling to reduce the area of the CMOS region. Process steps associated with fabricating an array of memory cells, however, may have characteristics disparate from those used for fabricating CMOS circuitry. For example, temperatures of certain CMOS process steps may be higher than those used in memory array process steps (and may be higher than a memory array can withstand without damage). Additionally, or alternatively, defect mechanisms associated with the array of memory cells tend to be different from those associated with the CMOS circuitry.

[0016] As such, example embodiments of the present technology involve fabricating the CMOS region and the array region of the memory device as two separate semiconductor devices (or semiconductor dies) to optimize the fabrication processes of the CMOS circuitry and the memory cells independently of each other. 
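The footprint implication of splitting the device into two stacked dies (paragraphs [0015] and [0016] above) can be illustrated with a small back-of-the-envelope sketch. All areas below are hypothetical placeholders chosen for illustration, not values from this disclosure.

```python
# Hypothetical die areas in mm^2; placeholders, not values from this disclosure.
array_area = 60.0   # array region (memory cells)
cmos_area = 45.0    # CMOS/peripheral region

# Monolithic device: array and CMOS regions sit side by side on one die,
# so the die size is roughly the sum of the two region areas.
monolithic_footprint = array_area + cmos_area

# Stacked pair: the CMOS die sits on top of the array die, so the package
# footprint is governed by the larger die (here the array die, which also
# reserves a porch area for bond pads uncovered by the CMOS die).
porch_area = 5.0    # hypothetical area reserved for bond pads on the array die
stacked_footprint = max(array_area + porch_area, cmos_area)

print(monolithic_footprint, stacked_footprint)  # 105.0 65.0
```

Even with a porch added for bond pads, the stacked pair occupies only the larger die's footprint rather than the sum of both regions.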
Moreover, the two separate dies (e.g., an array die and a CMOS die) may be vertically combined (e.g., stacked to form a pair of semiconductor dies) such that the two (or more) separate dies, in combination, may function as a single device (e.g., one memory device). In some embodiments, front (e.g., active) surfaces of the two semiconductor dies can be arranged to face each other to form the pair such that a distance between the CMOS circuitry and the memory cells may be reduced. Moreover, the front surfaces of the two semiconductor dies may be conjoined to couple the CMOS circuitry with the memory cells of the array through conductive components (e.g., copper (Cu), Cu-containing alloy) at the interface between the array die and the CMOS die. The stack of semiconductor dies (i.e., a semiconductor die stack) may provide a smaller footprint and an improved performance (e.g., a reduced delay time owing to the reduced distance between the CMOS circuitry and the memory cells), when compared to a memory device having the CMOS circuitry and the memory cells laterally distributed.

[0017] Further, one of the dies (e.g., the array die) of the stack may be arranged to extend past the other die(s) (e.g., the CMOS die) to create a porch (e.g., an extended portion of the array die, uncovered by the CMOS die), where bond pads of the memory device (i.e., the semiconductor die stack) can be located. Such an arrangement creating the porch provides a gap (or a vertical space) between successive semiconductor die stacks disposed on top of one another (e.g., a stack of semiconductor die stacks). The gap, in turn, may facilitate bond wires making connections to the bond pads of the memory device located in the porch, to couple the bond pads with substrate bond pads of a support substrate that carries the stack of the semiconductor die stacks. 
In other words, one of the semiconductor dies of the stack (e.g., the CMOS die) can be configured to provide a space (e.g., the gap) to make wire-bond connections to each semiconductor die stack in the stack. In this manner, when the semiconductor die stacks are stacked on top of and in line with each other, wire bonds to each semiconductor die stack of the stack can be formed without increasing the overall footprint of the stack (e.g., avoiding the shingle-stacking configuration). Moreover, the wire bonding to individual semiconductor die stacks (e.g., individual memory devices) can provide a lower-cost alternative when compared to forming through-substrate vias (TSVs, which may also be referred to as through-silicon vias) for individual memory devices for stacking them on top of and in line with each other.

[0018] As used herein, the terms "front," "back," "vertical," "lateral," "down," "up," "upper," and "lower" can refer to relative directions or positions of features in the semiconductor device assemblies in view of the orientation shown in the Figures. For example, "upper" or "uppermost" can refer to a feature positioned closer to the top of a page than another feature. These terms, however, should be construed broadly to include semiconductor devices having other orientations. Unless stated otherwise, terms such as "first" and "second" are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. Moreover, although in the example embodiments herein semiconductor die stacks with two dies (e.g., semiconductor die pairs) are used to illustrate clearly the overall features and the principles of the present technology, the present technology is not limited thereto. For example, in some embodiments, a semiconductor die stack could include a single larger die carrying two or more smaller ones. 
Additionally, or alternatively, one or more of the smaller dies carried by the single larger die may include a stack of dies.

[0019] Figure 1A is an example schematic cross-sectional view of a semiconductor device 101 with a front side 102 and a back side 103. The semiconductor device 101 may include a substrate 104, an array region 105, and a CMOS (peripheral or logic) region 106. The array region 105 may include an array of memory cells 108 (DRAM cells, 3D NAND cells, NOR cells, or the like). The CMOS region 106 may include CMOS circuitry 113 (command and/or address decoders, column decoders, row decoders, sense amplifiers, or the like) configured to access the array of memory cells 108 (or a memory array 108).

[0020] The cross-sectional view of the semiconductor device 101 illustrates an issue related to positioning the array region 105 and the CMOS region 106 on a coplanar surface (e.g., the front side 102 of the semiconductor device 101). For example, signals propagating between the array region 105 and the CMOS region 106 (e.g., voltage and/or current traveling different distances between the array of memory cells 108 and the CMOS circuitry 113) may exhibit different delays for the semiconductor device 101 to handle. In this regard, the array region 105 includes near cells 109a (e.g., one or more memory cells located proximate to the CMOS region 106) and far cells 109b (e.g., one or more memory cells located relatively far from the CMOS region 106). Further, the CMOS region 106 includes near CMOS-components 114a (e.g., row decoders located proximate to the array region 105) and far CMOS-components 114b (e.g., row decoders located relatively far from the array region 105). The worst-case delay in the signal propagation may be between the far cells 109b and the far CMOS-components 114b, while the best-case delay may be between the near cells 109a and the near CMOS-components 114a. 
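The best-case/worst-case contrast just described can be sketched numerically with a simple distance-proportional delay model. The delay coefficient and routing distances below are assumed for illustration only; they are not values from this disclosure.

```python
# Simple distance-proportional delay model; all numbers are hypothetical.
DELAY_PER_MM = 50.0  # assumed propagation delay in ps per mm of routing

def delay_ps(distance_mm):
    """Propagation delay for a route of the given length."""
    return DELAY_PER_MM * distance_mm

# Coplanar layout: near cells sit ~1 mm from the near CMOS-components,
# far cells ~10 mm from the far CMOS-components (assumed distances).
best_case = delay_ps(1.0)    # near cells 109a <-> near CMOS-components 114a
worst_case = delay_ps(10.0)  # far cells 109b <-> far CMOS-components 114b
spread = worst_case - best_case

# Face-to-face stacking shortens the longest routes toward the vertical
# interconnect pitch, shrinking the spread the device must tolerate.
print(best_case, worst_case, spread)  # 50.0 500.0 450.0
```

The spread between worst and best case, rather than the absolute delay, is what the mitigation schemes described next aim to reduce.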
Various schemes may be devised to reduce the range in the signal propagation delays, such as coupling the near CMOS-components 114a with the far cells 109b and the far CMOS-components 114b with the near cells 109a, or partitioning the array region 105 into two or more sub-regions and/or partitioning the CMOS region 106 into two or more sub-regions such that the sub-regions of the array region 105 and the CMOS region 106 may be interspersed, among others.

[0021] Figure 1B is an example schematic cross-sectional view of a semiconductor die pair 130a (which may also be referred to as a semiconductor die stack) in accordance with an embodiment of the present technology. The semiconductor die pair 130a includes an array die 110 (with the array of memory cells 108 on a front side thereof) and a CMOS die 115 (with the CMOS circuitry 113 on a front side thereof) arranged on top of the array die 110, where the front sides of the array die 110 and the CMOS die 115 face each other at an interface 120. Further, the front surfaces of the CMOS die 115 and the array die 110 may be conjoined at the interface 120. The array die 110 and the CMOS die 115 include substrates 104a and 104b, respectively. In some embodiments, a thickness of the substrate 104a of the array die 110 may have been reduced when compared to a thickness of the substrate 104b - e.g., a portion of the substrate 104a has been removed from a back side 103a of the array die 110, by using grinding, polishing, etching, or other suitable process steps. 
As a result, the substrate 104a of the array die 110 includes a first thickness (T1a) that is less than a second thickness (T1b) of the substrate 104b of the CMOS die 115, which may be approximately the same as a thickness of the wafer substrate including the CMOS die 115, in some embodiments.

[0022] In some cases, the semiconductor die pair 130a may be regarded as the semiconductor device 101 separated into two pieces (one piece corresponding to the array die 110 and another piece corresponding to the CMOS die 115) and coupled together, face to face (the array of memory cells 108 facing the CMOS circuitry 113) - e.g., a back side 103a of the array die 110 and a back side 103b of the CMOS die 115 each forming outer surfaces of the semiconductor die pair 130a. As such, the array die 110 may not include circuitry accessing the array of memory cells 108 because the CMOS die 115 includes the CMOS circuitry 113 configured to access the array of memory cells 108. In this manner, the array die 110 and the CMOS die 115, in combination, may function as a fully functional semiconductor device - e.g., the semiconductor device 101 having the array region 105 and the CMOS region 106. Further, the array die 110 may include a first footprint greater than a second footprint of the CMOS die 115.

[0023] The array die 110 may include one or more first conductive components formed on the front side of the array die 110 - e.g., conductive components 220 surrounded by a first dielectric material 225 as depicted in Figure 2A. In some embodiments, the first conductive components may include copper (Cu) and/or a Cu-alloy. The first conductive components may be coupled with the array of memory cells 108. The layer including the first dielectric material 225 of the array die 110 may also include conductive traces to distribute (route, direct) electrical signals between the array of memory cells 108 and the first conductive components. 
Similarly, the CMOS die 115 may include one or more second conductive components formed on the front side of the CMOS die 115 - e.g., conductive components 230 surrounded by a second dielectric material 235 as depicted in Figure 2A. In some embodiments, the second conductive components may include copper and/or a Cu-alloy. The second conductive components may be coupled with the CMOS circuitry 113. The layer including the second dielectric material 235 of the CMOS die 115 may also include conductive traces to distribute (route, direct) electrical signals between the CMOS circuitry and the second conductive components. Further, the CMOS die 115 may be arranged over the array die 110 such that each of the second conductive components can be coupled with a corresponding one of the first conductive components. In this manner, the CMOS circuitry 113 may access the array of memory cells 108 through the second conductive components coupled with (e.g., directly bonded to) the first conductive components.

[0024] In some embodiments, each of the first conductive components of the array die 110 may be directly bonded to a corresponding one of the second conductive components of the CMOS die 115 at the interface 120. In addition, the first dielectric material may be directly bonded to the second dielectric material at the interface 120. Such a bonding scheme (e.g., a bonding interface including two or more materials (copper, nitride, and/or oxide) directly bonded together) may be referred to as a combinational bonding scheme. Further, the front surfaces of the CMOS die 115 and the array die 110 may be regarded as conjoined at the interface 120. 
In other embodiments, each of the first conductive components may be connected to a corresponding one of the second conductive components through a conductive pillar, a conductive bump, a conductive ball, or the like.

[0025] The semiconductor die pair 130a includes an edge 112a of the array die 110 that extends past a corresponding edge 117a of the CMOS die 115 such that a portion 125 of the front side of the array die 110 is exposed (e.g., uncovered by the CMOS die 115). Also, the portion 125 of the array die 110 may include one or more bond pads 145 of the semiconductor die pair 130a. Moreover, the semiconductor die pair 130a includes an edge 112b of the array die 110 that is in line with (e.g., flush with) a corresponding edge 117b of the CMOS die 115. In some embodiments, an area of the portion 125 of the array die 110 may be based on a quantity of bond pads 145 of the semiconductor die pair 130a - e.g., to accommodate the quantity of bond pads 145 within the area of the portion 125. In other embodiments, the CMOS die 115 may be arranged to couple with the array die 110 away from two or more edges of the array die 110 such that two or more portions of the array die 110 may be uncovered by the CMOS die 115. Such multiple uncovered portions 125 of the array die 110 may be advantageous for accommodating a large quantity of bond pads 145 of the semiconductor die pair 130a, as described in more detail with reference to Figure 3.

[0026] As depicted in Figure 1B, the array die 110 and the CMOS die 115 include a substrate 104a and a substrate 104b, respectively. In some cases, the substrate 104 (e.g., the substrate 104a, the substrate 104b, or both) may be polished from the back side to reduce an overall thickness (denoted as "T1" in Figure 1B) of the semiconductor die pair 130a. 
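The relationship between the area of the porch (portion 125) and the quantity of bond pads 145 it must accommodate can be sketched as a simple sizing check. The pad pitch, row depth, and edge length below are hypothetical values, not dimensions from this disclosure; the multi-row case loosely mirrors the multiple-porch option described above.

```python
import math

# Hypothetical bond-pad geometry in micrometers; not values from this disclosure.
PAD_PITCH_UM = 90.0       # center-to-center pad pitch along the die edge
PAD_ROW_DEPTH_UM = 120.0  # depth of one row of pads plus clearance

def porch_area_um2(num_pads, die_edge_um, rows=1):
    """Minimum porch area needed to place num_pads along a die edge."""
    pads_per_row = math.ceil(num_pads / rows)
    needed_length = pads_per_row * PAD_PITCH_UM
    if needed_length > die_edge_um:
        raise ValueError("pads do not fit along this edge; add rows or a second porch")
    return die_edge_um * rows * PAD_ROW_DEPTH_UM

# 64 pads in a single row along a 6 mm (6000 um) edge:
print(porch_area_um2(64, 6000.0))  # 720000.0
```

When the pad count outgrows a single edge, the check fails, which corresponds to uncovering additional portions of the array die for more bond pads.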
The substrate 104b of the CMOS die 115 may provide a space (gap) with respect to another semiconductor die (or semiconductor die stack) stacked on top of the CMOS die 115, which may facilitate forming of bond wires, as described in more detail with reference to Figure 1D. For example, the CMOS die 115 may include a thickness greater than a height to which bond wires attached to the bond pads 145 rise above the front side of the array die 110.

[0027] Figure 1C is an example schematic cross-sectional view of a semiconductor die pair 130b (which may also be referred to as a semiconductor die stack) in accordance with an embodiment of the present technology. The semiconductor die pair 130b may be an example of or include aspects of the semiconductor die pair 130a. The semiconductor die pair 130b may correspond to the semiconductor die pair 130a with the substrate 104a of the array die 110 removed - i.e., the array die 110 may be exclusive of a substrate. As such, the semiconductor die pair 130b includes the CMOS die 115 and the array of memory cells 108. In some embodiments, a support structure 135 may be bonded to (or otherwise attached to) the array of memory cells 108 to provide mechanical support in lieu of the substrate 104a. As a result of removing the substrate 104a (or replacing the substrate 104a with the support structure 135), an overall thickness (denoted as "T2" in Figure 1C) of the semiconductor die pair 130b may be less than the thickness (T1) of the semiconductor die pair 130a. 
Such a reduction in the thickness of the semiconductor die pair 130b may be advantageous if two or more semiconductor die pairs 130b are stacked on top of one another to reduce a height of the stack (hence a height of a package including the stack).[0028] Although in the foregoing examples, a memory die (e.g., a first semiconductor die) and a CMOS die (e.g., a second semiconductor die) are described and illustrated to form the semiconductor die pair to function as a single memory device, the present technology is not limited thereto. In other words, the present technology may be applied to any semiconductor devices having two or more functional levels and/or blocks that may be separated into corresponding semiconductor dies that each include one or more functional levels and/or blocks. For example, a CPU die may include an arithmetic logic region and a cache memory region, among other regions supporting various functions of the CPU. The present technology may facilitate separating the cache memory region into a separate semiconductor die (e.g., a cache memory die), and then combining the cache memory die with an arithmetic logic die (e.g., the CPU die less the cache memory region) such that the cache memory die and the arithmetic logic die can be conjoined together to form a semiconductor die stack that operates as a single CPU. In other examples, a GPU may be partitioned into two portions - e.g., a first portion including a graphics and compute array and a second portion including various peripheral circuitry, such as interface blocks, controlling circuit blocks, etc.
As such, a first semiconductor die (including the graphics and compute array) and a second semiconductor die (including various peripheral circuitry) may be generated separately such that the first and second semiconductor dies can be combined (e.g., stacked on top of one another, face-to-face) to form a semiconductor die stack that operates as a single GPU.[0029] Further, although in the foregoing examples, the memory die 110 is described and illustrated to be greater in size than the CMOS die 115 (e.g., the memory die 110 having a footprint greater than the footprint of the CMOS die 115) such that the memory die 110 can “carry” the CMOS die 115, and includes the bond pads of the semiconductor die pair 130, the present technology is not limited thereto. For example, the CMOS die may be greater than the array die - e.g., when the semiconductor device 101 is a controller with an embedded memory occupying a relatively small area of the semiconductor device 101. Accordingly, the CMOS die (including various functional blocks of the controller) may be greater in size than the memory die (including the embedded memory) such that the CMOS die may carry the memory die and include the bond pads for the semiconductor die pair.[0030] Figure 1D is an example schematic cross-sectional view of a semiconductor die assembly 170 including a stack of semiconductor die pairs (which may also be referred to as semiconductor die stacks) in accordance with an embodiment of the present technology. The semiconductor die assembly 170 includes two semiconductor die pairs (e.g., semiconductor die pair 130a-1, semiconductor die pair 130a-2) in the stack. The semiconductor die pairs 130a-1 and 130a-2 may be examples of or include aspects of the semiconductor die pair 130a described with reference to Figure 1B. The semiconductor die assembly 170 further includes a support substrate 150, to which the stack of semiconductor die pairs is attached.
In some embodiments, an adhesive layer may be added between two adjacent semiconductor die pairs to form the stack - e.g., the adhesive layer 140 between the semiconductor die pair 130a-1 and semiconductor die pair 130a-2. The support substrate 150 includes one or more substrate bond pads (e.g., substrate bond pads 155, one of which is shown). Moreover, the semiconductor die assembly 170 includes bond wires (e.g., bond wires 160a and 160b) that couple individual bond pads of the semiconductor die pairs (e.g., the bond pads 145a of the semiconductor die pair 130a-1, the bond pads 145b of the semiconductor die pair 130a-2) to corresponding substrate bond pads 155.[0031] The stack of semiconductor die pairs may be configured to include a space (denoted as “S” in Figure 1D) such that bond wires (e.g., bond wire 160a) can make connections to the bond pads of the semiconductor die pairs (e.g., bond pad 145a of the semiconductor die pair 130a-1) without having to interfere with a semiconductor die positioned above (e.g., the array die 110b of the semiconductor die pair 130a-2). In some embodiments, the space is configured to allow a wire-bonding head to reach the bond pads (e.g., bond pad 145a) without touching the back side of the array die 110b of the semiconductor die pair 130a-2. By way of example, the highest portion of the bond wire 160a is separate from the back side of the array die 110b of the semiconductor die pair 130a-2 by a distance D. In some cases, the CMOS die 115 may include a thickness (denoted as “T3”) greater than a height (denoted as “H”) to which bond wires 160 attached to the bond pads 145 (e.g., bond pad 145a) rise above the front side of the array die 110. In some cases, the space S may include a thickness of the adhesive layer 140, in addition to the thickness (e.g., T3) of the CMOS die 115a.
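The clearance relationship just described (the space S, equal to the CMOS die thickness plus the adhesive layer thickness, must exceed the height to which the bond wires rise above the array die's front side) can be sketched with simple arithmetic. The micron values below are hypothetical, chosen only to illustrate the inequality:

```python
# Clearance sketch for wire bonding a stacked die pair. The space S above
# the lower array die's bond pads is the CMOS die thickness plus the
# adhesive layer thickness; it must exceed the height to which the bond
# wire loops above the array die's front side. All micron values are
# hypothetical illustration numbers, not figures from this disclosure.

def wire_bond_clearance_um(cmos_thickness_um, adhesive_thickness_um,
                           wire_loop_height_um):
    """Return the remaining clearance (the distance D); a negative value
    means the wire would touch the die stacked above."""
    space_s = cmos_thickness_um + adhesive_thickness_um
    return space_s - wire_loop_height_um

# Example: a 50 um CMOS die plus 20 um of adhesive vs. a 60 um wire loop.
d = wire_bond_clearance_um(50, 20, 60)
print(d)      # 10 um of clearance remains
assert d > 0  # the pair above can be attached without fouling the wires
```

When the result is positive for every tier, electrical connections to each pair in the stack can be made by wire bonding alone, without TSVs.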
So long as the space S (a sum of T3 and a thickness of the adhesive layer 140) is greater than H, electrical connections to each semiconductor die pair (e.g., multiple semiconductor die pairs in a stack) can be made with a wire bonding technique, which may provide a low-cost alternative to TSVs implemented to transmit signals in vertically stacked semiconductor devices. [0032] Although the foregoing example depicted in Figure 1D includes two (2) semiconductor die pairs 130a, in other embodiments, the stack of semiconductor die pairs may include a greater quantity than two (e.g., four (4), six (6), eight (8), twelve (12), or even more). Moreover, although the stack of semiconductor die pairs in Figure 1D illustrates two (2) semiconductor die pairs 130a stacked in line (e.g., to minimize the footprint of the stack), in other embodiments, the stack of semiconductor die pairs may be formed in a shingled pattern (or stair-step pattern). Also, in some embodiments, one or more semiconductor die pairs 130a may be rotated by 90-degrees, 180-degrees, or 270-degrees with respect to each other such that the bond pads of the semiconductor die pairs 130a may be accessed more easily (e.g., by the bond wires).[0033] In some embodiments, a semiconductor die assembly may include a first pair of dies (e.g., the semiconductor die pair 130a-1) including a first die (e.g., the CMOS die 115a) attached to a second die (e.g., the array die 110a), where front surfaces of the first and second dies are conjoined (e.g., conjoined at the interface 120a), and the front surface of the second die includes a first extended portion (e.g., the extended portion 125a) uncovered by the first die, the first extended portion including a first set of bond pads (e.g., bond pads 145a, one of which is shown in Figure 1D).
Further, the semiconductor die assembly may include a second pair of dies (e.g., the semiconductor die pair 130a-2) carried by the first pair of semiconductor dies, the second pair including a third die (e.g., the CMOS die 115b) attached to a fourth die (e.g., the array die 110b), where front surfaces of the third and fourth dies are conjoined (e.g., conjoined at the interface 120b), and the front surface of the fourth die includes a second extended portion (e.g., the extended portion 125b) uncovered by the third die, the second extended portion including a second set of bond pads (e.g., bond pads 145b, one of which is shown in Figure 1D).[0034] In some embodiments, the semiconductor die assembly may include a support substrate (e.g., support substrate 150), to which a back side of the second die (e.g., the array die 110a) of the first pair is attached, the support substrate including a plurality of substrate bond pads (e.g., substrate bond pads 155, one of which is shown in Figure 1D). Further, the semiconductor die assembly may include a plurality of first bond wires (e.g., bond wires 160a, one of which is shown in Figure 1D) coupling individual bond pads of the first set (e.g., bond pad 145a) with corresponding substrate bond pads of the plurality (e.g., substrate bond pads 155), and a plurality of second bond wires (e.g., bond wires 160b, one of which is shown in Figure 1D) coupling individual bond pads of the second set (e.g., bond pad 145b) with corresponding substrate bond pads of the plurality (e.g., substrate bond pads 155).[0035] In some embodiments, the second die includes a first array of memory cells (e.g., the array of memory cells 108a), exclusive of circuitry configured to access the first array of memory cells, and the first die includes first peripheral circuitry (e.g., the CMOS circuitry 113a) configured to access the first array of memory cells of the second die.
In some embodiments, the front surfaces of the first and second dies each include a plurality of conductive components (e.g., conductive components 220 and 230 depicted in Figure 2A), and individual conductive components of the first die are conjoined with corresponding conductive components of the second die. Further, the peripheral circuitry of the first die (e.g., the CMOS circuitry 113a) may be configured to access the array of memory cells of the second die (e.g., the array of memory cells 108a) through one or more conjoined conductive components of the plurality.[0036] Similarly, the fourth die may include a second array of memory cells (e.g., the array of memory cells 108b), exclusive of circuitry configured to access the second array of memory cells, and the third die may include second peripheral circuitry (e.g., the CMOS circuitry 113b) configured to access the second array of memory cells of the fourth die. Further, the peripheral circuitry of the third die (e.g., the CMOS circuitry 113b) may be configured to access the array of memory cells of the fourth die (e.g., the array of memory cells 108b) through one or more conjoined conductive components (e.g., conductive components 220 and 230 depicted in Figure 2A) included in front surfaces of the third and fourth dies.[0037] In some embodiments, a footprint of the second pair overlaps the first set of bond pads (e.g., bond pad 145a), and a thickness of the first die (e.g., CMOS die 115a) is configured to provide a gap for the plurality of first bond wires (e.g., bond wire 160a) to be separate from a back side of the fourth die (e.g., array die 110b) by a distance. Further, the gap may include a thickness of an adhesive (e.g., adhesive layer 140) located between the back side of the fourth die and a back side of the first die. In some embodiments, the gap is configured to allow a wire-bonding head to reach the bond pads of the first set (e.g., bond pad 145a) without touching the back side of the fourth die.
Accordingly, a stack of semiconductor die pairs 130a may be formed (e.g., the semiconductor die pair 130a-2 attached to the semiconductor die pair 130a-1) prior to forming bond wires that couple the bond pads of the first set to corresponding substrate bond pads of the plurality (e.g., substrate bond pads 155).[0038] Figure 2A is an example schematic three-dimensional view of a semiconductor die stack 205 including a first semiconductor die 210 and a second semiconductor die 215. The semiconductor die stack 205 may be an example of the semiconductor die pair 130 (e.g., the semiconductor die pair 130a, the semiconductor die pair 130b) or include aspects of the semiconductor die pair 130 described with reference to Figures 1A through 1D. Further, the first semiconductor die 210 and the second semiconductor die 215 may be examples of or include aspects of the memory die 110 and the CMOS die 115, respectively. The first semiconductor die 210 includes a front surface (or a front side) 211 and a back surface (or a back side) 212. Similarly, the second semiconductor die 215 includes a front surface (or a front side) 216 and a back surface (or a back side) 217. The first semiconductor die 210 includes an extended portion 255 (e.g., the portion 125 described with reference to Figures 1A-1D) and a set of bond pads 240 (e.g., the bond pads 145 described with reference to Figures 1A-1D) in the extended portion 255.[0039] In some embodiments, the first semiconductor die 210 may include an array of memory cells, exclusive of circuitry configured to access the array of memory cells. The second semiconductor die 215 may include CMOS circuitry configured to access the array of memory cells of the first semiconductor die 210 - e.g., through one or more conductive components configured to couple the array of memory cells with the CMOS circuitry.
In some embodiments, the first semiconductor die 210 includes one or more first conductive components 220 on the front side 211 of the first semiconductor die 210 (further details of the first conductive components 220 are illustrated in an enlarged schematic cross-sectional diagram 206), where the first conductive components 220 are coupled with the array of memory cells of the first semiconductor die 210. Also, the second semiconductor die 215 may include one or more second conductive components 230 on the front side 216 of the second semiconductor die 215 (further details of the second conductive components 230 are illustrated in the diagram 206), where the second conductive components 230 are coupled with the CMOS circuitry of the second semiconductor die 215. Moreover, the second semiconductor die 215 is arranged over the first semiconductor die 210 such that each of the second conductive components 230 is directly bonded to a corresponding one of the first conductive components 220.[0040] Referring to the diagram 206, a first dielectric material 225 surrounding each of the first conductive components 220 may be directly bonded to a second dielectric material 235 surrounding each of the second conductive components 230, in some embodiments. Such a bonding configuration (e.g., an interface between the front side 211 of the first semiconductor die 210 and the front side 216 of the second semiconductor die 215 including a direct bonding interface 245 between the first and second conductive components 220 and 230, as well as a direct bonding interface 250 between the first and second dielectric materials 225 and 235) may be referred to as a combinational bonding configuration.
In some embodiments, the first and second dielectric materials 225 and 235 may include additional conductive features (e.g., metallic components and traces, including copper, Cu-alloy, tungsten, aluminum, or the like) to distribute (e.g., route, direct) electrical signals from the array of memory cells and the CMOS circuitry to the first conductive components 220 and the second conductive components 230, respectively.[0041] Further, an edge of the first semiconductor die 210 may extend past a corresponding edge of the second semiconductor die 215 such that a portion (e.g., the portion 255) of the front side 211 of the first semiconductor die 210 is exposed, where the portion includes the set of bond pads 240 (e.g., bond pads 145). The first semiconductor die 210 further includes conductive traces 241 connecting the bond pads 240 to the first conductive components 220 that are coupled with the second conductive components 230. In some embodiments, the CMOS circuitry of the second semiconductor die 215 may access the array of memory cells of the first semiconductor die 210 through the second conductive components 230 directly bonded to the first conductive components 220.[0042] In some embodiments, the first semiconductor die 210 may be exclusive of a semiconductor substrate as described in the semiconductor die pair 130b with reference to Figure 1C - i.e., the first semiconductor die may include an array of memory cells (e.g., array of memory cells 108), which may be attached to a support structure (e.g., support structure 135). In some embodiments, the second semiconductor die 215 includes a thickness greater than a height to which bond wires attached to the set of bond pads 240 rise above the front side 211 of the first semiconductor die 210. In some embodiments, the first semiconductor die 210 includes a first footprint greater than a second footprint of the second semiconductor die 215. 
In some embodiments, a back side 212 of the first semiconductor die 210 is attached to a support substrate (e.g., support substrate 150, support substrate 260 depicted in Figure 2B) including a plurality of substrate bond pads (e.g., substrate bond pads 155). Further, a plurality of bond wires (e.g., bond wires 160) may couple individual bond pads 240 with corresponding substrate bond pads.[0043] Figure 2B is an example schematic three-dimensional view of a semiconductor die assembly 275 including a stack of semiconductor die stacks (e.g., semiconductor die pair 130, semiconductor die stack 205) attached to a support substrate 260 (e.g., support substrate 150). The semiconductor die assembly 275 may be an example of or include aspects of the semiconductor die assembly 170. The stack depicted in Figure 2B includes four (4) semiconductor die stacks 205 (e.g., semiconductor die stacks 205a through 205d), but in other embodiments, the stack may include a smaller quantity of semiconductor die stacks (e.g., three (3), two (2)) or a greater quantity of semiconductor die stacks (e.g., six (6), twelve (12), or even greater). As described herein, the individual semiconductor die stack 205 including the array die and CMOS die stacked, face-to-face, may provide a smaller footprint and an improved performance for a memory device (e.g., when compared to the semiconductor device 101 described with reference to Figure 1).
Further, the semiconductor die stack 205 including the extended portion 255 with bond pads (e.g., bond pads 145, bond pads 240) may facilitate stacking two or more semiconductor die stacks 205 in-line to reduce the footprint of the stack (e.g., when compared to a stack with a shingled stacking pattern), as well as forming bond wires (e.g., bond wires 160, bond wires 270) to individual semiconductor die stacks 205 in the stack - e.g., coupling individual bond pads 240 with corresponding substrate bond pads 265.[0044] Figure 3 illustrates various plan view diagrams 300 of semiconductor die stacks in accordance with embodiments of the present technology. Each diagram includes a first semiconductor die 310 (e.g., array die 110, first semiconductor die 210), a second semiconductor die 315 (e.g., CMOS die 115, second semiconductor die 215), one or more extended (or exposed) portions 355 (e.g., portion 125, portion 255), a set of bond pads 340 (e.g., bond pads 145, bond pads 240), and a set of bond wires 360 (e.g., bond wires 160, bond wires 270). The diagrams 300 depict various options to form the extended (exposed) portions in the semiconductor die stack based on several factors, such as die sizes of the first and second semiconductor dies, a quantity of bond pads of the semiconductor die stack, a footprint of the semiconductor die stack, shapes of the first semiconductor die 310 and/or the second semiconductor die 315, among others.[0045] For example, the diagram 300a may correspond to the stacking configuration of the semiconductor die pair 130 and/or the semiconductor die stack 205 - e.g., creating a single extended portion 355. In some embodiments, the diagram 300c may be advantageous to form a semiconductor die stack if a quantity of bond pads is significantly greater than that of the semiconductor die stack depicted in the diagram 300a - e.g., creating four (4) extended portions 355a-d with an increased footprint of the semiconductor die stack.
In yet another example, the diagram 300b may be advantageous to form a semiconductor die stack if a quantity of bond pads is moderately greater than that of the semiconductor die stack depicted in the diagram 300a - e.g., creating two (2) extended portions 355a and 355b.[0046] Diagrams 300 provide various examples of the stacking configuration to form a semiconductor die stack for illustration purposes, and the present technology is not limited thereto. For example, the second semiconductor die 315 may be rotated (e.g., by 90-degrees) with respect to the first semiconductor die 310 to provide a wider area for the exposed portion 355 - i.e., one or more edges of the second semiconductor die 315 may lie outside of corresponding edges of the first semiconductor die 310. In other words, the semiconductor die stack may have some parts of the second semiconductor die 315 overhanging beyond a boundary of the first semiconductor die 310. In some cases, such an arrangement may be based on trade-offs between a quantity of bond pads of the semiconductor die stack and a footprint (e.g., overall size of the semiconductor die stack), among others. Further, although the bond pads 340 are depicted in a single column (and/or row), the bond pads 340 may be arranged in multiple columns (and/or rows). In some cases, each column (or row) of the multiple columns (or rows) of bond pads may be shifted relative to each other to provide easier access of bond wires to the bond pads.[0047] Figure 4 illustrates diagrams 400a-c describing example process steps of making semiconductor die stacks in accordance with embodiments of the present technology. Diagram 400a shows a first semiconductor wafer 411 including a plurality of first die fields 412 on its front side and a second semiconductor wafer 416 including a plurality of second die fields 417 on its front side. 
In some embodiments, each of the first die fields 412 corresponds to a first die 410 (e.g., array die 110) that includes an array of memory cells (e.g., array of memory cells 108), exclusive of circuitry configured to access the array of memory cells. Further, each of the second die fields 417 may have the same area as each of the first die fields 412. Each of the second die fields 417 may correspond to a second die 415 (e.g., CMOS die 115) and a segment 435 (or a portion) of the second semiconductor wafer adjacent to the second die 415. In some embodiments, the second die 415 includes CMOS circuitry configured to access the array of memory cells of the first die 410. Further, each of the front surfaces of the first and second dies may include a plurality of conductive components - e.g., the first conductive components 220, the second conductive components 230, respectively.[0048] In some embodiments, the second semiconductor wafer 416 may be flipped and brought over the first semiconductor wafer 411 such that front sides of the first and second wafers 411 and 416 may face each other. Subsequently, the first semiconductor wafer 411 including the first die 410 may be arranged over the second semiconductor wafer 416 including the second die 415 (or the second semiconductor wafer 416 including the second die 415 may be arranged over the first semiconductor wafer 411 including the first die 410) such that each of the conductive components of the first die 410 is aligned to the corresponding one of the conductive components of the second die 415. Subsequently, the first semiconductor wafer 411 may be bonded to the second semiconductor wafer 416 to directly bond each of the conductive components of the first die 410 to the corresponding one of the conductive components of the second die 415. 
Additionally, a first dielectric material surrounding each of the conductive components of the first die 410 may be directly bonded to a second dielectric material surrounding the corresponding one of the conductive components of the second die 415 (e.g., forming the combinational bonding configuration described with reference to Figure 2A). The diagram 400b depicts a single die field after the bonding is complete, where the single die field includes the first die 410 bonded to the second die 415.[0049] In some embodiments, the segment 435 of the second semiconductor wafer 416 may be removed, after bonding the first semiconductor wafer 411 and the second semiconductor wafer 416, to expose a set of bond pads 440 (e.g., bond pads 145, bond pads 240, bond pads 340) of the first die 410. In other words, removing the segment 435 creates an extended portion 455 (e.g., extended portion 125, extended portion 255, extended portion 355) of the first die 410, where the set of bond pads 440 is located as depicted in the diagram 400c. In some embodiments, removing the segment 435 may include severing the segment 435 from the second die 415 by using a dicing process - e.g., separating the segment 435 from the second die 415 by dicing through a border 425 between the second die 415 and the segment 435, in conjunction with utilizing other dicing lanes configured to singulate individual die fields 412 and 417 of the first and second semiconductor wafers 411 and 416, respectively. Thereafter, the separated segment 435 may be removed from the second semiconductor wafer 416 (bonded to the first wafer 411), e.g., by using a cleaning process. In this manner, the first and second dies 410 and 415 are conjoined at the wafer level to form a semiconductor die stack, where a front surface of the first die 410 is in direct contact with a front surface of the second die 415, and where the front surface of the first die 410 includes a first extended portion 455 uncovered by the second die 415.
The extended portion 455 includes the set of bond pads 440.[0050] In some embodiments, removing the segment 435 may include severing the segment 435 (or otherwise removing or detaching the segment 435) from the second die 415 by using an etch process. In some cases, a photolithography process may cover the second die 415 with a photoresist while exposing a section 430 corresponding to the segment 435. Subsequently, the etch process can remove the segment 435 uncovered by the photoresist. In other cases, a photolithography process may expose sections of the second semiconductor wafer 416 including the border 425. Subsequently, the etch process may separate the segment 435 from the second die 415 by creating a trench, where a width of the trench includes the border 425 and a depth of the trench approximately corresponds to a thickness of the second semiconductor wafer 416. Thereafter, a cleaning process may be used to remove the separated segment 435 - in conjunction with utilizing other dicing lanes configured to singulate individual die fields 412 and 417 of the first and second semiconductor wafers 411 and 416, respectively.[0051] In other embodiments, individual second die fields 417 of the second semiconductor wafer 416 may correspond to the second die 415 - e.g., without the segment 435 adjacent to the second die 415. Hence, an area of the second die field 417 may be less than that of the first die field 412 that corresponds to the first die 410. An individual first die 410 may be singulated from the first semiconductor wafer 411, and an individual second die 415 may be singulated from the second semiconductor wafer 416. Thereafter, the second die 415 may be arranged over the first die 410 such that each of the conductive components of the first die 410 is aligned to the corresponding one of the conductive components of the second die 415.
Further, the second die 415 may be bonded to the first die 410 to directly bond each of the conductive components of the first die 410 to the corresponding one of the conductive components of the second die 415 to form a semiconductor die stack, as shown in the diagram 400c.[0052] Figure 5 is a block diagram schematically illustrating a system 570 including a semiconductor device assembly configured in accordance with an embodiment of the present technology. The semiconductor die stack (e.g., semiconductor die stacks 130a, 130b, 205) described with reference to Figures 1B, 1C, and 2A may be included in a semiconductor device assembly 500 (e.g., semiconductor device assembly 170, semiconductor device assembly 275) described with reference to Figures 1D and 2B. The semiconductor device assembly 500 can be incorporated into any of a myriad of larger and/or more complex systems, a representative example of which is the system 570 shown schematically in Figure 5. The system 570 can include a semiconductor device assembly 500, a power source 572, a driver 574, a processor 576, and/or other subsystems or components 578.[0053] The semiconductor die stack included in the semiconductor device assembly 500 can have features generally similar to the semiconductor die stack 205, in which the array die and CMOS die are stacked face-to-face, providing a smaller footprint and improved performance (e.g., when compared to the semiconductor device 101) for the semiconductor device assembly 500. Further, the semiconductor die stack included in the semiconductor device assembly 500 may include the extended portion with bond pads (e.g., bond pads 145, bond pads 240) that can facilitate stacking two or more semiconductor die stacks in-line to reduce the footprint of the stack (e.g., when compared to a stack with a shingled stacking pattern), as well as forming bond wires to individual semiconductor die stacks in the stack (e.g., a lower-cost alternative to forming TSVs).
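The footprint advantage of in-line stacking over a shingled (stair-step) pattern, noted above, can be put in rough numbers. The die dimensions and shingle offset below are hypothetical illustration values, not figures from this disclosure:

```python
# Footprint sketch: stacking die pairs in line keeps the stack footprint
# equal to one die, whereas a shingled (stair-step) pattern grows the
# footprint by one offset per additional die pair. Dimensions (mm) are
# hypothetical illustration values.

def inline_footprint_mm2(die_length_mm, die_width_mm, num_pairs):
    # In-line: every pair lands directly on the one below, so the
    # footprint is independent of num_pairs.
    return die_length_mm * die_width_mm

def shingled_footprint_mm2(die_length_mm, die_width_mm, num_pairs,
                           offset_mm):
    # Shingled: each additional pair is shifted by a fixed offset so its
    # pads stay exposed, lengthening the overall footprint.
    return (die_length_mm + (num_pairs - 1) * offset_mm) * die_width_mm

print(inline_footprint_mm2(12, 10, 4))        # 120 mm^2 regardless of count
print(shingled_footprint_mm2(12, 10, 4, 2))   # 180 mm^2 for a 4-high shingle
```

The extended-portion bond pads are what make the in-line case workable, since each tier's pads remain reachable without shifting the dies.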
The resulting system 570 can perform any of a wide variety of functions, such as memory storage, data processing, and/or other suitable functions. Accordingly, representative systems 570 can include, without limitation, hand-held devices (e.g., mobile phones, tablets, digital readers, and digital audio players), computers, and appliances. Components of the system 570 may be housed in a single unit or distributed over multiple, interconnected units (e.g., through a communications network). The components of the system 570 can also include remote devices and any of a wide variety of computer readable media.[0054] Figure 6 is a flowchart 600 of a method of making a semiconductor die pair in accordance with embodiments of the present technology. The flowchart 600 may include aspects of methods as described with reference to Figure 4.[0055] The method includes providing a first die including an array of memory cells, exclusive of circuitry configured to access the array of memory cells (box 610). The method further includes providing a second die including CMOS circuitry configured to access the array of memory cells of the first die (box 615). The method further includes conjoining the first and second dies to form a first pair of dies, where a front surface of the first die is in direct contact with a front surface of the second die, where the front surface of the first die includes a first extended portion uncovered by the second die, the first extended portion including a first set of bond pads, and the front surfaces of the first and second dies each include a plurality of conductive components, each of the conductive components of the first die directly bonded to a corresponding one of the conductive components of the second die, and a first dielectric material surrounding each of the conductive components of the first die directly bonded to a second dielectric material surrounding the corresponding one of the conductive components of the second die (box 620). 
The method further includes attaching the first pair of dies to a support substrate including a plurality of substrate bond pads (box 625).[0056] In some embodiments, conjoining the first and second dies includes arranging a first semiconductor wafer including the first die over a second semiconductor wafer including the second die such that each of the conductive components of the first die is aligned to the corresponding one of the conductive components of the second die, bonding the first semiconductor wafer to the second semiconductor wafer to directly bond each of the conductive components of the first die to the corresponding one of the conductive components of the second die, and removing a portion of the second semiconductor wafer adjacent to the second die, the portion corresponding to the first extended portion of the first die, after bonding the first semiconductor wafer to the second semiconductor wafer. In some embodiments, removing the portion of the second semiconductor wafer includes severing the portion from the second die by using an etching process, a dicing process, or both, and removing the severed portion from the second semiconductor wafer. 
In some embodiments, conjoining the first and second dies includes arranging the second die over the first die such that each of the conductive components of the first die is aligned to the corresponding one of the conductive components of the second die, and bonding the second die to the first die to directly bond each of the conductive components of the first die to the corresponding one of the conductive components of the second die.[0057] In some embodiments, the method may further include forming a plurality of first bond wires to couple individual bond pads of the first set with corresponding substrate bond pads of the plurality, and attaching, after forming the plurality of first bond wires, a second pair of dies to the first pair of dies, the second pair including a third die conjoined with a fourth die, where front surfaces of the third and fourth dies are in direct contact with each other, and the front surface of the third die includes a second extended portion uncovered by the fourth die, the second extended portion including a second set of bond pads, and the front surfaces of the third and fourth dies each include a plurality of conductive components, each of the conductive components of the third die directly bonded to a corresponding one of the conductive components of the fourth die, and a third dielectric material surrounding each of the conductive components of the third die directly bonded to a fourth dielectric material surrounding the corresponding one of the conductive components of the fourth die.[0058] In some embodiments, the method may further include attaching a second pair of dies to the first pair of dies, the second pair including a third die conjoined with a fourth die, where front surfaces of the third and fourth dies are in direct contact with each other, and the front surface of the third die includes a second extended portion uncovered by the fourth die, the second extended portion including a second set of bond pads, and the 
front surfaces of the third and fourth dies each include a plurality of conductive components, each of the conductive components of the third die directly bonded to a corresponding one of the conductive components of the fourth die, and a third dielectric material surrounding each of the conductive components of the third die directly bonded to a fourth dielectric material surrounding the corresponding one of the conductive components of the fourth die. The method may further include forming, after attaching the second pair of dies to the first pair of dies, a plurality of first bond wires to couple individual bond pads of the first set with corresponding substrate bond pads of the plurality.[0059] It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, embodiments from two or more of the methods may be combined. From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the disclosure. In addition, while in the illustrated embodiments certain features or components have been shown as having certain arrangements or configurations, other arrangements and configurations are possible. Moreover, certain aspects of the present technology described in the context of particular embodiments may also be combined or eliminated in other embodiments.[0060] The devices discussed herein, including a semiconductor device, may be formed on a semiconductor substrate or die, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some cases, the substrate is a semiconductor wafer. 
In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.[0061] As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”[0062] From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Rather, in the foregoing description, numerous specific details are discussed to provide a thorough and enabling description for embodiments of the present technology. One skilled in the relevant art, however, will recognize that the disclosure can be practiced without one or more of the specific details. 
In other instances, well-known structures or operations often associated with memory systems and devices are not shown, or are not described in detail, to avoid obscuring other aspects of the technology. In general, it should be understood that various other devices, systems, and methods in addition to those specific embodiments disclosed herein may be within the scope of the present technology.
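The inclusive reading of "or" defined in paragraph [0061] — that "at least one of A, B, or C" covers every non-empty combination of A, B, and C — can be checked mechanically. Below is a minimal illustrative sketch; the function and variable names are ours, not from the disclosure:

```python
from itertools import chain, combinations

def at_least_one_of(present, items):
    # Inclusive "or": satisfied when any one or more of the listed items is present.
    return any(x in present for x in items)

items = ("A", "B", "C")
# Every non-empty subset -- A, B, C, AB, AC, BC, and ABC -- satisfies the phrase.
non_empty_subsets = list(
    chain.from_iterable(combinations(items, r) for r in range(1, len(items) + 1))
)
assert len(non_empty_subsets) == 7
assert all(at_least_one_of(set(s), items) for s in non_empty_subsets)
# Only the empty set fails it.
assert not at_least_one_of(set(), items)
```

This mirrors the claim-construction convention only; it is not part of any claimed method.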
A uniform layer of non-conductive material, e.g., epoxy, is screen printed onto the backside of an integrated circuit wafer to a required thickness, and then heated until it is hard cured (C-stage). The integrated circuit wafer having the hard cured coating is then sawn apart to separate the individual integrated circuit dice. A non-conductive adhesive is dispensed onto mating faces of die attach paddles of leadframes. The dice are placed into the non-conductive adhesive and then the die and die attach paddle assembly are heated to hard cure the adhesive between the mating faces of the die and die attach paddle. This provides long term electrical isolation of the integrated circuit die from the die attach paddle, and effectively eliminates silver migration from the die attach paddle which causes conductive paths to form that increase unwanted leakage currents in the die and ultimately cause failure during operation thereof.
CLAIMS What is claimed is: 1. A method for attaching semiconductor dice to leadframe die attach paddles, said method comprising the steps of: applying a non-conductive material to a back face of a semiconductor integrated circuit wafer, the semiconductor integrated circuit wafer comprising a plurality of integrated circuit dice; heating the semiconductor integrated circuit wafer and the non-conductive material thereon until the non-conductive material is hard cured; mounting the semiconductor integrated circuit wafer on a wafer carrier, wherein the hard cured non-conductive material is between the semiconductor integrated circuit wafer and the wafer carrier; separating each of the plurality of integrated circuit dice from each other; dispensing non-conductive adhesive on faces of die attach paddles of a plurality of leadframes; placing the plurality of integrated circuit dice into the non-conductive adhesive on the faces of respective ones of the die attach paddles; heating the plurality of integrated circuit dice and die attach paddles until the non-conductive adhesive is hard cured; attaching bond pads of the plurality of integrated circuit dice to respective conductive leads of the plurality of leadframes with bond wires; and separating each of the plurality of leadframes into integrated circuits. 2. The method according to claim 1, further comprising the step of encapsulating the integrated circuits to produce packaged integrated circuits. 3. The method according to claim 1, wherein the non-conductive material is applied at a thickness from about one milli-inch to about three milli-inches. 4. The method according to claim 1, wherein the non-conductive material is applied at a thickness from about 1.5 milli-inches to about 2.5 milli-inches. 5. The method according to claim 1, wherein the non-conductive material is applied at a thickness of about two milli-inches. 6. The method according to claim 1, wherein the non-conductive material is epoxy. 7. 
The method according to claim 1, wherein the non-conductive adhesive is epoxy. 8. The method according to claim 1, wherein the step of applying the non-conductive material is done by screen printing. 9. The method according to claim 8, wherein the screen printed non-conductive material is applied in a pattern. 10. The method according to claim 1, wherein the step of heating the semiconductor integrated circuit wafer and the non-conductive material comprises the step of heating at about 150 degrees Centigrade for about two hours. 11. The method according to claim 1, wherein the step of heating the plurality of integrated circuit dice and die attach paddles until the non-conductive adhesive is hard cured comprises the step of heating at about 175 degrees Centigrade for about one hour.
SEMICONDUCTOR DIE ATTACHMENT METHOD USING NON-CONDUCTIVE SCREEN PRINT AND DISPENSE ADHESIVE TECHNICAL FIELD The present disclosure relates to attachment of semiconductor integrated circuit dice to leadframes, and more particularly, to attachment of the semiconductor dice to respective die attach paddles of the leadframes by using non-conductive screen print and dispense adhesive that also electrically isolates the semiconductor die from the leadframe die attach paddle. BACKGROUND During fabrication of semiconductor integrated circuits, a semiconductor integrated circuit die (backside thereof) is attached to a die attach paddle of a leadframe. Then bond pads of the semiconductor integrated circuit die are attached to conductors of the leadframe with bond wires. Typically, the backside of the semiconductor integrated circuit die is attached to the die attach paddle of the leadframe with an adhesive such as epoxy. Over time, conductive paths may form between the semiconductor integrated circuit die and the die paddle of the leadframe. These conductive paths may be created by migration of silver from the die attach paddle of the leadframe to the backside of the integrated circuit die. Eventually this silver migration creates a connection between the backside of the semiconductor integrated circuit die and the die paddle, thus causing an electrical short therebetween. Silver molecules from the die attach paddle can migrate over time when there is an electrical potential difference between the semiconductor integrated circuit die and the die attach paddle. This electrical potential difference is present when operating and/or standby power is applied to the semiconductor integrated circuit die. Silver migration is particularly active when the semiconductor integrated circuit die is drawing low quiescent current, e.g., during a standby mode of operation (sleep mode). 
Running the semiconductor integrated circuit die at a low quiescent current is necessary so that the semiconductor integrated circuit die may be brought from the standby (sleep) mode to an operating mode. Silver migration creates electrical paths between the semiconductor integrated circuit die and the die attach paddle and thereby causes high quiescent current in the semiconductor integrated circuit die, to the point eventually where the circuits of the semiconductor integrated circuit die fail. Various physical attachment configurations have been used to electrically isolate the semiconductor integrated circuit die from the die attach paddle. One such attachment configuration uses non-conductive epoxy to achieve physical attachment and electrical isolation. However, this has proven over time to be ineffective and unreliable in preventing high quiescent currents due to conductive paths between the semiconductor integrated circuit die and the die attach paddle. Another similar but more effective approach is to screen print one or two layers of non-conductive epoxy onto the backside of an integrated circuit wafer. The integrated circuit wafer comprises a plurality of semiconductor integrated circuit dice. The screen printed epoxy is partially cured (B-stage) and is thereafter ready for singulation into individual dice. The individual integrated circuit dice are then attached to respective die attach paddles of a plurality of leadframes by heating the die attach paddles and then scrubbing the non-conductive B-stage epoxy coated dice onto the heated die attach paddles. After the dice have been attached to the respective die attach paddles, they are heated until the non-conductive B-stage epoxy is hard cured (C-stage). This form of attachment does solve the silver migration problem long term, but the useful storage life of the B-stage epoxy integrated circuit wafer is only about two to four weeks. 
SUMMARY Therefore, a need exists for a better way of coating the backside of a semiconductor integrated circuit wafer with a non-conductive material so that the coating has a long storage (shelf) life, aids in the attachment of singulated semiconductor integrated circuit dice to die attach paddles of leadframes, and prevents or substantially reduces silver migration from the die attach paddle during operation of the semiconductor integrated circuit die. According to teachings of this disclosure, a uniform layer of non-conductive material, e.g., epoxy, is screen printed onto the backside of an integrated circuit wafer to a required thickness, and then heated until it is hard cured (C-stage). The screen printing may apply the non-conductive material in a pattern onto the backside of the integrated circuit wafer. The integrated circuit wafer, having the hard cured coating thereon, is then sawn apart to separate each individual integrated circuit die from each other in the wafer (singulated) in preparation for attachment to respective die attach paddles of leadframes. The required thickness may be from about one to three milli-inches, preferably from about 1.5 to 2.5 milli-inches, and most preferably about two milli-inches. During the process of attaching the die to a die attach paddle, a non-conductive adhesive, e.g., epoxy, is dispensed onto the mating face of the die attach paddle. Then the face of the die having the hard cured coating is placed into the recently dispensed non-conductive adhesive on the mating face of the die attach paddle. Next the die and die attach paddle assembly are heated to hard cure the adhesive between the mating faces of the die and die attach paddle. After the adhesive has been hard cured, the integrated circuit bond pads may be wire bonded to the leadframe leads (conductive leads used for the finished integrated circuit external circuit connections). 
This provides long term electrical isolation of the integrated circuit die from the die attach paddle of the leadframe, and effectively eliminates silver migration from the die attach paddle, which would otherwise cause conductive paths to form that increase unwanted leakage currents in the die and ultimately cause failure during operation thereof. According to a specific example embodiment of this disclosure, a method for attaching semiconductor dice to leadframe die attach paddles comprises the steps of: applying a non-conductive material to a back face of a semiconductor integrated circuit wafer, the semiconductor integrated circuit wafer comprising a plurality of integrated circuit dice; heating the semiconductor integrated circuit wafer and the non-conductive material thereon until the non-conductive material is hard cured; mounting the semiconductor integrated circuit wafer on a wafer carrier, wherein the hard cured non-conductive material is between the semiconductor integrated circuit wafer and the wafer carrier; separating each of the plurality of integrated circuit dice from each other; dispensing non-conductive adhesive on faces of die attach paddles of a plurality of leadframes; placing the plurality of integrated circuit dice into the non-conductive adhesive on the faces of respective ones of the die attach paddles; heating the plurality of integrated circuit dice and die attach paddles until the non-conductive adhesive is hard cured; attaching bond pads of the plurality of integrated circuit dice to respective conductive leads of the plurality of leadframes with bond wires; and separating each of the plurality of leadframes into integrated circuits. BRIEF DESCRIPTION OF THE DRAWINGS A more complete understanding of the present disclosure may be acquired by referring to the following description taken in conjunction with the accompanying drawings wherein: Figure 1 is a schematic process diagram for semiconductor die attachment using non-conductive screen print 
and dispense adhesive, according to a specific example embodiment of this disclosure; and Figure 2 is a schematic flow diagram of the semiconductor die attachment process shown in Figure 1, according to a specific example embodiment of this disclosure. While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein, but on the contrary, this disclosure is to cover all modifications and equivalents as defined by the appended claims. DETAILED DESCRIPTION Referring now to the drawing, the details of specific example embodiments are schematically illustrated. Like elements in the drawings will be represented by like numbers, and similar elements will be represented by like numbers with a different lower case letter suffix. Referring to Figure 1, depicted is a schematic process diagram for semiconductor die attachment using non-conductive screen print and dispense adhesive, according to a specific example embodiment of this disclosure. A semiconductor integrated circuit wafer 102 is placed into a screen printing fixture 106, and a non-conductive material 108 is spread over the backside (non-circuit pad side) of the wafer 102 with a screen printing spreader 104. Use of the screen printing fixture 106 is well known in the art of coating surfaces. With the screen printing fixture 106, a desired pattern of non-conductive material 108 may be applied to the backside face of the wafer 102. 
The non-conductive material 108 may be, for example but is not limited to, epoxy and the like. After screen printing the backside face of the wafer 102 with the non-conductive material 108, the coated wafer 102 is placed into a curing oven 110 for hard curing of the non-conductive material 108. Hard curing of the non-conductive material 108 may comprise, for example but is not limited to, heating at about 150 degrees Centigrade for about two hours. After the non-conductive material 108 has been hard cured onto the backside face of the wafer 102, the wafer 102 may be stored for extended periods of time before further processing, or the hard cured coated wafer 102 may be attached to a process substrate 112, e.g., mounting tape, so that the wafer 102 can be singulated (separated by cutting) into a plurality of semiconductor integrated circuit dice 124. A leadframe carrier 114 comprises a plurality of leadframes 116. Each of the plurality of leadframes 116 comprises a die attach paddle 116a and leadframe conductors 116b. Leadframes 116 are well known to one having ordinary skill in the art of integrated circuit fabrication. Adhesive 118 is dispensed onto a face of each of the die attach paddles 116a. Then the singulated (separated) dice 124 are placed onto the adhesive 118 on each of the respective die attach paddles 116a. The assembly of the leadframe carrier 114, plurality of leadframes 116, adhesive 118, and coated dice 124 is placed into a curing oven 120 so that the adhesive 118 is thereby hard cured. Hard curing of the non-conductive adhesive 118 may comprise, for example but is not limited to, heating at about 175 degrees Centigrade for about one hour. The adhesive material 118 is substantially non-conductive and may be, for example but is not limited to, epoxy and the like. 
After the adhesive material 118 has been hard cured and cooled, the leadframes 116 are removed from the curing oven 120 and the bond pads 126 on each of the dice 124 are connected to respective ones of the leadframe conductors 116b with bond wires 122. Wire bonding of integrated circuit bond pads 126 to leadframe conductors 116b is well known to one having ordinary skill in the art of integrated circuit fabrication. After wire bonding of each of the die bond pads 126 to respective ones of the leadframe conductors 116b, the finished integrated circuit assemblies are separated and may be encapsulated into packaged integrated circuits (not shown). Referring to Figure 2, depicted is a schematic flow diagram of the semiconductor die attachment process shown in Figure 1, according to a specific example embodiment of this disclosure. In step 202, the backside of a semiconductor integrated circuit wafer (102) is coated with non-conductive material (108). In step 204, the non-conductive material (108) coated on the backside of the semiconductor integrated circuit wafer (102) is hard cured. In step 206, the coated wafer (102) is mounted onto a carrier (112), and then in step 208 the plurality of integrated circuit dice (124) comprising the wafer (102) are separated apart from each other (singulated). In step 210, adhesive (118) is dispensed onto faces of die attach paddles (116a) of leadframes (116) on a leadframe carrier (114). Then in step 212, the singulated integrated circuit dice (124) are placed into the adhesive (118) on the faces of the respective die attach paddles (116a). In step 214, the adhesive (118) is heated to hard cure it (C-stage). In step 216, bond pads (126) of the dice (124) are wire bonded to respective ones of the conductors (e.g., leadfingers) (116b) of the leadframes (116). In step 218, the leadframes (116) are separated from the leadframe carrier (114) to become finished integrated circuits. 
These finished integrated circuits may be encapsulated in step 220 to become packaged integrated circuits, as is well known in the electronics arts. While embodiments of this disclosure have been depicted, described, and are defined by reference to example embodiments of the disclosure, such references do not imply a limitation on the disclosure, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent art and having the benefit of this disclosure. The depicted and described embodiments of this disclosure are examples only, and are not exhaustive of the scope of the disclosure.
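The Figure 2 process flow and the disclosed cure and thickness parameters can be summarized as data. The step numbers, temperatures, times, and thickness values below are taken from the disclosure; the data structure and conversion helper themselves are an illustrative sketch, not part of the claimed method:

```python
# Steps 202-220 of the Figure 2 flow diagram, with disclosed parameters noted.
PROCESS_FLOW = [
    (202, "Screen print non-conductive material on wafer backside"),
    (204, "Hard cure coating (about 150 C for about two hours, C-stage)"),
    (206, "Mount coated wafer on carrier (e.g., mounting tape)"),
    (208, "Singulate wafer into individual integrated circuit dice"),
    (210, "Dispense non-conductive adhesive on die attach paddles"),
    (212, "Place dice into adhesive on respective paddles"),
    (214, "Hard cure adhesive (about 175 C for about one hour, C-stage)"),
    (216, "Wire bond die pads to leadframe conductors"),
    (218, "Separate leadframes into finished integrated circuits"),
    (220, "Encapsulate into packaged integrated circuits (optional)"),
]

MIL_TO_UM = 25.4  # one milli-inch (mil) equals 25.4 micrometres

def coating_thickness_um(mils):
    """Convert the disclosed coating thickness from milli-inches to microns."""
    return mils * MIL_TO_UM

# Disclosed range: about 1 to 3 mils, most preferably about 2 mils (50.8 um).
assert abs(coating_thickness_um(2.0) - 50.8) < 1e-9
assert coating_thickness_um(1.0) < coating_thickness_um(3.0)
```

Encoding the flow this way makes the ordering constraint explicit: curing (204) precedes singulation (208), which is what gives the coated wafer its long shelf life relative to the B-stage approach.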
A circuit includes first through fifth transistors. The first transistor (MN1) has a first control input and first and second current terminals. The second transistor (MN2) has a second control input and third and fourth current terminals. The third transistor (MP1) has a third control input and fifth and sixth current terminals. The third control input is coupled to the third current terminal, and the fifth current terminal is coupled to a supply voltage node. The fourth transistor (MP2) has a fourth control input and seventh and eighth current terminals. The fourth control input is coupled to the first current terminal, and the seventh current terminal coupled to the supply voltage node. The fifth transistor (MN5) has a fifth control input and ninth and tenth current terminals. The fifth control input is coupled to the first control input, and the tenth current terminal coupled to the second current terminal.
CLAIMSWhat is claimed is:1. A circuit, comprising:a first transistor having a first control input and first and second current terminals;a second transistor having a second control input and third and fourth current terminals; a third transistor having a third control input and fifth and sixth current terminals, the third control input coupled to the third current terminal, and the fifth current terminal coupled to a supply voltage node;a fourth transistor having a fourth control input and seventh and eighth current terminals, the fourth control input coupled to the first current terminal, and the seventh current terminal coupled to the supply voltage node; anda fifth transistor having a fifth control input and ninth and tenth current terminals, the fifth control input coupled to the first control input, and the tenth current terminal coupled to the second current terminal.2. The circuit of claim 1, further comprising a sixth transistor having a sixth control input and eleventh and twelfth current terminals, the sixth control input coupled to the second control input, and the twelfth current terminal coupled to the fourth current terminal.3. The circuit of claim 2, further comprising a seventh transistor having a seventh control input and thirteenth and fourteenth current terminals, the thirteenth current terminal coupled to the second and fourth current terminals.4. The circuit of claim 1, further comprising a sixth transistor having a sixth control input and eleventh and twelfth current terminals, the eleventh current terminal coupled to the second and fourth current terminals.5. The circuit of claim 1, further comprising a sixth transistor having a sixth control input and eleventh and twelfth current terminals, the eleventh current terminal coupled to the sixth current terminal, and the twelfth current terminal coupled to the first current terminal.6. 
The circuit of claim 5, further comprising a seventh transistor having a seventh control input and thirteenth and fourteenth current terminals, the thirteenth current terminal coupled to the eighth current terminal, and the fourteenth current terminal coupled to the third current terminal.7. The circuit of claim 6, wherein the first and second transistors comprise n-type metal oxide semiconductor field effect transistors, and the sixth and seventh transistors comprise p-type metal oxide semiconductor field effect transistors.8. The circuit of claim 1, further comprising:a sixth transistor having a sixth control input and eleventh and twelfth current terminals, the sixth control input coupled to the second control input, and the twelfth current terminal coupled to the fourth current terminal;a seventh transistor having a seventh control input and thirteenth and fourteenth current terminals, the thirteenth current terminal coupled to the sixth current terminal, and the fourteenth current terminal coupled to the first current terminal; andan eighth transistor having an eighth control input and fifteenth and sixteenth current terminals, the fifteenth current terminal coupled to the eighth current terminal, and the sixteenth current terminal coupled to the third current terminal.9. 
A circuit, comprising:a first transistor having a first control input and first and second current terminals;a second transistor having a second control input and third and fourth current terminals; a third transistor having a third control input and fifth and sixth current terminals, the third control input coupled to the third current terminal, and the fifth current terminal coupled to a supply voltage node;a fourth transistor having a fourth control input and seventh and eighth current terminals, the fourth control input coupled to the first current terminal, and the seventh current terminal coupled to the supply voltage node; anda fifth transistor having a fifth control input and ninth and tenth current terminals, the fifth control input coupled to an output node of the circuit; anda sixth transistor having a sixth control input and eleventh and twelfth current terminals, the sixth control input coupled to the first current terminal, the eleventh current terminal coupled to the tenth current terminal, and the twelfth current terminal coupled to the third current terminal.10. The circuit of claim 9, further comprising:a seventh transistor having a seventh control input and thirteenth and fourteenth current terminals, the fifth control input configured to receive a control signal that is a logical inverse of a signal on the output node of the circuit; andan eighth transistor having an eighth control input and fifteenth and sixteenth current terminals, the eighth control input coupled to the third current terminal, the fifteenth current terminal coupled to the fourteenth current terminal, and the sixteenth current terminal coupled to the first current terminal.11. The circuit of claim 10, further comprising a ninth transistor having a ninth control input and seventeenth and eighteenth current terminals, the seventeenth current terminal coupled to the second and fourth current terminals.12. 
The circuit of claim 9, further comprising a seventh transistor having a seventh control input and thirteenth and fourteenth current terminals, the thirteenth current terminal coupled to the second and fourth current terminals.13. The circuit of claim 9, further comprising a seventh transistor having a seventh control input and thirteenth and fourteenth current terminals, the thirteenth current terminal coupled to the sixth current terminal, and the fourteenth current terminal coupled to the first current terminal.14. The circuit of claim 13, further comprising an eighth transistor having an eighth control input and fifteenth and sixteenth current terminals, the fifteenth current terminal coupled to the eighth current terminal, and the sixteenth current terminal coupled to the third current terminal.15. The circuit of claim 14, wherein the first and second transistors comprise n-type metal oxide semiconductor field effect transistors, and the seventh and eighth transistors comprise p-type metal oxide semiconductor field effect transistors.16. A circuit, comprising:a first transistor having a first control input and first and second current terminals;a second transistor having a second control input and third and fourth current terminals; a third transistor having a third control input and fifth and sixth current terminals, the third control input coupled to the third current terminal, and the fifth current terminal coupled to a supply voltage node;a fourth transistor having a fourth control input and seventh and eighth current terminals, the fourth control input coupled to the first current terminal, and the seventh current terminal coupled to the supply voltage node; anda fifth transistor having a fifth control input and ninth and tenth current terminals, the ninth current terminal coupled to the first and second current terminals.17. 
The circuit of claim 16, further comprising a sixth transistor having a sixth control input and eleventh and twelfth current terminals, the eleventh current terminal coupled to the sixth current terminal, and the twelfth current terminal coupled to the first current terminal.18. The circuit of claim 17, further comprising a seventh transistor having a seventh control input and thirteenth and fourteenth current terminals, the thirteenth current terminal coupled to the eighth current terminal, and the fourteenth current terminal coupled to the third current terminal.19. The circuit of claim 16, further comprising:a sixth transistor having a sixth control input and eleventh and twelfth current terminals, the sixth control input coupled to the first control input, the eleventh current terminal coupled to the sixth current terminal, and the twelfth current terminal coupled to the second current terminal; anda seventh transistor having a seventh control input and thirteenth and fourteenth current terminals, the seventh control input coupled to the second control input, the thirteenth current terminal coupled to the fourth current terminal, and the fourteenth current terminal coupled to the fourth current terminal.20. 
The circuit of claim 16, further comprising:
a sixth transistor having a sixth control input and eleventh and twelfth current terminals, the sixth control input coupled to an output node of the circuit; and
a seventh transistor having a seventh control input and thirteenth and fourteenth current terminals, the seventh control input coupled to the first current terminal, the thirteenth current terminal coupled to the twelfth current terminal, and the fourteenth current terminal coupled to the third current terminal;
an eighth transistor having an eighth control input and fifteenth and sixteenth current terminals, the eighth control input configured to receive a control signal that is a logical inverse of a signal on an output node of the circuit; and
a ninth transistor having a ninth control input and seventeenth and eighteenth current terminals, the ninth control input coupled to the third current terminal, the seventeenth current terminal coupled to the sixteenth current terminal, and the eighteenth current terminal coupled to the first current terminal.
A VOLTAGE LEVEL SHIFTER
BACKGROUND
[0001] A voltage level shifter (or simply “level shifter”) is a circuit that translates a signal from one voltage domain to another voltage domain. The voltage of the output signal may be larger or smaller than the voltage of the input signal. A level shifter can be used, for example, when an input signal to a circuit has been generated in accordance with a particular voltage domain which differs from the supply voltage domain of the circuit itself. An n-type metal oxide semiconductor field effect transistor (NMOS) often has its source connected to the ground potential. As such, turning the NMOS device on simply requires a gate voltage in excess of the threshold voltage for the transistor, and the NMOS device is turned off with a gate voltage below the threshold, closer to ground. A p-type metal oxide semiconductor field effect transistor (PMOS) often has its source connected to the supply voltage. As such, turning the PMOS device off requires a gate voltage closer to the supply voltage (i.e., within the transistor's threshold voltage of the supply voltage). In a level shifter, the voltage levels to turn an NMOS device on and off, thus, will be different than the voltage levels to turn on and off a PMOS device.
SUMMARY
[0002] In one example, a circuit includes first through fifth transistors. The first transistor has a first control input and first and second current terminals. The second transistor has a second control input and third and fourth current terminals. The third transistor has a third control input and fifth and sixth current terminals. The third control input is coupled to the third current terminal, and the fifth current terminal is coupled to a supply voltage node. The fourth transistor has a fourth control input and seventh and eighth current terminals. The fourth control input is coupled to the first current terminal, and the seventh current terminal is coupled to the supply voltage node.
The fifth transistor has a fifth control input and ninth and tenth current terminals. The fifth control input is coupled to the first control input, and the tenth current terminal is coupled to the second current terminal.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] For a detailed description of various examples, reference will now be made to the accompanying drawings in which:
[0004] FIG. 1 illustrates an example of a level shifter.
[0005] FIG. 2 illustrates another example of a level shifter.
[0006] FIG. 3 illustrates another example of a level shifter.
[0007] FIG. 4 illustrates another example of a level shifter.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0008] FIG. 1 shows an example of a high to low level shifter 100, which translates the incoming signal from a voltage domain that is higher than the supply voltage domain. The level shifter 100 includes NMOS transistors MN1, MN2, MN3, and MN4, PMOS transistors MP1 and MP2, and inverters 102, 104, and 106. Because the input signal is in a higher voltage domain than the voltage domain of the output signal (OUT_HV2), MN1 and MN2 are high voltage transistors for reliability reasons, and thus the threshold voltage of MN1 and MN2 is higher than for MN3, MN4, MP1, and MP2. The sources of MP1 and MP2 are connected to the supply voltage node 110 (VDDHV2). The drain of MP1 is connected to the drain of MN3 at node N1, and the drain of MP2 is connected to the drain of MN4 at node N2. The source of MN3 is connected to the drain of MN1, and the source of MN4 is connected to the drain of MN2. The sources of MN1 and MN2 are connected to the ground node 115. The gate of MP1 is connected to N2, and the gate of MP2 is connected to N1. The gates of MN3 and MN4 are connected together and receive an enable (EN1) input signal. When EN1 is asserted high, MN3 and MN4 are both on; otherwise, with EN1 being low, MN3 and MN4 are both off, and the level shifter is disabled. The gate of MN1 is configured to receive an input signal IN_HV1.
Inverter 102 inverts IN_HV1 to drive the gate of MN2. As such, only one of MN1 and MN2 is on at any point in time. Series connected inverters 104 and 106 are connected to N2, and the output of inverter 106 provides the output signal OUT_HV2 from the level shifter 100.
[0009] When IN_HV1 is high, OUT_HV2 also is high, and when IN_HV1 is low, OUT_HV2 also is low. When high, however, IN_HV1 is at a different voltage than OUT_HV2. The voltage level of OUT_HV2 is generally lower than the voltage level of IN_HV1 (although in some conditions, OUT_HV2 is higher than IN_HV1). VDDHV2 is the supply voltage for the level shifter 100, and dictates the voltage level of OUT_HV2. When IN_HV1 is logic high, MN1 is on and MN2 is off. With EN1 asserted high, both MN3 and MN4 are on. With MN1 and MN3 being on, N1 is pulled low to ground. As the voltage on N1 drives the gate of MP2, the gate-to-source voltage (VGS) of MP2 is sufficiently high to turn on MP2. With MP2 on, N2 is pulled high to VDDHV2, and OUT_HV2 is thus also VDDHV2. Conversely, when IN_HV1 is logic low, MN1 is off and MN2 is on. With MN2 and MN4 being on, N2 is pulled low to ground, and thus OUT_HV2 also is low. As the voltage on N2 drives the gate of MP1, the VGS of MP1 is sufficiently high to turn on MP1. With MP1 on, N1 is pulled high which, in turn, turns off MP2.
[0010] The NMOS devices MN1 and MN2 must be “strong” enough to cause the drains of MP1 and MP2 to discharge when IN_HV1 transitions between low and high. For example, if IN_HV1 is currently low, as explained above, MN2, MN4, and MP1 are on. In this state N1 is pulled high toward VDDHV2. During the transition of IN_HV1 from low to high, MN1 turns on and the charge on the drain of MP1 should discharge to ground through MN3 and MN1. For MP1's drain to discharge, the drain current (I1) through MN1 should be larger than the drain current (I2) through MP1.
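The switching behavior described above can be summarized with a small behavioral model. The Python sketch below is purely illustrative (it is not part of the patent): it maps each IN_HV1 logic level to the resulting OUT_HV2 level, following the transistor states walked through in paragraph [0009].

```python
def level_shifter_100(in_hv1: bool, en1: bool, vddhv2: float) -> float:
    """Behavioral model of level shifter 100: return the OUT_HV2 voltage."""
    if not en1:
        # EN1 low: MN3 and MN4 are off, and the level shifter is disabled.
        return float("nan")
    if in_hv1:
        # MN1 on -> N1 pulled low -> MP2 on -> N2 pulled high to VDDHV2.
        n2 = vddhv2
    else:
        # MN2 on -> N2 pulled low -> MP1 on -> N1 pulled high, MP2 off.
        n2 = 0.0
    # OUT_HV2 follows N2 through the two series inverters 104 and 106.
    return n2
```

As the model shows, OUT_HV2 tracks the logic level of IN_HV1 but swings between ground and VDDHV2 rather than the input domain.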
I1 is the sum of I2 and the discharge current from the drain of MP1 (the source-to-drain of MP1 represents a capacitance that is charged when MN1 is off, and then discharges when MN1 is turned on).
[0011] The drain current through a MOS transistor is a function, at least in part, of its VGS and its size (size referring to the ratio of the transistor's channel width (W) to the channel length (L)). Under normal operating conditions, the VGS of MN1 and MN2 is higher than the VGS of MP1, and can easily pull down the voltage on the drains of MP1 and MP2. Under conditions when the level shifter is enabled for lower voltage values of VDDHV1 (close to the threshold voltage of MN1 and MN2), MN1 and MN2 are much weaker when their respective inputs become logic high (than with VDDHV1 at higher voltages). In this latter condition (low value of VDDHV1), because VDDHV1 is smaller than VDDHV2, when IN_HV1 transitions from low to high, the VGS of MP1 is larger than the VGS of MN1. Thus, to discharge the drain of MP1, the size of MN1 must be substantially larger than the size of MP1 so that the drain current of MN1 will be larger than the drain current of MP1, which in turn will cause N1 to discharge. This problem is thus addressed in the example level shifter 100 of FIG. 1 by making MN1 larger than MP1. The same problem exists on the right-hand side of the level shifter 100 for a high to low transition of IN_HV1, which causes MN2 to turn on in an attempt to discharge the drain of MP2. MN1 and MN2 in this design are larger than MP1 and MP2. There is thus a size penalty with the level shifter 100 of FIG. 1. Further, the leakage current and average switching current are quite large as well.
[0012] FIG. 2 shows an example of a level shifter 200 that addresses the aforementioned problems. The level shifter 200 includes NMOS transistors MN1, MN2, and MN3A, PMOS transistors MP1, MP2, MP3, and MP4, and inverters 202, 204, and 206.
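The sizing argument in paragraph [0011] can be illustrated with the textbook first-order square-law MOSFET model. Note that the model itself, the transconductance factor k, and all parameter values below are illustrative assumptions, not values from the patent.

```python
def drain_current(k: float, w_over_l: float, vgs: float, vt: float) -> float:
    """First-order saturation drain current: I_D = (k/2)(W/L)(VGS - Vt)^2."""
    vov = vgs - vt  # overdrive voltage
    return 0.5 * k * w_over_l * vov * vov if vov > 0 else 0.0

# Hypothetical operating point: VDDHV1 barely above the NMOS threshold,
# while the PMOS sees a large VGS from the higher VDDHV2 rail.
i_mn1 = drain_current(k=1.0, w_over_l=1.0, vgs=0.8, vt=0.6)   # weak NMOS
i_mp1 = drain_current(k=1.0, w_over_l=1.0, vgs=1.8, vt=0.4)   # strong PMOS
assert i_mn1 < i_mp1  # equal sizes: MN1 cannot win the drive fight

# Enlarging MN1 (larger W/L) restores I1 > I2, so N1 can discharge.
i_mn1_big = drain_current(k=1.0, w_over_l=60.0, vgs=0.8, vt=0.6)
assert i_mn1_big > i_mp1
```

The large W/L ratio needed to overcome the PMOS current at a low VDDHV1 is exactly the size penalty the text attributes to the FIG. 1 design.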
The sources of MP1 and MP2 are connected to the supply voltage node 110 (VDDHV2). The drain of MP1 is connected to the source of MP3 at intermediate node int3, and the drain of MP2 is connected to the source of MP4 at intermediate node int4. The drain of MP3 is connected to the drain of MN1 at intermediate node int1, and the drain of MP4 is connected to the drain of MN2 at intermediate node int2. The sources of MN1 and MN2 are connected to the drain of MN3A. The source of MN3A is connected to the ground node 115, and an enable signal EN2 is provided to the gate of MN3A to enable operation of the level shifter 200. EN2 being high (e.g., larger than the threshold voltage of MN3A) causes MN3A to be on, and EN2 being low (e.g., ground potential) causes MN3A to be off. The level shifter 200 is enabled with EN2 asserted high, and disabled otherwise.
[0013] The gate of MP1 is connected to int2, and the gate of MP2 is connected to int1. The gates of MP3 and MN1 are connected together and receive IN_HV1. Inverter 202 inverts IN_HV1 to drive the gates of MP4 and MN2, which are also connected together. Series connected inverters 204 and 206 are connected to int2, and the output of inverter 206 provides the output signal OUT_HV2 from the level shifter 200.
[0014] When IN_HV1 is high, OUT_HV2 also is high, and vice versa. When high, however, IN_HV1 is at a different voltage than OUT_HV2. The voltage level of OUT_HV2 may be higher or lower than the voltage level of IN_HV1. VDDHV2 is the supply voltage for the level shifter 200, and dictates the voltage level of OUT_HV2. When IN_HV1 is logic high, MN1 is on, and MN2 and MP3 are off. With MN1 being on, int1 is pulled low to ground. As the voltage on int1 drives the gate of MP2, the VGS of MP2 is sufficiently high to turn on MP2. With MP2 on, the source voltage of MP4 increases thereby causing MP4 to turn on. As a result, int2 is pulled high to VDDHV2, and OUT_HV2 is thus also VDDHV2. Conversely, when IN_HV1 is logic low, MN2 is on.
With MN2 being on, int2 is pulled low to ground, and thus OUT_HV2 also is low. As the voltage on int2 drives the gate of MP1, the VGS of MP1 is sufficiently high to turn on MP1. With MP1 on, MP3 also turns on, and int1 is pulled high which, in turn, turns off MP2.
[0015] MP3 and MP4 in the example of FIG. 2 are used to isolate the input NMOS transistors MN1 and MN2 from the cross coupled PMOS transistors MP1 and MP2. When IN_HV1 is 0, MN1 is off, and MP1 and MP3 are on. As IN_HV1 transitions from low to high, MN1 turns on and MP3 turns off. As such, MN1 need only sink enough current to discharge the drain of MP3. With MP3 otherwise off, no current flows through MP3 from MP1. In contrast to the level shifter 100 of FIG. 1, MP3 and MP4 reduce or completely eliminate the drive fight between MN1 and MP1 and between MN2 and MP2.
[0016] Instead of having MN3 and MN4 in FIG. 1 being the enable transistors, in FIG. 2, that feature has been implemented with a single tail transistor MN3A connected between the sources of MN1 and MN2 and the ground node 115. Using a single tail transistor MN3A (versus two transistors MN3 and MN4 in FIG. 1) provides an area benefit, as well as improved common-mode noise performance, as the level shifter 200 is fully differential.
[0017] In the example of FIG. 2, while MN2 turns on (when IN_HV1 transitions from high to low), MP4 turns off. At that point, MP4 does not actively pull intermediate node int4 low. This issue is addressed by having the input to inverter 204 connected to intermediate node int2 instead of intermediate node int4. However, the rising transition of OUT_HV2 will be slower if intermediate node int2 is used rather than intermediate node int4.
[0018] FIG. 3 shows an example of a level shifter 300, which includes NMOS transistors MN1, MN2, MN3A, MN5, and MN6, PMOS transistors MP1, MP2, MP3, and MP4, and inverters 302, 304, and 306. The sources of MP1 and MP2 are connected to the supply voltage node 110 (VDDHV2).
The drain of MP1 is connected to the source of MP3 at intermediate node int3, and the drain of MP2 is connected to the source of MP4 at intermediate node int4. The drain of MP3 is connected to the drain of MN1 at intermediate node int1, and the drain of MP4 is connected to the drain of MN2 at intermediate node int2. The sources of MN1 and MN2 are connected to the drain of MN3A. The source of MN3A is connected to the ground node 115, and an enable signal EN2 is provided to the gate of MN3A to enable operation of the level shifter 300. As described above, EN2 being high causes MN3A to be on, and EN2 being low causes MN3A to be off. The level shifter 300 is enabled with EN2 asserted high, and disabled otherwise.
[0019] Intermediate node int4 is connected to the input of inverter 304, and the output of inverter 304 is connected to the input of inverter 306. The output of inverter 306 provides the output signal OUT_HV2 from the level shifter 300.
[0020] As was the case for FIG. 2, MP3 and MP4 in the example of FIG. 3 are used to isolate the input NMOS transistors MN1 and MN2 from the cross coupled PMOS transistors MP1 and MP2. When IN_HV1 is 0, MN1 is off, and MP1 and MP3 are on. As IN_HV1 transitions from low to high, MN1 turns on and MP3 turns off. As such, MN1 need only sink enough current to discharge the drain of MP3. With MP3 otherwise off, no current flows through MP3 from MP1. In contrast to the level shifter 100 of FIG. 1, MP3 and MP4 reduce or completely eliminate the drive fight between MN1 and MP1 and between MN2 and MP2.
[0021] The drain of MN5 connects to the drain of MP1 and to the source of MP3 at int3. The source of MN5 connects to the source of MN1. The drain of MN6 connects to the drain of MP2 and to the source of MP4 at int4. The source of MN6 connects to the source of MN2. As explained above regarding FIG. 2, intermediate node int4 is not actively pulled low when IN_HV1 transitions from high to low thereby turning off MP4.
This issue was addressed in the example level shifter 200 of FIG. 2 by having the series connected inverters 204 and 206 connected to intermediate node int2 instead of intermediate node int4, but resulting in a lower slew rate for OUT_HV2 when making a low to high transition. MN5 and MN6 in FIG. 3 solve this problem. MN6 turns on when MN2 is turned on and MP4 is turned off. As such, the charge on intermediate node int4 is discharged through MN6 to ground thereby quickly pulling the voltage on intermediate node int4 low. The same action occurs when MN1 is turned on and MP3 is turned off (MN5 is also turned on thereby quickly discharging intermediate node int3). As a result of quickly discharging the intermediate node int4, the voltage on the source of MP4 is quickly pulled low which, in turn, causes MP4 to turn off sooner during the high to low transition of IN_HV1 than would have been the case in the example of FIG. 2. Similarly, with MN5 turning on during the low to high transition of IN_HV1, MP3 is caused to turn off sooner during that transition than would have otherwise been the case in the example of FIG. 2. Thus, the advantages of adding MN5 and MN6 are twofold. First, discharge paths for int4 and int3 are provided. Second, isolation is improved for MN1 and MP1, and for MN2 and MP2.
[0022] FIG. 4 shows an example of a level shifter 400, which includes NMOS transistors MN1, MN2, and MN3A, PMOS transistors MP1, MP2, MP3, MP4, MP5, MP6, MP7, and MP8, and inverters 402, 404, and 406. The sources of MP1 and MP2 are connected to the supply voltage node 110 (VDDHV2). The drain of MP1 is connected to the source of MP3, and the drain of MP2 is connected to the source of MP4. The drain of MP3 is connected to the drain of MN1 at intermediate node int1, and the drain of MP4 is connected to the drain of MN2 at intermediate node int2. The sources of MN1 and MN2 are connected to the drain of MN3A.
The source of MN3A is connected to the ground node 115, and an enable signal EN2 is provided to the gate of MN3A to enable operation of the level shifter 400. As described above, EN2 being high causes MN3A to be on, and EN2 being low causes MN3A to be off. The level shifter 400 is enabled with EN2 asserted high, and disabled otherwise.
[0023] Intermediate node int2 is connected to the input of inverter 404, and the output of inverter 404 is connected to the input of inverter 406 at intermediate node int5. The output of inverter 406 provides the output signal OUT_HV2 from the level shifter 400. The gate of MP5 is connected to intermediate node int5. The gate of MP6 is connected to intermediate node int2. The gate of MP7 is connected to the output node of the level shifter 400 (OUT_HV2). The gate of MP8 is connected to intermediate node int1.
[0024] As was the case for FIG. 2, MP3 and MP4 in the example of FIG. 4 are used to isolate the input NMOS transistors MN1 and MN2 from the cross coupled PMOS transistors MP1 and MP2. When IN_HV1 is 0, MN1 is off, and MP1 and MP3 are on. As IN_HV1 transitions from low to high, MN1 turns on and MP3 turns off. As such, MN1 need only sink enough current to discharge the drain of MP3. With MP3 otherwise off, no current flows through MP3 from MP1. In contrast to the level shifter 100 of FIG. 1, MP3 and MP4 reduce or completely eliminate the drive fight between MN1 and MP1 and between MN2 and MP2.
[0025] MP7 and MP8 form a pull-up stack of transistors that are operative to quickly pull int2 (and thus OUT_HV2) from ground to VDDHV2. MP7 and MP8 assist the pull-up functionality of MP2 and MP4 in this regard. Similarly, MP5 and MP6 also form a pull-up stack of transistors that are operative to quickly pull int1 from ground to VDDHV2. MP5 and MP6 assist the pull-up functionality of MP1 and MP3 in this regard. The operation of MP7 and MP8 will now be described when IN_HV1 transitions from low to high.
The same or similar explanation also applies to MP5 and MP6 when IN_HV1 transitions from high to low.
[0026] When IN_HV1 is low, MN2, MP3, and MP1 are on. With MN2 being on, int2 is low, and through inverters 404 and 406, OUT_HV2 also is low. Because OUT_HV2 is used as the gate voltage to MP7, MP7 is on. However, MP8 is off because intermediate node int1 is high through MP1 and MP3, both of which are on.
[0027] During the transition of IN_HV1 from low to high, once IN_HV1 reaches the threshold voltage of MN1, MN1 turns on thereby pulling intermediate node int1 low. With int1 being low, MP2 and MP8 are both turned on. MP4 also turns on. At this point, intermediate node int2 begins to charge up via two transistor stacks. One transistor stack comprises MP2 and MP4. The other transistor stack comprises MP7 and MP8. Thus, MP7 and MP8 help to quickly increase the voltage on intermediate node int2 (and thus OUT_HV2) from ground towards VDDHV2.
[0028] As OUT_HV2 begins to increase, once OUT_HV2 reaches one transistor threshold voltage from VDDHV2, the VGS of MP7 falls below its threshold voltage and MP7 turns off, thereby effectively disabling the transistor stack of MP7/MP8. Thus, the rising transition of OUT_HV2 is improved (i.e., its slew rate increases) due to the action of MP7 and MP8 during a portion of the transition phase of IN_HV1. The same explanation is applicable to MP5 and MP6 when IN_HV1 transitions from high to low.
[0029] In this description, the term “couple” or “couples” means either an indirect or direct wired or wireless connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections.
The recitation “based on” means “based at least in part on.” Therefore, if X is based on Y, X may be a function of Y and any number of other factors.
[0030] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
Embodiments of systems, methods, and apparatuses for monitoring address conflicts are described. In some embodiments, an apparatus includes execution circuitry to execute instructions; a plurality of registers, coupled to the execution circuitry, to store data; and performance monitoring circuitry to perform address conflict counting by at least determining address conflicts between an executing instruction and previously executed instructions and counting each instance of a conflict.
We claim:
1. An apparatus comprising:
execution means to execute instructions;
a plurality of registers to store data coupled to the execution means; and
performance monitoring means to perform address conflict counting by at least determining address conflicts between an executing instruction and previously executed instructions and counting each instance of a conflict.
2. The apparatus of claim 1, wherein the performance monitoring means comprises:
an address conflict counter to store the count of each instance of a conflict;
potential conflicting address storage to store addresses of previously executed instructions; and
comparison means to make a comparison of an address of an executed instruction to addresses stored in the potential conflicting address storage.
3. The apparatus of claim 2, wherein the performance monitoring means further comprises:
a model specific register to configure the performance monitoring means for address conflict counting.
4. The apparatus of claim 2, wherein the performance monitoring means further comprises:
a finite state machine to track the grouping of instructions during address conflict counting.
5. The apparatus of any of claims 1-4, wherein the addresses are write addresses.
6. The apparatus of any of claims 1-5, wherein the execution means is scalar.
7. The apparatus of any of claims 1-5, wherein the execution means is single instruction, multiple data (SIMD).
8. The apparatus of any of claims 1-7, wherein the performance monitoring means is to perform address conflict counting over a single iteration of a loop.
9. The apparatus of any of claims 1-7, wherein the performance monitoring means is to perform address conflict counting over multiple iterations of a loop.
10. The apparatus of any of claims 1-7, wherein the performance monitoring means is to perform address conflict counting over a grouping of instructions delineated by a start and stop instruction.
11.
The apparatus of any of claims 1-7, wherein the performance monitoring means is to perform address conflict counting over a grouping of instructions delineated by a start instruction and a value indicating a number of instructions to evaluate after the start instruction.
12. A method comprising:
executing a first instruction;
storing an address of the first instruction into a potential address conflict storage which stores addresses of previously executed instructions;
executing a second instruction;
determining that an address of the second instruction matches an address in the potential address conflict storage; and
incrementing an address conflict counter.
13. The method of claim 12, wherein addresses stored in the potential address conflict storage are unique.
14. The method of any of claims 12-13, further comprising:
outputting a value of the address conflict counter.
15. The method of any of claims 12-14, wherein the potential address conflict storage is a list.
16. The method of any of claims 12-14, wherein the potential address conflict storage is a content addressable memory.
17. The method of any of claims 12-16, wherein the addresses are write addresses.
18. The method of any of claims 12-17, wherein the method is performed in performance monitoring circuitry of a processor.
19. The method of any of claims 12-18, wherein the determining is made by ANDing the address of the second instruction with each address of the potential address conflict storage and ORing the results of the ANDings.
COUNTER TO MONITOR ADDRESS CONFLICTS
FIELD OF INVENTION
[0001] The field of invention relates generally to computer processor architecture, and, more specifically, to conflict detection.
BACKGROUND
[0002] Conflict detection instructions enable vectorization for loops where addresses accessed in nearby iterations cannot be determined to be independent at compile time. However, conflict detection instructions and corresponding sequences are expensive, and whether their use results in a speedup or a slowdown depends on how many conflicts actually occur within one vector worth of iterations.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
[0004] Figure 1 illustrates an embodiment of a processor (core) that supports address conflict counting;
[0005] Figure 2 illustrates an embodiment of a method for address conflict counting using an address conflict counter;
[0006] Figure 3 illustrates an embodiment of execution of an instruction to configure an address conflict counter using a configuration instruction;
[0007] Figure 4 illustrates an embodiment of address comparison hardware;
[0008] Figure 5 illustrates an embodiment of comparison hardware;
[0009] Figure 6 illustrates an example of pseudo-code for tracking store address conflicts within one vector iteration;
[0010] Figure 7 is a block diagram of a register architecture according to one embodiment of the invention;
[0011] Figure 8A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;
[0012] Figure 8B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of
the invention;
[0013] Figures 9A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;
[0014] Figure 10 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention;
[0015] Figures 11-14 are block diagrams of exemplary computer architectures; and
[0016] Figure 15 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.
DETAILED DESCRIPTION
[0017] In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.
[0018] References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0019] To beneficially vectorize real dependences, or conflicts between vector elements, conflicts must be efficiently and dynamically detected and enforced.
The cost in instructions for each vector iteration (i.e., each set of VLEN scalar iterations) is conflict detection instructions + (original instructions / SIMD efficiency) + conflict handling instructions, where the denominator of the middle term is the SIMD efficiency of the computation absent the conflict detection and enforcement.
[0020] A straightforward way to detect duplicate indices is with a brute force scalar comparison loop. For each index, a check for equality with earlier indices in the vector is made. Another way to do this detection is to use a SIMD instruction to perform all of the needed comparisons (e.g., a vpconflict instruction). Unfortunately, such an instruction is very expensive.
[0021] To guarantee correctness in the presence of conflicts, one may choose to use scalar execution. For a vectorized loop where a conflict in a given vector is detected, falling back to scalar execution for just that vector, for that vector and all future iterations of the loop, or anywhere in between may be done.
[0022] Since a scalar fallback has such a dramatic effect on SIMD efficiency in the presence of a significant number of conflicts, one may choose to use scalar execution only when enough duplicates are detected. This could mean detecting either enough index elements that are not unique, or that the most common index in a vector has enough copies.
[0023] Detailed below are embodiments that use a performance counter to track a number of address conflicts. This information can be used to help software developers limit the performance penalty of using conflict detection instructions and maximize the performance speedup from using such instructions (including using scalar execution instead of vector execution, etc.). This counter may be implemented (or configured) in a number of ways depending on the microarchitecture as well as the type of profiling needed.
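The brute force scalar comparison loop of paragraph [0020] can be sketched as follows. This is an illustrative software analogue of the comparisons a vpconflict-style instruction performs, not the patented hardware itself.

```python
def count_index_conflicts(indices):
    """For each index, check equality with every earlier index in the vector;
    return the number of (earlier, later) pairs that match."""
    conflicts = 0
    for i in range(len(indices)):
        for j in range(i):          # only compare against earlier elements
            if indices[i] == indices[j]:
                conflicts += 1
    return conflicts
```

The quadratic number of comparisons in this loop is why a single SIMD conflict-detection instruction is attractive, even though, as noted above, such an instruction is itself expensive.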
For example, it can be configured to count all address conflicts anywhere within a loop. Alternatively, it can be used to count specific cases of address conflicts. For example, the counter can be used to count cases where the conflicts are between store addresses to different locations within the same array that occurred within n number of iterations. Typically, n would correspond to the size of a vector: such as 8 iterations for 64-bit data types or 16 for 32-bit data types when using a 512-bit vector.
[0024] Figure 1 illustrates an embodiment of a processor (core) that supports address conflict counting. In this embodiment, a core 101 includes both scalar and single-instruction, multiple data (SIMD) circuitry 113 and 115 to execute scalar and SIMD/vector instructions respectively.
[0025] The execution circuitry 113 and 115 is coupled to a memory unit 107 and registers 109. The memory unit 107 accesses memory locations such as random access memory (RAM) and non-volatile memory (such as disk). Registers 109 include general purpose registers and floating point registers used by the scalar execution circuitry 113 and packed data registers (such as 128-bit, 256-bit, or 512-bit packed data registers) used by the SIMD execution circuitry 115.
[0026] Performance monitoring circuitry 103 (sometimes called "perfmon") monitors functions of the core such as execution cycles, power state, etc. Embodiments of performance monitoring circuitry 103 include an address conflict counter 105 to count instances of address conflicts between instructions in a grouping of instructions. For example, the address conflict counter 105 is configurable to count instances of address conflicts within a loop (including limiting that count to a number of iterations of a loop), of a specific type, a number of instructions, between delineating instructions marking the group, a combination of any of these, etc.
Typically, this counter 105 is accessible to a programmer via an application program interface (API) call or execution of an instruction to retrieve the counter value. In some embodiments, the counter 105 is a register.

[0027] The performance monitoring circuitry 103 includes, or has access to, potential conflicting address storage 107 to store addresses of previously executed instructions. Typically, only unique addresses are stored. In some embodiments, this storage is a content addressable memory (CAM) that allows for searching all entries in parallel for a match. In other embodiments, this storage is an array of addresses. In other embodiments, this storage is one or more registers (such as a plurality of general purpose registers or packed data registers wherein data elements of the packed data registers are addresses).

[0028] In some embodiments, the performance monitoring circuitry 103 includes a model specific register (MSR) 111 to define the parameters of the address checking. Typically, this register is accessible via a high privilege or ring 0 application.

[0029] The performance monitoring circuitry includes comparison circuitry 117 to make a comparison of an address of an executed instruction to the potential conflicting address storage.

[0030] In some embodiments, the performance monitoring circuitry includes a finite state machine (FSM) 119 to track the grouping of instructions during address conflict counting. For example, the FSM compares the number of instructions processed to the number of instructions that are to be evaluated, or tracks a number of iterations of a loop for which conflict counting is desired, etc.

[0031] In some embodiments, the performance monitoring circuitry performs address conflict counting over a grouping of instructions delineated by a start and stop instruction.
In some embodiments, the performance monitoring circuitry performs address conflict counting over a grouping of instructions delineated by a start instruction and a value indicating a number of instructions to evaluate after the start instruction.

[0032] Figure 2 illustrates an embodiment of a method for address conflict counting using an address conflict counter. At 201, a first instruction is executed by execution circuitry. For example, any instruction that causes a write/store into an address or addresses is executed. This execution may be done by scalar or SIMD execution circuitry depending upon the instruction.

[0033] The address(es) from the first instruction are stored into potential conflicting address storage at 203. For example, if the first instruction is a store, the destination address is stored into potential conflicting address storage such as storage 107.

[0034] At 205, a subsequent instruction is executed by execution circuitry. For example, a second store is executed.

[0035] A determination of whether the address of the subsequent instruction is in the potential conflicting address storage is made at 207. For example, has the destination address been previously used, as determined by comparing the address to those addresses previously stored in the storage location? When the address used by the subsequent instruction was not previously used, that address is stored in the potential conflicting address storage at 209 and the next subsequent instruction is evaluated.

[0036] When the address used by the subsequent instruction was previously used, the address conflict counter is incremented at 211 and the next subsequent instruction is evaluated.

[0037] Not shown in this exemplary embodiment, but present in many embodiments, is a determination of when the counting should stop.
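The flow of Figure 2 can be summarized with a small software model. This is a sketch only: a Python set stands in for the potential conflicting address storage (CAM, array, or registers), and an integer stands in for the counter 105; it does not model when counting stops or how the counter is read out.

```python
def count_conflicts(store_addresses):
    """Software model of the Figure 2 flow: each executed store's address
    is checked against the potential conflicting address storage (207);
    a miss stores the address (209), a hit increments the address
    conflict counter (211)."""
    seen = set()   # stands in for potential conflicting address storage
    counter = 0    # stands in for address conflict counter 105
    for addr in store_addresses:
        if addr in seen:
            counter += 1       # address previously used: conflict
        else:
            seen.add(addr)     # first use: remember the address
    return counter

# Three of these six stores reuse an earlier destination address.
print(count_conflicts([0x100, 0x108, 0x100, 0x110, 0x108, 0x100]))  # → 3
```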
For example, counting may stop at the end of a loop or after a number of iterations of a loop.

[0038] Nor is an output of the counter shown, but in many usage patterns a programmer will call for the counter value to be read out into a file or onto a screen for review. A reading of the value of the counter may be used by a programmer or other entity to make a decision on vectorization such as detailed above. Different vectorization situations require different optimization strategies: 1) if it is known that there is no conflict within any vector of the loop (8 iterations for 64-bit data or 16 for 32-bit), then better performance is normally obtained by vectorizing without using conflict detection instructions; 2) if there is on average a high number of conflicts within one vector iteration (the actual threshold is microarchitecture dependent), then often the best approach is to not vectorize at all (not use conflict detection instructions to vectorize) and run a scalar sequence instead; and 3) if the number of conflicts within one vector iteration is small (smaller than a microarchitecture dependent threshold), then often vectorization using conflict detection instructions yields the best performance.

[0039] Figure 3 illustrates an embodiment of execution of an instruction to configure an address conflict counter using a configuration instruction. At 301, an instruction is fetched. Depending upon the embodiment, the instruction includes an opcode and one or more fields to indicate a loop begin, a loop end, a conflict type, a number of iterations, etc.

[0040] At 303, the instruction is decoded.

[0041] At 305, data associated with the fields is retrieved as needed. For example, data is retrieved from registers or memory.

[0042] At 307, the decoded instruction is executed to configure an address conflict counter.
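The three situations of paragraph [0038] amount to a simple decision rule over the profiled counter value. The sketch below is illustrative only: the function name and the threshold value are assumptions, since the real threshold is microarchitecture dependent.

```python
def choose_strategy(avg_conflicts_per_vector, high_threshold=4):
    """Maps a profiled average conflict count per vector iteration to the
    three strategies of paragraph [0038]. The threshold of 4 is a
    placeholder; the actual value is microarchitecture dependent."""
    if avg_conflicts_per_vector == 0:
        # Situation 1: no conflicts ever observed in any vector.
        return "vectorize without conflict detection"
    if avg_conflicts_per_vector >= high_threshold:
        # Situation 2: heavy conflicts; conflict handling dominates.
        return "scalar loop"
    # Situation 3: few conflicts; conflict detection instructions pay off.
    return "vectorize with conflict detection"

print(choose_strategy(0))
print(choose_strategy(6))
print(choose_strategy(2))
```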
In some embodiments, a model specific register is set to indicate the configuration within performance monitoring circuitry.

[0043] Figure 4 illustrates an embodiment of address comparison hardware. A group of previously used addresses 401 is compared to an address to check 407. For example, an address of an instruction is compared against previously used addresses. The addresses to test against are typically stored in a storage location of, or accessible to, performance monitoring circuitry as detailed above.

[0044] Comparison hardware (circuitry) 403 performs the comparison. In some embodiments, the comparisons are done one at a time. In other embodiments, the comparisons are done in parallel.

[0045] A result of the comparison 405 indicates when an address conflict counter should be updated. This result is fed to the address conflict register such as address conflict counter 105 as needed. In some embodiments, only increments to the counter are fed to the counter.

[0046] Figure 5 illustrates an embodiment of comparison hardware. The hardware 503 includes a plurality of AND gates 509. Each AND gate is fed a previously used address (501 and 505) and an address to test 507.

[0047] An OR gate 511 receives the results of the AND operations and outputs a result 513. Any "1" from the AND gates 509 indicates that the address was previously used and should therefore increment the counter.

[0048] Figure 6 illustrates an example of pseudo-code for tracking store address conflicts within one vector iteration.

[0049] The figures below detail exemplary architectures and systems to implement embodiments of the above. In some embodiments, one or more hardware components and/or instructions described above are emulated as detailed below, or implemented as software modules.

Exemplary Register Architecture

[0050] Figure 7 is a block diagram of a register architecture 700 according to one embodiment of the invention.
In the embodiment illustrated, there are 32 vector registers 710 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15.

[0051] Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

[0052] Write mask registers 715 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 715 are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

[0053] General-purpose registers 725 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands.
These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

[0054] Scalar floating point stack register file (x87 stack) 745, on which is aliased the MMX packed integer flat register file 750 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

[0055] Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.

Exemplary Core Architectures, Processors, and Computer Architectures

[0056] Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high-performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing.
Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-order and out-of-order core block diagram

[0057] Figure 8A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 8B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 8A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.
[0058] In Figure 8A, a processor pipeline 800 includes a fetch stage 802, a length decode stage 804, a decode stage 806, an allocation stage 808, a renaming stage 810, a scheduling (also known as a dispatch or issue) stage 812, a register read/memory read stage 814, an execute stage 816, a write back/memory write stage 818, an exception handling stage 822, and a commit stage 824.

[0059] Figure 8B shows processor core 890 including a front end unit 830 coupled to an execution engine unit 850, and both are coupled to a memory unit 870. The core 890 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 890 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

[0060] The front end unit 830 includes a branch prediction unit 832 coupled to an instruction cache unit 834, which is coupled to an instruction translation lookaside buffer (TLB) 836, which is coupled to an instruction fetch unit 838, which is coupled to a decode unit 840. The decode unit 840 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 840 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc.
In one embodiment, the core 890 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 840 or otherwise within the front end unit 830). The decode unit 840 is coupled to a rename/allocator unit 852 in the execution engine unit 850.

[0061] The execution engine unit 850 includes the rename/allocator unit 852 coupled to a retirement unit 854 and a set of one or more scheduler unit(s) 856. The scheduler unit(s) 856 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 856 is coupled to the physical register file(s) unit(s) 858. Each of the physical register file(s) units 858 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 858 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 858 is overlapped by the retirement unit 854 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 854 and the physical register file(s) unit(s) 858 are coupled to the execution cluster(s) 860. The execution cluster(s) 860 includes a set of one or more execution units 862 and a set of one or more memory access units 864.
The execution units 862 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 856, physical register file(s) unit(s) 858, and execution cluster(s) 860 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 864). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

[0062] The set of memory access units 864 is coupled to the memory unit 870, which includes a data TLB unit 872 coupled to a data cache unit 874 coupled to a level 2 (L2) cache unit 876. In one exemplary embodiment, the memory access units 864 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 872 in the memory unit 870. The instruction cache unit 834 is further coupled to a level 2 (L2) cache unit 876 in the memory unit 870.
The L2 cache unit 876 is coupled to one or more other levels of cache and eventually to a main memory.

[0063] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 800 as follows: 1) the instruction fetch 838 performs the fetch and length decoding stages 802 and 804; 2) the decode unit 840 performs the decode stage 806; 3) the rename/allocator unit 852 performs the allocation stage 808 and renaming stage 810; 4) the scheduler unit(s) 856 performs the schedule stage 812; 5) the physical register file(s) unit(s) 858 and the memory unit 870 perform the register read/memory read stage 814; 6) the execution cluster 860 performs the execute stage 816; 7) the memory unit 870 and the physical register file(s) unit(s) 858 perform the write back/memory write stage 818; 8) various units may be involved in the exception handling stage 822; and 9) the retirement unit 854 and the physical register file(s) unit(s) 858 perform the commit stage 824.

[0064] The core 890 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
In one embodiment, the core 890 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

[0065] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyper-Threading technology).

[0066] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 834/874 and a shared L2 cache unit 876, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

[0067] Figures 9A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

[0068] Figure 9A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 902 and with its local subset of the Level 2 (L2) cache 904, according to embodiments of the invention. In one embodiment, an instruction decoder 900 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 906 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 908 and a vector unit 910 use separate register sets (respectively, scalar registers 912 and vector registers 914) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 906, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

[0069] The local subset of the L2 cache 904 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 904. Data read by a processor core is stored in its L2 cache subset 904 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 904 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip.
Each ring data-path is 1012 bits wide per direction.

[0070] Figure 9B is an expanded view of part of the processor core in Figure 9A according to embodiments of the invention. Figure 9B includes an L1 data cache 906A, part of the L1 cache 904, as well as more detail regarding the vector unit 910 and the vector registers 914. Specifically, the vector unit 910 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 928), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 920, numeric conversion with numeric convert units 922A-B, and replication with replication unit 924 on the memory input. Write mask registers 926 allow predicating resulting vector writes.

[0071] Figure 10 is a block diagram of a processor 1000 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 10 illustrate a processor 1000 with a single core 1002A, a system agent 1010, and a set of one or more bus controller units 1016, while the optional addition of the dashed lined boxes illustrates an alternative processor 1000 with multiple cores 1002A-N, a set of one or more integrated memory controller unit(s) 1014 in the system agent unit 1010, and special purpose logic 1008.

[0072] Thus, different implementations of the processor 1000 may include: 1) a CPU with the special purpose logic 1008 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1002A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1002A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1002A-N being a large
number of general purpose in-order cores. Thus, the processor 1000 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1000 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

[0073] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1006, and external memory (not shown) coupled to the set of integrated memory controller units 1014. The set of shared cache units 1006 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1012 interconnects the integrated graphics logic 1008, the set of shared cache units 1006, and the system agent unit 1010/integrated memory controller unit(s) 1014, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1006 and cores 1002A-N.

[0074] In some embodiments, one or more of the cores 1002A-N are capable of multithreading. The system agent 1010 includes those components coordinating and operating cores 1002A-N. The system agent unit 1010 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1002A-N and the integrated graphics logic 1008.
The display unit is for driving one or more externally connected displays.

[0075] The cores 1002A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1002A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

[0076] Figures 11-14 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

[0077] Referring now to Figure 11, shown is a block diagram of a system 1100 in accordance with one embodiment of the present invention. The system 1100 may include one or more processors 1110, 1115, which are coupled to a controller hub 1120. In one embodiment, the controller hub 1120 includes a graphics memory controller hub (GMCH) 1190 and an Input/Output Hub (IOH) 1150 (which may be on separate chips); the GMCH 1190 includes memory and graphics controllers to which are coupled memory 1140 and a coprocessor 1145; the IOH 1150 couples input/output (I/O) devices 1160 to the GMCH 1190.
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1140 and the coprocessor 1145 are coupled directly to the processor 1110, and the controller hub 1120 is in a single chip with the IOH 1150.

[0078] The optional nature of additional processors 1115 is denoted in Figure 11 with broken lines. Each processor 1110, 1115 may include one or more of the processing cores described herein and may be some version of the processor 1000.

[0079] The memory 1140 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1120 communicates with the processor(s) 1110, 1115 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1195.

[0080] In one embodiment, the coprocessor 1145 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1120 may include an integrated graphics accelerator.

[0081] There can be a variety of differences between the physical resources 1110, 1115 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

[0082] In one embodiment, the processor 1110 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1110 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1145. Accordingly, the processor 1110 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1145.
Coprocessor(s) 1145 accept and execute the received coprocessor instructions.

[0083] Referring now to Figure 12, shown is a block diagram of a first more specific exemplary system 1200 in accordance with an embodiment of the present invention. As shown in Figure 12, multiprocessor system 1200 is a point-to-point interconnect system, and includes a first processor 1270 and a second processor 1280 coupled via a point-to-point interconnect 1250. Each of processors 1270 and 1280 may be some version of the processor 1000. In one embodiment of the invention, processors 1270 and 1280 are respectively processors 1110 and 1115, while coprocessor 1238 is coprocessor 1145. In another embodiment, processors 1270 and 1280 are respectively processor 1110 and coprocessor 1145.

[0084] Processors 1270 and 1280 are shown including integrated memory controller (IMC) units 1272 and 1282, respectively. Processor 1270 also includes as part of its bus controller units point-to-point (P-P) interfaces 1276 and 1278; similarly, second processor 1280 includes P-P interfaces 1286 and 1288. Processors 1270, 1280 may exchange information via a point-to-point (P-P) interface 1250 using P-P interface circuits 1278, 1288. As shown in Figure 12, IMCs 1272 and 1282 couple the processors to respective memories, namely a memory 1232 and a memory 1234, which may be portions of main memory locally attached to the respective processors.

[0085] Processors 1270, 1280 may each exchange information with a chipset 1290 via individual P-P interfaces 1252, 1254 using point to point interface circuits 1276, 1294, 1286, 1298. Chipset 1290 may optionally exchange information with the coprocessor 1238 via a high-performance interface 1239.
In one embodiment, the coprocessor 1238 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

[0086] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

[0087] Chipset 1290 may be coupled to a first bus 1216 via an interface 1296. In one embodiment, first bus 1216 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

[0088] As shown in Figure 12, various I/O devices 1214 may be coupled to first bus 1216, along with a bus bridge 1218 which couples first bus 1216 to a second bus 1220. In one embodiment, one or more additional processor(s) 1215, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1216. In one embodiment, second bus 1220 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 1220 including, for example, a keyboard and/or mouse 1222, communication devices 1227 and a storage unit 1228 such as a disk drive or other mass storage device which may include instructions/code and data 1230, in one embodiment. Further, an audio I/O 1224 may be coupled to the second bus 1220. Note that other architectures are possible.
For example, instead of the point-to-point architecture of Figure 12, a system may implement a multi-drop bus or other such architecture.

[0089] Referring now to Figure 13, shown is a block diagram of a second more specific exemplary system 1300 in accordance with an embodiment of the present invention. Like elements in Figures 12 and 13 bear like reference numerals, and certain aspects of Figure 12 have been omitted from Figure 13 in order to avoid obscuring other aspects of Figure 13.

[0090] Figure 13 illustrates that the processors 1270, 1280 may include integrated memory and I/O control logic ("CL") 1272 and 1282, respectively. Thus, the CL 1272, 1282 include integrated memory controller units and include I/O control logic. Figure 13 illustrates that not only are the memories 1232, 1234 coupled to the CL 1272, 1282, but also that I/O devices 1314 are also coupled to the control logic 1272, 1282. Legacy I/O devices 1315 are coupled to the chipset 1290.

[0091] Referring now to Figure 14, shown is a block diagram of a SoC 1400 in accordance with an embodiment of the present invention. Similar elements in Figure 10 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 14, an interconnect unit(s) 1402 is coupled to: an application processor 1410 which includes a set of one or more cores 202A-N and shared cache unit(s) 1006; a system agent unit 1010; a bus controller unit(s) 1016; an integrated memory controller unit(s) 1014; a set of one or more coprocessors 1420 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1430; a direct memory access (DMA) unit 1432; and a display unit 1440 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 1420 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

[0092] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

[0093] Program code, such as code 1230 illustrated in Figure 12, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

[0094] The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

[0095] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

[0096] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

[0097] Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

[0098] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof.
The instruction converter may be on processor, off processor, or part on and part off processor.

[0099] Figure 15 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 15 shows that a program in a high level language 1502 may be compiled using an x86 compiler 1504 to generate x86 binary code 1506 that may be natively executed by a processor with at least one x86 instruction set core 1516. The processor with at least one x86 instruction set core 1516 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1504 represents a compiler that is operable to generate x86 binary code 1506 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1516.
Similarly, Figure 15 shows that the program in the high level language 1502 may be compiled using an alternative instruction set compiler 1508 to generate alternative instruction set binary code 1510 that may be natively executed by a processor without at least one x86 instruction set core 1514 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1512 is used to convert the x86 binary code 1506 into code that may be natively executed by the processor without an x86 instruction set core 1514. This converted code is not likely to be the same as the alternative instruction set binary code 1510 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1512 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1506.
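The instruction-converter idea above can be sketched in a few lines: each source instruction maps to one or more target instructions, with a fallback emulation step for instructions that cannot be translated directly. This is a purely illustrative toy model; the mnemonics and the translation table are invented for exposition and are not real x86 or ARM encodings.

```python
# Toy sketch of a software instruction converter in the spirit of Figure 15.
# All mnemonics and mappings are hypothetical illustrations.

TRANSLATION_TABLE = {
    # source mnemonic -> equivalent target instruction sequence
    "ADD":  ["T_ADD"],
    "PUSH": ["T_SUBSP", "T_STORE"],   # one source op may expand to several
    "POP":  ["T_LOAD", "T_ADDSP"],
}

def convert(source_program):
    """Translate a list of source instructions into target instructions,
    falling back to an emulation call for unsupported opcodes."""
    target = []
    for insn in source_program:
        target.extend(TRANSLATION_TABLE.get(insn, [f"T_EMULATE({insn})"]))
    return target
```

As the passage notes, the converted code need not match what a native compiler for the target instruction set would emit; it only has to accomplish the same general operation.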
The present disclosure includes methods, devices, and systems for controlling a memory device. One embodiment of a method for controlling a memory device includes storing device class dependent information and a command in one or more of host system memory and host controller memory, setting a pointer to the command in a register in a host controller, directing access to the one or more of host system memory and host controller memory with the memory device via the host controller, and executing the command with the memory device.
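The control flow summarized above can be illustrated with a small software model: host memory holds the command, a host-controller register holds a pointer to it, and the memory device dereferences that pointer to fetch and execute the command. Plain Python objects stand in for the hardware here; all class names, register names, and the address layout are illustrative assumptions, not part of the disclosure.

```python
# Illustrative model of the claimed flow: store a command in host memory,
# set a pointer to it in a controller register, and let the device execute it.

class HostController:
    def __init__(self):
        self.registers = {}          # device class independent info + pointers

class MemoryDevice:
    def __init__(self, host_memory, controller):
        self.host_memory = host_memory
        self.controller = controller
        self.storage = {}

    def execute_pending_command(self):
        # The device follows the pointer set by the host.
        addr = self.controller.registers["cmd_ptr"]
        cmd = self.host_memory[addr]
        if cmd["op"] == "write":
            self.storage[cmd["lba"]] = self.host_memory[cmd["data_addr"]]
        elif cmd["op"] == "read":
            self.host_memory[cmd["data_addr"]] = self.storage[cmd["lba"]]
        cmd["status"] = "complete"   # completion status updated in place

host_memory = {}
controller = HostController()
device = MemoryDevice(host_memory, controller)

# Step 1: store the command (and its data) in host memory.
host_memory[0x10] = {"op": "write", "lba": 7, "data_addr": 0x20, "status": "pending"}
host_memory[0x20] = b"payload"
# Step 2: set a pointer to the command in a host controller register.
controller.registers["cmd_ptr"] = 0x10
# Steps 3-4: the device accesses host memory via the controller and executes.
device.execute_pending_command()
```

The point of the model is the division of labor: the host builds commands in ordinary memory, and the device pulls them through the controller, rather than the controller's hardware defining what commands can exist.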
What is Claimed is:

1. A method for controlling a memory device, comprising: storing device class dependent information and a command in one or more of host system memory and host controller memory; setting a pointer to the command in a register in a host controller; directing access to the one or more of host system memory and host controller memory with the memory device via the host controller; and executing the command with the memory device.

2. The method of claim 1, wherein directing access to the host system memory includes managing direct memory access (DMA) of data and providing flow control of data between a host and the memory device with a host controller.

3. The method of claim 2, wherein directing access to the host system memory includes managing DMA of data and providing flow control without host processor intervention.

4. The method of claim 1, wherein the method includes storing device class independent information in one or more registers in a host controller.

5. The method of claim 4, wherein storing device class independent information includes storing one or more types of information selected from the group including: device control and interrupt information; device enable/disable information; device power state control/status information; one or more pointers to device-class dependent information in the host system memory; one or more pointers to commands in the host system memory; a particular device class associated with a particular transaction layer register entry; and a reset command.

6. The method of claim 4, wherein the command is written with small computer system interface (SCSI) protocol.

7. The method of claim 6, wherein the command is modified with the device class dependent information and the device class independent information.

8.
The method of claim 1, wherein directing access to the one or more of host system memory and host controller memory with the memory device via the host controller includes providing access to a read command and a write command and executing the read command and write command with the memory device.

9. The method of claim 1, wherein the method includes associating the command with a segment identifier (SID) and wherein the SID identifies a base address in a SID map table for the memory device to write data to the one or more of host system memory and host controller memory based on the command.

10. The method of claim 9, wherein the SID identifies a base address in a SID map table for the memory device to read data from the one or more of host system memory and host controller memory based on the command.

11. A method for controlling a memory device, comprising: storing device class dependent information and a command in one or more of host system memory and host controller memory; setting a pointer to the command in a register in a host controller; directing access to the one or more of host system memory and host controller memory with the memory device via the host controller; executing the command with the memory device; and transferring data between the memory device and a hardware port.

12. The method of claim 11, wherein a peripheral device is coupled to the host controller via the hardware port.

13. The method of claim 12, wherein the peripheral device is selected from the group including: a digital camera; a digital music device; a network device; and a USB device.

14. The method of claim 11, wherein the command causes data to be read from the memory device and written to the peripheral device.

15. The method of claim 11, wherein the command includes a segment identifier (SID) to indicate where data is written on the peripheral device.

16.
The method of claim 11, wherein the command causes data to be read from the peripheral device and written to the memory device.

17. The method of claim 11, wherein the command includes a SID to indicate where data is read from on the peripheral device.

18. The method of claim 11, wherein a media codec is coupled to the host controller via the hardware port for audio and/or video play-back.

19. A method for operating a memory device, comprising: storing device class independent information in one or more registers in a host controller; storing device class dependent information in a memory array; building a read command in the memory array with a host system processor; sending the read command from the memory array to the memory device via a host system controller; and initiating a direct memory access (DMA) write from the memory device to the memory array with the memory device.

20. The method of claim 19, wherein the method includes allocating data space in the memory array for the read command data with the host system processor; setting an identifier associated with the read command with the host system processor, wherein the identifier indicates a location of the data space in the memory array for the read command data; and setting a memory pointer in a register with the host system processor, wherein the memory pointer indicates the location of the read command in the memory array.

21. The method of claim 20, wherein the identifier is a segment identifier (SID) and wherein the SID locates a base address in a SID table in the one or more of host system memory and host controller memory for the location of read command data.

22. The method of claim 21, wherein the SID identifies a range of addresses in the one or more of host system memory and host controller memory indicating valid locations for the read command data.

23.
The method of claim 20, wherein the method includes initiating a memory write to the data space allocated in the memory array at the location indicated by the identifier associated with the read command by the memory device.

24. The method of claim 19, wherein the memory array is a memory array type selected from the group including: host system memory and host controller memory.

25. The method of claim 19, wherein the method includes updating the read command with a completion status in the memory array with the memory device.

26. The method of claim 19, wherein the method includes: writing an interrupt to the register with the memory device; interrupting the host system processor with the host system controller; and reading the register with the host system processor to determine a reason for the interrupt.

27. A method for operating a memory device, comprising: storing device class independent information in registers in a host controller; storing device class dependent information in one or more of host system memory and host controller memory; building a write command with a host system processor in one or more of host system memory and host controller memory; notifying the memory device of the write command in the one or more of host system memory and host controller memory with the host system controller; acting upon the notification of the write command by executing the write command with the memory device; and initiating a direct memory access (DMA) read of write command data in the one or more of host system memory and host controller memory and returning the write command data to the memory device with the memory device.

28.
The method of claim 27, wherein the method includes allocating space in one or more of host system memory and host controller memory for the write command data with the host system processor; setting an identifier associated with the write command with the host system processor, wherein the identifier indicates a location of the data space for the write command data; and setting a memory pointer in a register with the host system processor, wherein the memory pointer indicates the location of the write command in one or more of host system memory and host controller memory.

29. The method of claim 28, wherein the identifier is a segment identifier (SID) and wherein the SID locates a base address in a SID table in the one or more of host system memory and host controller memory indicating the location of write command data.

30. The method of claim 29, wherein the SID identifies a range of addresses in the one or more of host system memory and host controller memory indicating valid locations for the write command data.

31. The method of claim 27, wherein the method includes updating the write command with a completion status to the one or more of host system memory and host controller memory with the memory device.

32. A memory system, comprising: one or more memory devices each coupled to at least one other of the one or more memory devices via a bus; a host controller coupled to one or more of the memory devices; a host processor, wherein the host processor is coupled to the host controller; and system memory, wherein device class dependent information is stored in the system memory and the host controller transmits commands built by the host processor in the system memory to the one or more memory devices to read data from and/or write data to the one or more memory devices.

33.
The system of claim 32, wherein the host controller transmits commands built by the host processor in the system memory to the one or more memory devices to read data from and/or write data to the system memory at a base address in the system memory specified in the command.

34. The system of claim 32, wherein the host controller transmits commands built by the host processor in the system memory to the one or more memory devices to read data from and/or write data to the one or more memory devices at a base address in the system memory indicated by a segment identifier (SID).

35. The system of claim 32, wherein a SID table located in host controller memory indicates the base address in the system memory where to read data from and/or write data to in the system memory.

36. The system of claim 32, wherein a SID table located in host controller memory indicates a range of acceptable base addresses in the system memory where to read data from and/or write data to in the system memory.

37. The system of claim 36, wherein the SID is compared with the range of acceptable base addresses in the SID table to determine if a base address indicated by the SID in the command is valid.

38. The system of claim 32, wherein device class dependent information stored in the system memory is written to the system memory on power-up of the one or more memory devices.

39. The system of claim 32, wherein the one or more memory devices have device class independent information stored in one or more registers in a host controller and wherein each of the one or more registers has device class independent information that is associated with one of the one or more memory devices.

40. The system of claim 39, wherein device class independent information stored in the one or more registers is written to the registers on power-up of the one or more memory devices.

41.
A memory system, comprising: one or more memory devices each coupled to at least one other of the one or more memory devices via a bus; a host controller, wherein the host controller includes host controller memory and is coupled to one or more of the memory devices; a host processor, wherein the host processor is coupled to the host controller; and system memory, wherein device class dependent information is stored in the system memory and/or the host controller memory and the host controller transmits commands built by the host processor in the system memory and/or the host controller memory to the one or more memory devices to read data from and/or write data to the one or more memory devices.

42. The system of claim 41, wherein the host controller has a hardware port and a peripheral device is coupled to the hardware port.

43. The system of claim 42, wherein the host controller transmits commands built by the host processor in the system memory and/or the host controller memory to the one or more memory devices to read data from and/or write data to the peripheral device.

44. The system of claim 41, wherein the host controller transmits commands built by the host processor in the system memory and/or the host controller memory to the one or more memory devices to read data from and/or write data to system memory.

45. The system of claim 41, wherein the host controller transmits commands built by the host processor in the system memory and/or the host controller memory to the one or more memory devices to read data from and/or write data to host controller memory.

46. The system of claim 41, wherein device class independent information is stored in one or more registers in system memory and/or host controller memory.

47.
A host controller device, comprising: a host controller coupled to one or more memory devices, a host processor, and system memory, wherein device class dependent information is stored in the system memory and/or host controller memory and the host controller transmits commands built by the host processor in the system memory to the one or more memory devices to read data from and/or write data to the one or more memory devices.

48. The device of claim 47, wherein device class dependent information is information that indicates operating parameters that are specific to the memory device.

49. The device of claim 47, wherein the host processor runs a device driver to store device class independent information in the system memory and/or host controller memory.

50. The device of claim 49, wherein device class independent information is information that indicates operating parameters for memory devices independent of the device type.

51. The device of claim 47, wherein device class independent information is stored in one or more registers in system memory and/or host controller memory.
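The segment identifier (SID) mechanism that runs through the claims above — a SID in a command indexes a SID map table that yields a base address, which can then be checked against a range of acceptable addresses before the transfer proceeds — can be sketched as a table lookup with validation. The table contents, field names, and address values below are illustrative assumptions only.

```python
# Illustrative SID map table: each SID yields a base address and, per the
# range-checking claims, a set of acceptable addresses for that segment.

SID_TABLE = {
    # SID -> base address and allowed range for that segment
    1: {"base": 0x1000, "valid_range": range(0x1000, 0x2000)},
    2: {"base": 0x8000, "valid_range": range(0x8000, 0x9000)},
}

def resolve_sid(sid, offset=0):
    """Return the target address for a command's SID, raising if the
    resulting address falls outside the segment's valid range."""
    entry = SID_TABLE[sid]
    addr = entry["base"] + offset
    if addr not in entry["valid_range"]:
        raise ValueError(f"address {addr:#x} outside segment for SID {sid}")
    return addr
```

The range check is what lets the host controller reject a command whose SID-derived address would land outside the memory the host has actually allocated for that transfer.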
HOST CONTROLLER

Technical Field

[0001] The present disclosure relates generally to semiconductor memory devices, methods, and systems, and more particularly, to host controllers.

Background

[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits and/or external removable devices in computers, personal digital assistants (PDAs), digital cameras, and cellular telephones, among various other electronic devices. There are many different types of memory including random-access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change random access memory (PCRAM), and flash memory, among others.

[0003] Flash memory devices are utilized as non-volatile memory for a wide range of electronic applications. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption.

[0004] Various types of memory can be used in memory systems. The various types of memory can be used in any combination to provide memory for a host device. For example, Flash memory can be included in a memory system. Flash memory can be part of a memory system as internal memory or as removable memory that can be coupled to the memory system through an interface, such as a USB connection.

[0005] A memory system can include a host device, host system memory, and a number of external memory devices. The host device can have a number of processors, a host controller, host controller memory that is located on the host controller, and a number of internal memory devices. The host device can use the internal and/or the external memory devices by interacting with the memory devices via a host controller.
The host controller can communicate with the memory devices to perform operations on the memory devices, such as reading data from the memory devices to the host system or writing data from the host system to the memory devices. The commands that control the reading and writing of data can be built by the host system. The host controller can have hardware that controls the memory device capabilities in the commands. In such cases when a host controller has hardware that defines the memory device capabilities, the host controller is limited to building commands that have the capabilities associated with the hardware that is on the host controller.

Brief Description of the Drawings

[0006] Figure 1 illustrates a block diagram of a memory system in accordance with one or more embodiments of the present disclosure.

[0007] Figure 2 illustrates a block diagram of a host controller in accordance with one or more embodiments of the present disclosure.

[0008] Figure 3 illustrates a block diagram of a transaction layer register space and host system memory in accordance with one or more embodiments of the present disclosure.

[0009] Figure 4 illustrates a block diagram of a transaction layer and host system memory in accordance with one or more embodiments of the present disclosure.

[0010] Figure 5 illustrates a block diagram of a host system, host system memory, and a memory device in accordance with one or more embodiments of the present disclosure.

[0011] Figure 6 is a block diagram illustrating the operation of a memory system in accordance with one or more embodiments of the present disclosure.

Detailed Description

[0012] The present disclosure includes methods, devices, and systems for controlling a memory device.
One embodiment of a method for controlling a memory device includes storing device class dependent information and a command in one or more of host system memory and host controller memory, setting a pointer to the command in a register in a host controller, directing access to the one or more of host system memory and host controller memory with the memory device via the host controller, and executing the command with the memory device.

[0013] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure. As used herein, the designator "N," particularly with respect to reference numerals in the drawings, indicates that a number of the particular feature so designated can be included with one or more embodiments of the present disclosure. The designators can represent the same or different numbers of the particular features.

[0014] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 112 may reference element "12" in Figure 1, and a similar element may be referenced as 212 in Figure 2. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure.
In addition, as will be appreciated, the proportion and the relative scale of the elements provided in the figures are intended to illustrate the embodiments of the present disclosure, and should not be taken in a limiting sense.

[0015] Figure 1 illustrates a block diagram of a memory system 100 in accordance with one or more embodiments of the present disclosure. In Figure 1, a host system 110 is shown. In one or more embodiments, the host system can be a computing device, such as a personal computer, among other computing device types. Examples of a host system 110 include laptop computers, personal computers, mobile phones, digital cameras, digital recording and playback devices, PDAs, memory card readers, and interface hubs, among other examples. The host system 110 of Figure 1 includes a host controller 112, a host system processor 114, a port 102, and a direct memory access (DMA) engine 122, among other computing device elements not shown. As illustrated in Figure 1, the host controller 112 can include a transaction layer, link layer, and/or physical layer and can be coupled to host system memory 116 via the DMA engine 122 and the host system memory controller 118. Also, in Figure 1, host controller 112 is coupled to memory devices 120-1, 120-2,...,120-N.

[0016] In one or more embodiments, the host controller 112 can be used to communicate information between the number of memory devices 120-1, 120-2,...,120-N and another device, such as the host system 110. One of ordinary skill in the art will appreciate that "a processor" can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc. In some embodiments, the host controller 112 can manage transport, link, and physical layer activity without processor intervention and manage command retries without processor intervention.

[0017] In one or more embodiments, the host controller 112 can be coupled to a standardized interface.
For example, when the memory devices 120-1, 120-2, ..., 120-N are used for data storage for a memory system, the host controller can implement a serial advanced technology attachment (SATA), a peripheral component interconnect express (PCIe), a universal serial bus (USB), and/or a small computer system interface (SCSI), among other connectors and interfaces. In general, however, host controller 112 can be coupled to an interface for passing control, address, data, and other signals between the memory devices 120-1, 120-2, ..., 120-N, the host system 110, and attached devices, such as host system memory 116.

[0018] In one or more embodiments, the memory devices 120-1, 120-2, ..., 120-N can include one or more memory device controllers that can be used to facilitate operations, such as read, write, and/or erase commands, among other operations, that are communicated to the memory devices 120-1, 120-2, ..., 120-N from the host system 110. The memory devices 120-1, 120-2, ..., 120-N can be chained together and coupled to a bus, and in some embodiments the last memory device, e.g., 120-N, can be removed from the chain. In one or more embodiments, the circuitry in one or more memory device controllers can include control circuitry for providing a translation layer between the host system 110 and the memory devices 120-1, 120-2, ..., 120-N. Thus, a memory device controller could selectively couple an I/O connector (not shown in Figure 1) of memory devices 120-1, 120-2, ..., 120-N to receive the appropriate signal at the appropriate I/O connection at the appropriate time. Similarly, the communication protocol between a host system 110 and the memory devices 120-1, 120-2, ..., 120-N may be different than what is required for access to the memory devices 120-1, 120-2, ..., 120-N. The memory device controllers could then translate the command sequence received from a host system 110 into appropriate command sequences to achieve the desired access to memory devices 120-1, 120-2, ..., 120-N.
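The translation layer described above can be sketched as a simple mapping from a host-protocol command to a device-native command sequence. This is an illustrative sketch only; the opcode names and the mapping table are assumptions, since the disclosure does not define a concrete command set.

```python
# Hypothetical sketch of a memory device controller's translation layer:
# a host-side command is rewritten into the device-native command sequence
# needed to achieve the desired access. All opcode names are illustrative.

def translate_command(host_command):
    """Map a host protocol command to a device-native command sequence."""
    # Assumed mapping table: host opcode -> device opcode sequence
    opcode_map = {
        "HOST_READ":  ["DEV_ROW_ACTIVATE", "DEV_COL_READ"],
        "HOST_WRITE": ["DEV_ROW_ACTIVATE", "DEV_COL_WRITE"],
        "HOST_ERASE": ["DEV_BLOCK_ERASE"],
    }
    device_ops = opcode_map[host_command["opcode"]]
    # Carry the address through so each device op targets the right location.
    return [{"op": op, "address": host_command["address"]} for op in device_ops]

seq = translate_command({"opcode": "HOST_READ", "address": 0x1000})
```

A real controller would also adjust signal voltage levels and timing, as the paragraph above notes; this sketch covers only the command-sequence rewriting.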
Each translation may further include changes in signal voltage levels in addition to command sequences.

[0019] In one or more embodiments, the port 102 can be a hardware port. A hardware port can be used to couple a peripheral device, such as a digital camera, an MP3 player, a network device, and/or a USB device, among other devices. A hardware port can also be used to couple a media codec for playback of audio and/or video. The coupling of a hardware device to the host system 110 via port 102 can allow the hardware device to communicate with the memory devices 120-1, 120-2, ..., 120-N, host system memory 116, and/or other memory in the host system 110. Communication can include reading, writing, and/or erasing data to and/or from the hardware devices, the memory devices, and/or the memory on or coupled to the host system 110.

[0020] The embodiments of Figure 1 can include additional circuitry that is not illustrated so as not to obscure embodiments of the present disclosure. For example, the memory devices 120-1, 120-2, ..., 120-N can include address circuitry to latch address signals provided over I/O connectors through I/O circuitry. Address signals can be received and decoded by a row decoder and a column decoder to access the memory devices 120-1, 120-2, ..., 120-N. It will be appreciated by those skilled in the art that the number of address input connectors depends on the density and architecture of the memory devices 120-1, 120-2, ..., 120-N.

[0021] Figure 2 illustrates a block diagram of a host controller 212 in accordance with one or more embodiments of the present disclosure. In Figure 2, the host controller 212 includes a transaction layer 230, a link layer 232, and a physical (PHY) layer 234. In one or more embodiments, the host controller 212 can use the transaction layer 230, link layer 232, and physical layer 234 to help ensure that error-free packets of data are reliably transported.
In Figure 2, the host controller communicates packets of data between memory, such as system and/or host controller memory, memory devices, and the host processor via the direct memory access (DMA) bus 238, the register bus 236, and/or the execute in place (XIP) bus 240.

[0022] In one or more embodiments, the host controller can notify the memory devices that data is ready to be transferred between system and/or host controller memory and the memory devices. The memory devices can request the transfer of data from the system and/or host memory controller. The memory devices send the appropriate commands, status information, and data based on the state of the memory devices. The host controller can manage the DMA transfer of data and can provide flow control to and from the memory devices without processor intervention. As described herein, the memory device capabilities are mapped to memory, and the host controller 212 is used to transfer and/or control the flow of commands, data, and/or status, among other signals, between memory devices and system and/or host controller memory.

[0023] In the embodiment of Figure 2, the DMA bus 238 can be used to communicate signals between the transaction layer 230 of host controller 212 and system and/or host controller memory. The DMA bus 238 can include address and byte count information when communicating signals. The DMA bus 238 can be coupled to system and/or host controller memory via a DMA engine, e.g., DMA engine 122 in Figure 1. The transaction layer 230 can provide the DMA interface for the host system. The DMA bus 238 can transfer read and/or write commands that are built in system and/or host controller memory. The DMA bus 238 can also transfer device commands and device dependent information that is stored in system and/or host controller memory, as described herein.

[0024] In one or more embodiments, a host system can include memory on a host controller and/or system memory coupled to the host system.
A host system with system and/or host controller memory can include a DMA bus 238 to both the host controller memory and the system memory to transfer signals from the host controller and/or system memory to the transaction layer 230 on the host controller 212.

[0025] In Figure 2, the register bus 236 can be used to transfer signals between a number of registers on the host controller 212 and the host processor. The registers can include DID information and can be used when building commands on the system and/or host controller memory to provide device class independent information, such as device enable/disable and/or power state control/status, among other device class independent information, as described herein.

[0026] In Figure 2, the transaction layer 230 can receive information from the system and/or host controller memory via the DMA bus 238. The transaction layer 230 is in communication with the link layer 232. The link layer is in communication with the XIP bus 240. The XIP bus 240 can transfer signals between the link layer 232 in the host controller 212 and the host processor. The commands built in the system and/or host controller memory can be transferred to the memory devices through the XIP bus 240. The host controller provides flow control of signals between memory devices and system and/or host controller memory via the XIP bus 240 and the DMA bus 238. The XIP bus can transfer signals and/or data from memory devices to the host controller and on to the system and/or host controller memory via the transaction layer 230, link layer 232, and the physical layer 234 without processor intervention. The physical layer 234 can be in communication with the link layer. The PHY can provide a cyclic redundancy check (CRC), an acknowledged/not acknowledged indication, and/or can handle arbitration and scheduling for the signals and/or data transferred via the host controller 212 between memory devices and system and/or host controller memory.
The link layer 232 can provide encoding and/or decoding of the signals and/or data from memory devices coupled to the host controller 212. Also, the link layer 232 can facilitate and indicate the reception and transmission of signals and/or data from memory devices coupled to the host controller 212.

[0027] Figure 3 illustrates a block diagram of a transaction layer register space 331 and memory 316 in accordance with one or more embodiments of the present disclosure. The memory 316 in Figure 3 can be system memory and/or host controller memory. In one or more embodiments, one or more registers can be included in the host controller. In Figure 3, transaction layer register space 331 is included in the transaction layer of a host controller. The register space can be used to store data relating to memory devices that the host controller can use to perform functions on memory devices, system and/or host controller memory, and/or peripheral devices coupled to the host controller.

[0028] In the embodiment illustrated in Figure 3, the register space 331 can include a number of DID registers 350, 352, and 354. Each of the devices can have a DID register associated with the device. For example, DID register 352, labeled DID 0 registers, is associated with device 0 that is coupled to the host controller. In one or more embodiments, a number of devices can be coupled to the host controller. DID register 350, labeled DID N registers, is associated with the Nth device, where N is an integer.

[0029] In one or more embodiments, DID registers 350, 352, and 354 can store device class independent information. Device class independent information can include information regarding parameters that many device types use in their operation.
For example, the device class independent information can include memory device control and interrupt information, device enable/disable, power state control/status, pointers to device class dependent information in system and/or host controller memory, pointers to commands in system and/or host controller memory, an indication of the device class associated with the DID register, non-masked interrupt status, and/or immediate commands/operations, such as reset and other link layer commands, among other types of information.

[0030] In one or more embodiments, the device class independent information is written to the transaction layer register space 331 on power-up of the memory device associated with the DID register, e.g., on system power-up, on device insertion when the device is coupled to the host system, etc. A driver, such as a software driver, for the device can cause the device class independent data to be written to the DID register. A driver can be used to write the device class independent data at power-up of a device; therefore, the memory array in the register space 331 can be volatile memory, such as DRAM, and/or non-volatile memory, such as Flash. Also, in one or more embodiments, the DID register can have device class independent information written to it at initial power-up of a device when the device is first coupled to the host system. The register space memory can be non-volatile memory, such as Flash, and store the device class independent data in the register permanently or until a new driver adds or replaces the data in the DID register.

[0031] In Figure 3, memory 316 can include data spaces 351, 353, and 355, which can include commands and device dependent information. Device class dependent information can include parameters and/or configuration information that is associated with a device that is coupled to the host system.
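The device class independent fields listed in paragraph [0029] can be sketched as a small record. This is a minimal illustration, assuming a particular field layout; the disclosure names the categories of information but does not fix a register format.

```python
# A minimal sketch of the device class independent information a DID
# register might hold, per paragraph [0029]. Field names and widths are
# assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class DIDRegister:
    device_class: int        # indication of the device class for this register
    enabled: bool            # device enable/disable
    power_state: int         # power state control/status
    dependent_info_ptr: int  # pointer to device class dependent info in memory
    command_ptr: int         # pointer to a command in system/controller memory
    irq_status: int = 0      # non-masked interrupt status

# Example: a register populated by a driver at device power-up.
did0 = DIDRegister(device_class=1, enabled=True, power_state=0,
                   dependent_info_ptr=0x8000, command_ptr=0x8100)
```

Because a driver rewrites these fields at power-up, the backing array can be volatile, as the paragraph above notes.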
The parameters and/or configuration information included in the device class dependent information can be used when building commands to operate devices associated with the device class dependent information. Data space N 351 can be associated with device N, data space 1 353 can be associated with device 1, and data space 2 355 can be associated with device 2. Also, data spaces 351, 353, and 355 can be associated with DID registers 350, 352, and 354, respectively.

[0032] In one or more embodiments, the device class dependent information stored in the data spaces in memory 316 can include information for controlling the device capabilities when the device is executing a command. By putting the device class dependent information in the memory 316, the processor is relieved of operational burdens. The device capabilities can be encoded in hardware on the host controller according to some previous approaches; however, according to one or more embodiments of the present disclosure, the device capabilities can be removed from hardware on the host controller and stored in the memory 316, where they can be written to and/or read from a number of locations in memory 316.

[0033] In one or more embodiments, the device class dependent information is written to memory 316 on power-up of the memory device associated with the DID register, e.g., on system power-up, on device insertion when the device is coupled to the host system, etc. A driver, such as a software driver, for the device can cause the device class dependent data to be written to the memory 316. A driver can be used to write the device class dependent data at power-up of a device; therefore, the memory array in the memory 316 can be volatile memory, such as DRAM, and/or non-volatile memory, such as Flash. Also, in one or more embodiments, the memory 316 can have device class dependent information written to it at initial power-up of a device when the device is first coupled to the host system.
The memory 316 can be non-volatile memory, such as Flash, and store the device class dependent data in the memory 316 permanently or until a new driver adds or replaces the data in the memory 316.

[0034] In one or more embodiments, a pointer can be included in the DID registers. The pointer can be the address in memory 316 where the data space is located. The address included in the pointer can point the DID registers containing device class independent information to the device class dependent information in memory 316. The pointer in the DID registers on the transaction layer register space 331 can be used to memory map the device dependent and independent information. The pointer can identify the location of the device class dependent information in memory 316.

[0035] In one or more embodiments, the data spaces containing the device class dependent information can be written to the host system memory and/or host controller memory on the host controller. In such embodiments, the data spaces on the system and/or host controller memory can be memory mapped together with pointers stored in the DID register in the transaction layer register space 331.

[0036] Figure 4 illustrates a block diagram of a transaction layer 431 and memory 416 in accordance with one or more embodiments of the present disclosure. The memory 416 in Figure 4 can be system memory and/or host controller memory. In Figure 4, memory 416 can include device commands and device dependent information in device spaces, as discussed above in association with Figure 3. Device spaces can include device class dependent information, such as parameters and/or configuration information. A host system processor can build a command in device space N 451, such as a read command and/or a write command, for the memory device associated with the device space, in this case device N.
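The pointer mechanism of paragraph [0034] can be sketched as a two-step lookup: a DID register holds the address of a data space, and following that address yields the device class dependent information. The memory model and field names below are illustrative assumptions.

```python
# Sketch: resolving the pointer stored in a DID register to the device
# class dependent data space in memory 316 (paragraph [0034]). Memory is
# modeled as a dict keyed by address; contents are hypothetical.

memory_316 = {
    0x8000: {"device": "N", "params": {"page_size": 4096, "timing": "mode2"}},
}

did_registers = {"DID_N": {"dependent_info_ptr": 0x8000}}

def lookup_dependent_info(did_name):
    """Follow the DID register pointer to the data space it memory-maps."""
    ptr = did_registers[did_name]["dependent_info_ptr"]
    return memory_316[ptr]

info = lookup_dependent_info("DID_N")
```

This is how the device class independent information (in the register space) and the device class dependent information (in memory) are memory-mapped together without encoding device capabilities in controller hardware.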
The commands written to the device space N 451 can be written using a communication protocol, such as SCSI, among other protocols. The command is built by the host system processor and stored in device space N 451 in memory 416 using a communication protocol, and the device class dependent information in device space N 451 and the device class independent information in DID N registers 450 are used to modify the command for use by the memory device.

[0037] In one or more embodiments, the command can include a corresponding data buffer. The data buffer can be used to store data that is written to system and/or host controller memory during a device read command. Also, the data buffer can be used to store data that is written to the device during a device write command. The data buffer can be in system and/or host controller memory and can be allocated by the command built and stored in device space N 451.

[0038] The device commands in memory 416 can include a base address that indicates the location of the data buffer. The command can alternatively use a segment identifier (SID) in the command to indicate the location of the data buffer. The SID can reference a SID map table 460 on the transaction layer 431. The SID map table 460 can be located in a memory array on the transaction layer 431, or in other embodiments the SID map table can be located in other memory locations, such as system memory. The SID map table can include a number of SIDs that are associated with a number of base addresses, where each SID is associated with a base address. The base address is an address in system and/or host controller memory that can indicate a data buffer location. The SID map table can be updated by the processor to assign base addresses to SIDs based on the availability of memory locations in system and/or host controller memory.
[0039] In one or more embodiments, a number of SIDs can be used with a command based on the availability of system and/or host controller memory to accommodate a data buffer. The SID(s) associated with the command can be used when the command is executed. The SID will reference the SID map table 460. The SID can be located in the SID map table 460. Once the SID is located in the SID map table, a base address associated with the SID is identified. The base address is used during execution of the command to write data to and/or send data from a data buffer, e.g., data for command with SID 0 462, indicated by the base address location in system and/or host controller memory.

[0040] In one or more embodiments, the SID can indicate a range of addresses that can be used as the data buffer for the command. The range of addresses can include a base address indicated as the start of the data buffer. The range of addresses associated with a SID can be used to limit the amount of memory that can be used to execute a command and can be used to identify and disable invalid commands, e.g., commands that request memory outside of the base address range.

[0041] In one or more embodiments, a command in DID information location 451 can include an explicit, e.g., actual, base address, and not use a SID. The base address would be used by the command as the data buffer location. In some embodiments, the device, when executing the command, can add an offset to the base address to indicate a full memory address where the data buffer will be located. The base address in the command can indicate a location in the system and/or host controller memory.

[0042] Figure 5 illustrates a block diagram of a host system 510, host system memory 516, and a memory device 520-N in accordance with one or more embodiments of the present disclosure.
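The SID lookup and range check described in paragraphs [0039] and [0040] can be sketched as follows. The table contents and the exact validity rule are assumptions; the disclosure says only that a SID maps to a base address and an address range, and that requests outside the range are invalid.

```python
# Sketch of a SID map table lookup with the address-range check of
# paragraph [0040]: a SID resolves to a base address and a range, and a
# command requesting memory outside that range is rejected as invalid.
# Table contents are hypothetical.

sid_map_table = {
    0: {"base_address": 0x2000, "range": 0x1000},  # SID 0 -> buffer at 0x2000
    1: {"base_address": 0x4000, "range": 0x0800},  # SID 1 -> buffer at 0x4000
}

def resolve_buffer(sid, offset, length):
    """Return the buffer address for a command, or raise on an invalid request."""
    entry = sid_map_table[sid]
    if offset + length > entry["range"]:
        raise ValueError("invalid command: request outside SID address range")
    return entry["base_address"] + offset

addr = resolve_buffer(0, 0x100, 0x200)
```

The range check is what lets the controller disable invalid commands without processor intervention: the bound travels with the SID entry, not with the command.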
In Figure 5, host system memory 516 and/or host controller memory 522 can be used to store device class dependent information and/or commands to operate a memory device 520-N coupled to the host system 510.

[0043] In Figure 5, device spaces can include device class dependent information for memory device 520-N and/or a command built by the processor to operate device N. The command can use a SID, e.g., SID i in Figure 5, to reference the SID map table 560 on the host controller 512. The SID map table indicates the base address, e.g., base address i in Figure 5, where the data buffer, e.g., data for command with SID i 562, can be located.

[0044] In one or more embodiments, once a command is written to the device space N 551 in the host system memory 516, a pointer is written to the DID registers 550, 552, and 554, e.g., DID N registers 550, associated with the device space N 551. The pointer 557 can be detected by the host controller 512, and the pointer 557 can be used to locate the command in the host system memory 516. Once the command is located in the host system memory 516, the command is transferred to the device 520-N via the host controller 512 when the device 520-N initiates a DMA transfer of the command. The command is executed by the memory device 520-N, and data is written to and/or read from the data buffer, e.g., 562, by the device using the SID and/or base address indicated in the command.

[0045] In one or more embodiments, a number of devices can be operated using commands and/or device dependent information in the system and/or host controller memory. As described above in association with Figure 5, host system memory 516 can be used to store device commands and device dependent information, and the host system processor 514 builds a command stored in device spaces in host system memory 516 for operating memory device 520-N.
Also in Figure 5, host controller memory 522 can be used to store device commands and device dependent information in device space 1 555 for operating device 1 (not shown in Figure 5) and store data in a buffer in data for command with SID j 564.

[0046] In one or more embodiments, once a command is written to the device space 1 555 in the host controller memory 522, a pointer 558 is written to the DID register, e.g., DID 1 registers 554, associated with the device commands and device dependent information in device space 1 555. The pointer 558 can be detected by the host controller 512, and the pointer 558 can be used to locate the command in the host controller memory 522. Once the command is located in the host controller memory 522, the command can be transferred to the memory device via the host controller 512 when the memory device initiates a DMA transfer of the command. The command is executed by the memory device, and data can be written to and/or read from the data buffer, e.g., 564, by the device using the SID and/or base address indicated in the command.

[0047] In one or more embodiments, host system memory 516 and/or host controller memory can be used to operate memory devices by including device class dependent data in the memory, and the host system processor 514 can build commands in the host system memory 516 and/or host controller memory 522 for execution by the memory device. Figure 5 illustrates one example of using system memory and/or host controller memory to operate two memory devices. In one or more embodiments, any combination of system memory 518 and/or host controller memory 522 can be used according to the embodiments described herein.

[0048] Figure 6 is a block diagram illustrating the operation of a memory system in accordance with one or more embodiments of the present disclosure. The embodiment illustrated and described in association with Figure 6 is for a read command from a device.
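The command hand-off of paragraphs [0044] and [0046] can be sketched in a few lines: the processor builds a command in a device space, a pointer to it is written to the DID register, the host controller detects the pointer, and the device fetches the command by DMA. The structures below are illustrative assumptions, not the disclosure's layout.

```python
# Sketch of the pointer-driven command hand-off (paragraphs [0044]/[0046]):
# processor builds command -> pointer written to DID register -> host
# controller detects pointer -> device DMA-transfers the command.
# Addresses and structures are hypothetical.

host_memory = {0x9000: {"opcode": "READ", "sid": 0}}  # command built by processor

class HostController:
    def __init__(self):
        self.did_pointer = None      # pointer field of the DID register

    def write_pointer(self, addr):   # processor sets the pointer after building
        self.did_pointer = addr

    def fetch_command(self):         # device-initiated DMA transfer of the command
        return host_memory[self.did_pointer]

hc = HostController()
hc.write_pointer(0x9000)
command = hc.fetch_command()         # command now available to the memory device
```

Note that the transfer is device-initiated: the host processor only builds the command and sets the pointer, which is what relieves it of per-transfer intervention.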
A read command consists of reading data from a memory device, transferring the data to system and/or host controller memory, and writing the data to the system and/or host controller memory; therefore, a read command consists of a read operation and a write operation. Also, one or more embodiments of the present disclosure can include a write command. A write command consists of reading data from system and/or host controller memory, transferring the data to a memory device, and writing the data to the memory device; therefore, a write command also consists of a read operation and a write operation.

[0049] A command for a device can be built and executed according to Figure 6. In Figure 6, the host system processor 614 can build a command, allocate data space, e.g., a data buffer, set a SID, and set a pointer 670. The host controller 612 can notify the device of the command data and pointer location (in system and/or host controller memory) 672. The device controller 624 receives this notification, and then the device can act upon the notification by receiving the command 674, and then the memory device initiates a DMA read from system and/or host controller memory 676.

[0050] The device controller 624 begins executing a read command 678, and a memory write to SID is initiated 680. A memory write to SID 680 can write requested data to system and/or host controller memory in the location indicated by the SID and the associated base address. Once a memory write to SID 680 is initiated and a DMA write to system memory is facilitated 682, the requested data is being read from the device and written to the system and/or host controller memory for use by the host system using device dependent information and commands from the host and/or host controller memory.

[0051] In Figure 6, once the requested data is transferred from the device to the system and/or host controller memory, an update command with completion status 684 is generated by device controller 624.
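The read-command sequence of Figure 6 can be condensed into an ordered event list, keyed by the figure's reference numerals. This is a sketch of the flow as described in paragraphs [0049] through [0051]; the step descriptions paraphrase the text.

```python
# The Figure 6 read-command flow as an ordered event log (a sketch;
# numbers are the figure's reference numerals, per paragraphs [0049]-[0051]).

def read_command_flow():
    return [
        (670, "processor builds command, allocates data buffer, sets SID and pointer"),
        (672, "host controller notifies device of command data and pointer location"),
        (674, "device controller receives the command"),
        (676, "device initiates DMA read from system/host controller memory"),
        (678, "device controller begins executing the read command"),
        (680, "memory write to SID location is initiated"),
        (682, "DMA write to system memory is facilitated"),
        (684, "device controller generates update command with completion status"),
    ]

steps = read_command_flow()
```

A write command would follow the mirror-image flow: a DMA read from the SID location followed by a transfer to the device, as paragraph [0053] below describes.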
The completion status indicator is indicated to the host controller 612, and a DMA write to system memory is facilitated 682.

[0052] Also as shown in Figure 6, a device can interrupt a command. Device controller 624 can conditionally write an interrupt into a DID register 686 on the host controller 612. The host controller 612 can conditionally interrupt the host 688, and the host system processor 614 can receive the interrupt and read the DID register to determine the reason for the interrupt 690.

[0053] The embodiment illustrated in Figure 6 is for a read command from a device. One or more embodiments of the present disclosure can also be used with a write command. A write command would include the host controller facilitating a DMA read from a SID location in system and/or host controller memory and then transferring the data to the device. The device controller could then write the data to the device.

Conclusion

[0054] The present disclosure includes methods, devices, and systems for controlling a memory device. One method for controlling a memory device embodiment includes storing device class dependent information and a command in one or more of host system memory and host controller memory, setting a pointer to the command in a register in a host controller, directing access to the one or more of host system memory and host controller memory with the memory device via the host controller; and executing the command with the memory device.

[0055] It will be understood that when an element is referred to as being "on," "connected to," or "coupled with" another element, it can be directly on, connected, or coupled with the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly on," "directly connected to," or "directly coupled with" another element, there are no intervening elements or layers present.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

[0056] It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. Thus, a first element could be termed a second element without departing from the teachings of the present disclosure.

[0057] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.

[0058] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment.
Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
Systems, apparatuses, and methods for accelerating accesses to private regions in a region-based cache directory scheme are disclosed. A system includes multiple processing nodes, one or more memory devices, and one or more region-based cache directories to manage cache coherence among the nodes' cache subsystems. Region-based cache directories track coherence on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. The cache directory entries for regions that are only accessed by a single node are cached locally at the node. Updates to the reference count for these entries are made locally rather than sending updates to the cache directory. When a second node accesses a first node's private region, the region is now considered shared, and the entry for this region is transferred from the first node back to the cache directory.
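The private-to-shared transition summarized above can be sketched as follows: a node keeps a local reference count for a region the directory has told it is private, updates that count with no directory traffic, and, when a second node accesses the region, hands the count back to the region-based cache directory and invalidates its local entry. All structures and method names below are illustrative assumptions.

```python
# Sketch of the private-region optimization: local reference counting at
# the owning node, with the entry transferred back to the region-based
# cache directory when the region becomes shared. Names are hypothetical.

class Node:
    def __init__(self, name):
        self.name = name
        self.private_regions = {}      # region -> local reference count

    def access_private(self, region):  # local update; no directory message sent
        self.private_regions[region] = self.private_regions.get(region, 0) + 1

    def evict_to_directory(self, region):
        # Send the reference count back and invalidate the local entry.
        return self.private_regions.pop(region)

class RegionDirectory:
    def __init__(self):
        self.shared = {}               # region -> (owner set, reference count)

    def second_node_access(self, region, first_node, second_node):
        count = first_node.evict_to_directory(region)
        # Increment for the new access and store in the directory entry.
        self.shared[region] = ({first_node.name, second_node.name}, count + 1)

n0, n1 = Node("node0"), Node("node1")
for _ in range(3):
    n0.access_private("regionA")       # three local accesses, zero directory traffic

directory = RegionDirectory()
directory.second_node_access("regionA", n0, n1)
```

The design point being illustrated: while a region is private, reference-count updates cost nothing on the interconnect; the directory only pays for the one transfer when sharing actually begins.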
WHAT IS CLAIMED IS

1. A system comprising:
a plurality of processing nodes, wherein each processing node of the plurality of processing nodes comprises one or more processors and a cache subsystem;
one or more memory devices; and
one or more region-based cache directories, wherein each region-based cache directory is configured to track shared regions of memory which have cache lines cached by at least two different processing nodes; and
wherein each processing node of the plurality of processing nodes is configured to maintain an entry with a reference count field to track a number of accesses by the processing node to separate cache lines of a given region responsive to receiving an indication from a corresponding region-based cache directory that the given region is private, wherein a private region is accessed by only a single processing node.

2. The system as recited in claim 1, wherein each processing node is further configured to perform updates to the reference count field of the entry without notifying the corresponding region-based cache directory while the reference count field is greater than zero.

3. The system as recited in claim 2, wherein each processing node is further configured to send the reference count field of the entry to the corresponding region-based cache directory and invalidate the entry responsive to receiving a notification from the corresponding region-based cache directory that a cache line of the region has been cached by another processing node.

4. The system as recited in claim 1, wherein each region-based cache directory of the one or more region-based cache directories is configured to maintain an entry for each shared region tracking a number of accesses by the plurality of processing nodes to separate cache lines of the shared region, wherein a shared region is accessed by at least two processing nodes.

5.
The system as recited in claim 4, wherein a first region-based cache directory of the one or more region-based cache directories is configured to:
perform a lookup responsive to receiving an indication that a cache line of a given region has been cached by a first processing node; and
send a notification to the first processing node to have the first processing node maintain a local region-based cache directory entry for the given region responsive to the lookup missing.

6. The system as recited in claim 5, wherein the first region-based cache directory is further configured to:
determine if a matching entry indicates that the given region was private responsive to the lookup hitting; and
responsive to determining that the matching entry indicates that the given region was private, send a notification to a second processing node, identified in the matching entry, that the given region is now shared.

7. The system as recited in claim 6, wherein the first region-based cache directory is further configured to:
receive the reference count from the second processing node responsive to the second processing node receiving the notification and sending the reference count to the region-based cache directory; and
increment and store the reference count in the matching entry.

8. A method comprising:
tracking, by one or more region-based cache directories, shared regions of memory which have cache lines cached by at least two different processing nodes; and
maintaining, by each processing node of a plurality of processing nodes, an entry with a reference count field to track a number of accesses by the processing node to separate cache lines of a given region responsive to receiving an indication from a corresponding region-based cache directory that the given region is private, wherein a private region is accessed by only a single processing node.

9.
The method as recited in claim 8, further comprising performing, by each processing node, updates to the reference count field of the entry without notifying the corresponding region-based cache directory while the reference count field is greater than zero.

10. The method as recited in claim 9, further comprising sending, by each processing node, the reference count field of the entry to the corresponding region-based cache directory and invalidating the entry responsive to receiving a notification from the corresponding region-based cache directory that a cache line of the region has been cached by another processing node.

11. The method as recited in claim 8, further comprising maintaining, by each region-based cache directory, an entry for each shared region tracking a number of accesses by the plurality of processing nodes to separate cache lines of the shared region, wherein a shared region is accessed by at least two processing nodes.

12. The method as recited in claim 11, further comprising:
performing, by a first region-based cache directory, a lookup responsive to receiving an indication that a cache line of a given region has been cached by a first processing node; and
sending, by the first region-based cache directory, a notification to the first processing node to have the first processing node maintain a local region-based cache directory entry for the given region responsive to the lookup missing.

13. The method as recited in claim 12, further comprising:
determining, by the first region-based cache directory, if a matching entry indicates that the given region was private responsive to the lookup hitting; and
responsive to determining that the given region was private, sending, by the first region-based cache directory, a notification to a second processing node, identified in the matching entry, that the given region is now shared.

14.
The method as recited in claim 13, further comprising:
receiving, by the first region-based cache directory, the reference count from the second processing node responsive to the second processing node receiving the notification and sending the reference count to the first region-based cache directory; and
incrementing and storing, by the first region-based cache directory, the reference count in the matching entry.

15. An apparatus comprising:
a plurality of processing nodes, wherein each processing node comprises one or more processors and a cache subsystem; and
a plurality of region-based cache directories, wherein each region-based cache directory is configured to track shared regions of memory which have cache lines cached by at least two different processing nodes; and
wherein a first processing node is configured to maintain an entry with a reference count field to track a number of accesses by the first processing node to separate cache lines of a first region responsive to receiving an indication from a corresponding region-based cache directory that the first region is only being accessed by the first processing node.

16. The apparatus as recited in claim 15, wherein the first processing node is further configured to perform updates to the reference count field of the entry without notifying the corresponding region-based cache directory while the reference count field is greater than zero.

17. The apparatus as recited in claim 16, wherein the first processing node is further configured to send the reference count field of the entry to the corresponding region-based cache directory and invalidate the entry responsive to receiving a notification from the corresponding region-based cache directory that a cache line of the first region has been cached by another processing node.

18.
The apparatus as recited in claim 15, wherein each region-based cache directory of the plurality of region-based cache directories is configured to maintain an entry for each shared region tracking a number of accesses by the plurality of processing nodes to separate cache lines of the shared region, wherein a shared region is accessed by at least two processing nodes.

19. The apparatus as recited in claim 18, wherein a first region-based cache directory of the plurality of region-based cache directories is configured to:
perform a lookup responsive to receiving an indication that a cache line of a second region has been cached by the first processing node; and
send a notification to the first processing node to have the first processing node maintain a local region-based cache directory entry for the second region responsive to the lookup missing.

20. The apparatus as recited in claim 19, wherein the first region-based cache directory is further configured to:
determine if a matching entry indicates that the second region was private responsive to the lookup hitting; and
responsive to determining that the matching entry indicates that the second region was private, send a notification to a second processing node, identified in the matching entry, that the second region is now shared.

21. The apparatus as recited in claim 19, wherein the first region-based cache directory is further configured to invalidate a second entry for the second region responsive to receiving responses to invalidation probes that all cache lines for the second region have been evicted.
ACCELERATING ACCESSES TO PRIVATE REGIONS IN A REGION-BASED CACHE DIRECTORY SCHEME

BACKGROUND

Description of the Related Art

[0001] Computer systems use main memory that is typically formed with inexpensive and high density dynamic random access memory (DRAM) chips. However, DRAM chips suffer from relatively long access times. To improve performance, data processors typically include at least one local, high-speed memory known as a cache. In a multi-core data processor, each data processor core can have its own dedicated level one (L1) cache, while other caches (e.g., level two (L2), level three (L3)) are shared by data processor cores.

[0002] Cache subsystems in a computing system include high-speed cache memories which store blocks of data. As used herein, a “block” is a set of bytes stored in contiguous memory locations, which are treated as a unit for coherency purposes. As used herein, each of the terms “cache block”, “block”, “cache line”, and “line” is interchangeable. In some implementations, a block can also be the unit of allocation and deallocation in a cache. The number of bytes in a block is varied according to design choice.

[0003] In multi-node computer systems, special precautions must be taken to maintain coherency of data that is being used by different processing nodes. For example, if a processor attempts to access data at a certain memory address, it must first determine whether the data is stored in another cache and has been modified. To implement this cache coherency protocol, caches typically contain multiple status bits to indicate the status of the cache line to maintain data coherency throughout the system. One common coherency protocol is known as the “MOESI” protocol. According to the MOESI protocol, each cache line includes status bits to indicate which MOESI state the line is in, including bits that indicate that the cache line has been modified (M), that the cache line is exclusive (E) or shared (S), or that the cache line is invalid (I).
The Owned (O) state indicates that the line is modified in one cache, that there may be shared copies in other caches, and that the data in memory is stale.

[0004] Cache directories are a key building block in high performance scalable systems. A cache directory is used to keep track of the cache lines that are currently in use by the system. A cache directory improves both memory bandwidth and probe bandwidth by performing a memory request or probe request only when required. Logically, the cache directory resides at the home node of a cache line which enforces the cache coherence protocol. The operating principle of a cache directory is inclusivity (i.e., a line that is present in a central processing unit (CPU) cache must be present in the cache directory). In a cache line based directory scheme, each cache line is tracked individually. So, the size of the cache directory has to increase linearly with the total capacity of all of the CPU cache subsystems in the computing system. The total CPU cache size tends to grow exponentially as memory technology improves. Accordingly, a line-based cache directory scheme is not able to keep up with the exponential growth of the CPU cache size.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] The advantages of the methods and mechanisms described herein may be better understood by referring to the following description in conjunction with the accompanying drawings, in which:

[0006] FIG. 1 is a block diagram of one implementation of a computing system.

[0007] FIG. 2 is a block diagram of one implementation of a core complex.

[0008] FIG. 3 is a block diagram of one implementation of a multi-CPU system.

[0009] FIG. 4 is a block diagram of one implementation of a region-based cache directory.

[0010] FIG. 5 illustrates one implementation of a private region-based cache directory entry.

[0011] FIG.
6 is a generalized flow diagram illustrating one implementation of a method for accelerating accesses to private regions for a region-based cache directory scheme.

[0012] FIG. 7 is a generalized flow diagram illustrating one implementation of a method for maintaining a region-based cache directory.

[0013] FIG. 8 is a generalized flow diagram illustrating one implementation of a method for managing region-based cache directory entries.

DETAILED DESCRIPTION OF IMPLEMENTATIONS

[0014] In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one having ordinary skill in the art should recognize that the various implementations may be practiced without these specific details. In some instances, well-known structures, components, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements.

[0015] Systems, apparatuses, and methods for accelerating accesses to private regions in a region-based cache directory scheme are disclosed. A system includes multiple processing nodes, with each processing node including a cache subsystem. A system also includes one or more memory devices and one or more region-based cache directories to help manage cache coherency among the nodes’ cache subsystems. In order to reduce the number of entries in the cache directories, the cache directories track coherency on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. The cache directory entries for private regions that are only accessed by a single node are cached locally at the node.
Updates to the reference count for these entries are made locally rather than sending updates to the cache directory. When a second node accesses a first node’s private region, the region is now considered shared, and the entry for this region is transferred from the first node back to the cache directory.

[0016] Referring now to FIG. 1, a block diagram of one implementation of a computing system 100 is shown. In one implementation, computing system 100 includes at least core complexes 105A-N, input/output (I/O) interfaces 120, bus 125, memory controller(s) 130, and network interface 135. In other implementations, computing system 100 includes other components and/or computing system 100 is arranged differently. In one implementation, each core complex 105A-N includes one or more general purpose processors, such as central processing units (CPUs). It is noted that a “core complex” can also be referred to as a “processing node” or a “CPU” herein. In some implementations, one or more core complexes 105A-N include a data parallel processor with a highly parallel architecture. Examples of data parallel processors include graphics processing units (GPUs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and so forth. Each processor core within core complex 105A-N includes a cache subsystem with one or more levels of caches. In one implementation, each core complex 105A-N includes a cache (e.g., level three (L3) cache) which is shared between multiple processor cores.

[0017] Memory controller(s) 130 are representative of any number and type of memory controllers accessible by core complexes 105A-N. Memory controller(s) 130 are coupled to any number and type of memory devices (not shown).
For example, the type of memory in memory device(s) coupled to memory controller(s) 130 can include Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), NAND Flash memory, NOR flash memory, Ferroelectric Random Access Memory (FeRAM), or others. I/O interfaces 120 are representative of any number and type of I/O interfaces (e.g., peripheral component interconnect (PCI) bus, PCI-Extended (PCI-X), PCIE (PCI Express) bus, gigabit Ethernet (GBE) bus, universal serial bus (USB)). Various types of peripheral devices are coupled to I/O interfaces 120. Such peripheral devices include (but are not limited to) displays, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth.

[0018] In various implementations, computing system 100 is a server, computer, laptop, mobile device, game console, streaming device, wearable device, or any of various other types of computing systems or devices. It is noted that the number of components of computing system 100 varies from implementation to implementation. In other implementations, there are more or fewer of each component than the number shown in FIG. 1. It is also noted that in other implementations, computing system 100 includes other components not shown in FIG. 1 and/or is structured in other ways.

[0019] Turning now to FIG. 2, a block diagram of one implementation of a core complex 200 is shown. In one implementation, core complex 200 includes four processor cores 210A-D. In other implementations, core complex 200 includes other numbers of processor cores. It is noted that a “core complex” can also be referred to as a “processing node” or “CPU” herein. In one implementation, the components of core complex 200 are included within core complexes 105A-N (of FIG.
1).

[0020] Each processor core 210A-D includes a cache subsystem for storing data and instructions retrieved from the memory subsystem (not shown). For example, in one implementation, each core 210A-D includes a corresponding level one (L1) cache 215A-D. In one implementation, each processor core 210A-D includes or is coupled to a corresponding level two (L2) cache 220A-D. Additionally, in one implementation, core complex 200 includes a level three (L3) cache 230 which is shared by the processor cores 210A-D. In this implementation, L3 cache 230 is coupled to a coherent master for access to the fabric and memory subsystem. It is noted that in other implementations, core complex 200 includes other types of cache subsystems with other numbers of caches and/or with other configurations of the different cache levels.

[0021] In one implementation, private region-based cache directory entries 240 are stored within L3 cache 230. In another implementation, private region-based cache directory entries 240 are stored in a coherent master (not shown) coupled to core complex 200. In other implementations, private region-based cache directory entries 240 are stored in other locations within core complex 200 or external to core complex 200.

[0022] Each entry in private region-based cache directory entries 240 tracks a private region that has at least one cache line accessed by any of the cores 210A-D of core complex 200. As used herein, the term “private region” is defined as a region which has cache lines cached in only a single processing node of the overall computing system. When a cache line of a given region is allocated in L1 caches 215A-D, a lookup is performed of private region-based cache directory entries 240 for the given region. If an entry is already allocated in private region-based cache directory entries 240 for the given region, then a reference count of the matching entry is incremented.
If the lookup of private region-based cache directory entries 240 is a miss for the given region, then an indication of the miss is sent to the corresponding region-based cache directory (not shown). If the corresponding region-based cache directory responds with a message indicating the given region is a private region (i.e., no other processing nodes have cache lines cached for the given region), then a new entry is allocated for the given region in private region-based cache directory entries 240.

[0023] If a given cache line in L1 caches 215A-D or L2 caches 220A-D is evicted or invalidated by a coherency probe, and if private region-based cache directory entries 240 has an entry for the region of this given cache line, then the reference count for this entry is decremented. If the reference count for the entry goes to zero, then this entry is marked as invalid and can be reclaimed. Also, when the reference count for a private region-based cache directory entry 240 goes to zero, a notification is sent to the region-based cache directory. In response to receiving this message, a corresponding entry in the region-based cache directory is invalidated.

[0024] Referring now to FIG. 3, a block diagram of one implementation of a multi-CPU system 300 is shown. In one implementation, system 300 includes multiple CPUs 305A-N. The number of CPUs per system varies from implementation to implementation. Each CPU 305A-N includes any number of cores 308A-N, respectively, with the number of cores varying according to the implementation. Each CPU 305A-N also includes a corresponding cache subsystem 310A-N. Each cache subsystem 310A-N includes any number of levels of caches and any type of cache hierarchy structure.

[0025] In one implementation, each cache subsystem 310A-N includes private region-based cache directory entries 312A-N, respectively.
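The node-local bookkeeping described in paragraphs [0022] and [0023] — increment on a local hit, consult the directory on a miss, decrement on eviction, and notify the directory when the count reaches zero — can be sketched behaviorally. This is a minimal illustrative model, not the patent's implementation; the class and method names are assumptions.

```python
# Hypothetical behavioral model of a node's private region-based cache
# directory entries (paragraphs [0022]-[0023]); names are illustrative.

class PrivateRegionEntries:
    """Per-node table of reference counts for regions believed to be private."""

    def __init__(self):
        self.entries = {}  # region tag -> reference count

    def on_line_allocated(self, region):
        """Called when a cache line of `region` is filled into this node's caches."""
        if region in self.entries:
            self.entries[region] += 1      # local hit: just bump the count
            return "local_update"
        return "notify_directory"          # miss: ask the region-based directory

    def on_private_confirmed(self, region):
        """Directory replied that the region is private: allocate a local entry."""
        self.entries[region] = 1

    def on_line_evicted(self, region):
        """Called on eviction or probe invalidation of a line from `region`."""
        if region not in self.entries:
            return None
        self.entries[region] -= 1
        if self.entries[region] == 0:
            del self.entries[region]       # entry becomes invalid and reusable
            return "notify_directory_zero" # directory invalidates its entry too
        return "local_update"
```

The key property is that the common case (a hit in `entries`) touches only local state, which is the probe-traffic reduction the scheme is after.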
For example, cache subsystem 310A includes any number of private region-based cache directory entries 312A, with each private region-based cache directory entry storing information for a corresponding region which has only been accessed by CPU 305A. For example, when CPU 305A caches a cache line in cache subsystem 310A for a given region and the given region is private, then a cache directory entry is maintained by cache subsystem 310A for the given region. The cache directory entry tracks the number of cache lines which have been accessed by CPU 305A within the given region. If at some later point in time the given region becomes shared (i.e., another CPU 305B-N accesses the given region), then the reference count from the entry 312A is sent to the corresponding cache directory and then the entry 312A is discarded. The other cache subsystems 310B-N include private region-based cache directory entries 312B-N storing information for their respective private regions. In one implementation, CPU 305A performs a write-through to the corresponding cache directory when CPU 305A updates the local cached copy of a private region cache directory entry 312A. This enables CPU 305A to discard a local cache directory entry 312A when it is time to replace the local cache directory entry 312A, since the corresponding cache directory is already in sync. For example, in one implementation, a replacement of a local cache directory entry 312A occurs in response to a capacity eviction from the local storage area.

[0026] In one implementation, each CPU 305A-N is connected to a corresponding coherent master 315A-N. In another implementation, the CPU-based cache directories 312A-N are stored in coherent masters 315A-N, respectively, rather than being stored in the cache hierarchy of respective CPUs 305A-N. As used herein, a “coherent master” is defined as an agent that processes traffic flowing over an interconnect (e.g., bus/fabric 318) and manages coherency for a connected CPU.
To manage coherency, a coherent master receives and processes coherency-related messages and probes, and the coherent master generates coherency-related requests and probes. It is noted that a “coherent master” can also be referred to as a “coherent master unit” herein.

[0027] In one implementation, each CPU 305A-N is coupled to a pair of coherent slaves via a corresponding coherent master 315A-N and bus/fabric 318. For example, CPU 305A is coupled through coherent master 315A and bus/fabric 318 to coherent slaves 320A-B. In other implementations, bus/fabric 318 includes connections to other components which are not shown to avoid obscuring the figure. For example, in another implementation, bus/fabric 318 includes connections to one or more I/O interfaces and one or more I/O devices.

[0028] Coherent slave (CS) 320A is coupled to memory controller (MC) 330A and coherent slave 320B is coupled to memory controller 330B. Coherent slave 320A is coupled to region-based cache directory (CD) 325A, with region-based cache directory 325A including entries for memory regions that have cache lines cached in system 300 for the memory accessible through memory controller 330A. It is noted that region-based cache directory 325A, and each of the other region-based cache directories 325B, 345A-B, and 360A-B, can also be referred to as a “probe filter”. Coherent slave 320B is coupled to region-based cache directory 325B, with region-based cache directory 325B including entries for memory regions that have cache lines cached in system 300 for the memory accessible through memory controller 330B. It is noted that the example of having two memory controllers per CPU is merely indicative of one implementation.
It should be understood that in other implementations, each CPU 305A-N can be connected to other numbers of memory controllers besides two.

[0029] In a similar configuration to that of CPU 305A, CPU 305B is coupled to coherent slaves 335A-B via coherent master 315B and bus/fabric 318. Coherent slave 335A is coupled to memory via memory controller 350A, and coherent slave 335A is also coupled to region-based cache directory 345A to manage the coherency of cache lines corresponding to memory accessible through memory controller 350A. Coherent slave 335B is coupled to region-based cache directory 345B, and coherent slave 335B is coupled to memory via memory controller 350B. Also, CPU 305N is coupled to coherent slaves 355A-B via coherent master 315N and bus/fabric 318. Coherent slaves 355A-B are coupled to region-based cache directories 360A-B, respectively, and coherent slaves 355A-B are coupled to memory via memory controllers 365A-B, respectively. As used herein, a “coherent slave” is defined as an agent that manages coherency by processing received requests and probes that target a corresponding memory controller. It is noted that a “coherent slave” can also be referred to as a “coherent slave unit” herein. Additionally, as used herein, a “probe” is defined as a message passed from a coherency point to one or more caches in the computer system to determine if the caches have a copy of a block of data and optionally to indicate the state into which the cache should place the block of data.

[0030] When a coherent slave receives a memory request targeting its corresponding memory controller, the coherent slave performs a lookup of its corresponding region-based cache directory to determine if the request targets a region which has at least one cache line cached in any of the cache subsystems. In one implementation, each region-based cache directory 325A-B, 345A-B, and 360A-B in system 300 tracks regions of memory, wherein a region includes a plurality of cache lines.
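The lookup in paragraph [0030] amounts to mapping an incoming address to a region tag and checking the directory. A minimal sketch follows; the 64-byte line size and 256-lines-per-region figures are assumptions for illustration, since the patent leaves the region size as an implementation choice, and the dict-based directory and function names are hypothetical.

```python
# Illustrative mapping from a physical address to a region tag and the probe
# decision of paragraph [0030]. Region geometry below is an assumption.

LINE_SIZE = 64           # bytes per cache line (assumed)
LINES_PER_REGION = 256   # cache lines per tracked region (assumed)
REGION_SIZE = LINE_SIZE * LINES_PER_REGION  # 16 KB per region

def region_tag(addr: int) -> int:
    """Tag identifying the region containing `addr`."""
    return addr // REGION_SIZE

def handle_request(directory: dict, addr: int):
    """Return the probe decision for a request, per the lookup in [0030]."""
    tag = region_tag(addr)
    entry = directory.get(tag)
    if entry is None:
        return ("no_probe", None)        # no cached lines anywhere in the region
    return ("probe", entry["owners"])    # probe the CPU(s) named in the entry
```

Tracking at this granularity is what shrinks the directory: one entry covers 256 lines instead of one.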
The size of the region being tracked can vary from implementation to implementation. By tracking at a granularity of a region rather than at the finer granularity of a cache line, the size of each region-based cache directory 325A-B, 345A-B, and 360A-B is reduced. It is noted that a “region” can also be referred to as a “page” herein. When a request is received by a coherent slave, the coherent slave determines the region which is targeted by the request. Then a lookup is performed of the region-based cache directory for this region. If the lookup results in a hit, then the coherent slave sends a probe to the CPU(s) which are identified in the hit entry. The type of probe that is generated by the coherent slave depends on the coherency state specified by the hit entry.

[0031] Turning now to FIG. 4, a block diagram of one implementation of a region-based cache directory 400 is shown. In one implementation, region-based cache directories 325A-B, 345A-B, and 360A-B (of FIG. 3) include the functionality shown in region-based cache directory 400. In one implementation, region-based cache directory 400 includes control unit 405 and array 410. Array 410 includes any number of entries, with the number of entries varying according to the implementation. In one implementation, each entry of array 410 includes a private/shared status field 415, state field 420, sector valid field 425, cluster valid field 430, reference count field 435, and tag field 440. In other implementations, the entries of array 410 include other fields, omit one or more of the illustrated fields, and/or are arranged in other suitable manners.

[0032] The private/shared status field 415 indicates whether the corresponding region is private or shared. A private region is accessed by only a single processing node or a single device. In other words, cache lines from a private region are cached by only a single processing node or a single device.
A shared region is accessed by two or more processing nodes or devices. In other words, cache lines from a shared region are cached by two or more processing nodes or devices. If a region is private, then the reference count field 435 for the region is maintained locally by the processing node or device which is accessing the region. If a region is shared, then the cache directory maintains the reference count field 435 for the region. When a region transitions between the private and shared states, the reference count field 435 is transferred between the node or device and the cache directory.

[0033] The state field 420 includes state bits that specify the aggregate state of the region. In one implementation, the aggregate state is a reflection of the most restrictive cache line state for this particular region. For example, the state for a given region is stored as “dirty” even if only a single cache line of the entire given region is dirty. Also, the state for a given region is stored as “shared” even if only a single cache line of the entire given region is shared.

[0034] The sector valid field 425 stores a bit vector corresponding to sub-groups or sectors of lines within the region to provide fine-grained tracking. The organization of sub-groups and the number of bits in sector valid field 425 vary according to the implementation. In one implementation, two lines are tracked within a particular region entry using sector valid field 425. In another implementation, other numbers of lines are tracked within each region entry. In this implementation, sector valid field 425 is used to indicate the number of partitions that are being individually tracked within the region. Additionally, the partitions are identified using offsets which are stored in sector valid field 425. Each offset identifies the location of the given partition within the given region.
Sector valid field 425, or another field of the entry, also indicates separate owners and separate states for each partition within the given region. The cluster valid field 430 includes a bit vector to track the presence of the region across various CPU cache clusters. For example, in one implementation, CPUs are grouped together into clusters of CPUs. The bit vector stored in cluster valid field 430 is used to reduce probe destinations for regular coherency probes and region invalidation probes.

[0035] The reference count field 435 is used to track the number of cache lines of the region which are cached somewhere in the system. On the first access to a region, an entry is installed in array 410, and the processing node or device which made the first access to the region maintains the reference count field 435 for the region in a locally maintained entry. The reference count field 435 is set to one on the first access to the region. Each time the same processing node or device accesses a cache line from this region, the reference count is incremented. As long as the region stays private, these accesses only require updating the reference count in the locally maintained entry, and a notification to the cache directory does not need to be sent. This helps to reduce the amount of probe traffic sent on the fabric. As cache lines from this region get evicted by the caches of the same processing node or device, or invalidated by a coherency probe, the reference count decrements. Eventually, if the reference count reaches zero, the same processing node or device notifies the cache directory and then the entry is marked as invalid, allowing the entry to be reused for another region. If another processing node or device accesses a cache line from the region, causing the region to transition from private to shared, then the cache directory will start to manage the reference count field 435 for the region.
The tag field 440 includes the tag bits that are used to identify the entry associated with a particular region.

[0036] Referring now to FIG. 5, one implementation of a private region-based cache directory entry 500 is shown. In one implementation, each entry of private region-based cache directory entries 312A-N (of FIG. 3) is structured as shown in private region-based cache directory entry 500. In one implementation, private region-based cache directory entry 500 includes at least a state field 505, reference count field 510, and tag field 515. The state field 505 includes state bits that specify the status (e.g., coherency states such as dirty, shared, valid, invalid, etc.) of the region. In one implementation, the status is specified to represent the most restrictive cache line state for this particular region. The reference count field 510 tracks the number of different cache lines of the region that are cached by the node or device. Tag field 515 includes the tag bits that are used to identify the entry associated with a particular region. In other implementations, private region-based cache directory entry 500 includes other fields and/or is arranged in other suitable manners.

[0037] In one implementation, a device locally stores one or more private region-based cache directory entries 500 to accelerate accesses to regions with a coherency state of “invalid” which are known to miss in the cache directory. For example, an I/O device or direct memory access (DMA) device which creates content in memory stores a plurality of private region-based cache directory entries 500 for the regions of memory where the newly created content is being stored. A first write to a given region by a device creates a corresponding private region-based cache directory entry 500 with a state set to “invalid”, while subsequent writes to the given region by the device will hit on this entry 500.
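The two entry layouts described above (array 410 in FIG. 4 and entry 500 in FIG. 5) can be summarized as simple record types. This is an illustrative sketch only: the patent names the fields but not their widths or encodings, so the Python types and defaults below are assumptions.

```python
# Illustrative layouts for a region-based cache directory entry (array 410,
# FIG. 4) and a node-local private entry (entry 500, FIG. 5). Types and
# defaults are assumptions; the patent specifies the fields, not the encoding.
from dataclasses import dataclass

@dataclass
class DirectoryEntry:           # one entry of array 410
    tag: int                    # tag field 440: identifies the region
    private: bool               # private/shared status field 415
    state: str                  # state field 420: most restrictive line state
    sector_valid: int = 0       # sector valid field 425: bit vector of sectors
    cluster_valid: int = 0      # cluster valid field 430: bit vector of clusters
    reference_count: int = 0    # field 435: maintained here only while shared

@dataclass
class PrivateEntry:             # one node-local entry 500
    tag: int                    # tag field 515
    state: str                  # state field 505
    reference_count: int = 1    # field 510: set to one on the first access
```

Note the asymmetry the text describes: `DirectoryEntry.reference_count` is meaningful only while the region is shared, while a `PrivateEntry` carries the count for as long as the region stays private.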
On a write to a region with a private region coherency state of invalid, only local storage is needed and a write-through update to the cache directory is not required. Additionally, in such a case, no reference count update is needed. Rather, the local storage is simply being used to accelerate writes which are known to miss in the cache directory. In this manner, latencies due to a directory lookup are reduced and throughput is increased.

[0038] Turning now to FIG. 6, one implementation of a method 600 for accelerating accesses to private regions for a region-based cache directory scheme is shown. For purposes of discussion, the steps in this implementation and those of FIGS. 7-8 are shown in sequential order. However, it is noted that in various implementations of the described methods, one or more of the elements described are performed concurrently, in a different order than shown, or are omitted entirely. Other additional elements are also performed as desired. Any of the various systems or apparatuses described herein are configured to implement method 600.

[0039] A control unit tracks regions of memory which have cache lines cached anywhere within a computing system (block 605). For each region that has been accessed, the control unit tracks whether the region is private (i.e., only a single node or device has accessed the region) or shared (i.e., two or more nodes or devices have accessed the region) (conditional block 610). If the region is shared (conditional block 610, “shared” leg), then the region-based cache directory maintains the cache directory entry for the region (block 615). After block 615, method 600 returns to block 605. If the region is private (conditional block 610, “private” leg), then the control unit sends a message to the node or device which is accessing the region to maintain the cache directory entry for the region (block 620).
Updates to the cache directory entry are performed by the node or device in response to detecting accesses, evictions, or invalidations for cache lines of the region (block 625). If another device or node subsequently accesses the region (conditional block 630, “yes” leg), then the control unit of the region-based cache directory retrieves the cache directory entry from the node or device (block 635). Otherwise, if no accesses by other devices or nodes to the region are detected (conditional block 630, “no” leg), then the cache directory entry remains with the node or device (block 640). After blocks 635 and 640, method 600 returns to block 605. It is noted that in one implementation, method 600 is performed for each region of the memory space that has been accessed. [0040] Referring now to FIG. 7, one implementation of a method 700 for maintaining a region-based cache directory is shown. A region-based cache directory receives an indication that a cache line of a given region has been cached by a first device (block 705). It is assumed for the purposes of this discussion that this is the first cache line of the given region being cached by the first device. In other words, prior to the cache line of the given region being cached by the first device, the first device did not have any other cache lines of the given region in its cache subsystem. Depending on the implementation, the first device can be a node, an I/O device, or another type of device. In response to receiving the indication, a lookup is performed by the region-based cache directory for the given region (block 710). If the lookup is a hit (conditional block 715, “hit” leg), and if the matching entry stores an indication that the given region is private (conditional block 720, “yes” leg), then the region-based cache directory sends a notification to a second device, identified in the matching entry as caching at least one cache line of the given region, that the given region is now shared (block 725).
If the lookup is a hit, this means that the given region is now shared among at least two different devices. In this case, the region-based cache directory will maintain the cache directory entry for the given region, and devices will send updates to the region-based cache directory for additional cache lines of the given region being cached or for evicted or invalidated cache lines of the given region. [0041] In response to receiving the notification, the second device sends a reference count from its local entry for the given region to the region-based cache directory, and then the second device discards the local entry for the given region (block 730). Alternatively, in another implementation, the second device invalidates the private entry for the given region rather than discarding the private entry. The region-based cache directory receives the reference count from the second device, increments the reference count, and stores the new reference count in an entry in the region-based cache directory (block 735). The region-based cache directory also stores an indication in the matching entry that the given region is shared (block 740). [0042] If the lookup is a miss (conditional block 715, “no” leg), then the region-based cache directory stores an indication that the given region is a private region of the first device (block 745). Also, the region-based cache directory sends a notification to the first device to have the first device maintain a local cache directory entry for the given region (block 750). If at a later point in time another device accesses the given region, then the region-based cache directory will retrieve the reference count from the first device and start to maintain the entry for the given region. After blocks 740 and 750, method 700 ends.
If the matching entry stores an indication that the given region is shared (conditional block 720, “no” leg), then the region-based cache directory increments the reference count of the matching entry (block 755). After block 755, method 700 ends. [0043] Turning now to FIG. 8, one implementation of a method 800 for managing region-based directory entries is shown. A device evicts or invalidates a cache line of a given region of memory (block 805). In various implementations, the device can be a processing node, an I/O device, or another type of device. In response to evicting or invalidating the cache line of the given region, the device determines whether there is a local region-based cache directory entry for the given region (conditional block 810). In other words, in conditional block 810 the device is determining whether the given region is private (i.e., the region-based cache directory entry is stored locally at the device) or shared (i.e., the region-based cache directory entry is maintained by the region-based cache directory). [0044] If a region-based cache directory entry for the given region is stored locally at the device (conditional block 810, “yes” leg), then the device decrements a reference count of the locally stored region-based cache directory entry (block 815). If the reference count is equal to zero after being decremented (conditional block 820, “yes” leg), then the device discards the entry and sends a notification to the region-based cache directory (block 825). Otherwise, if the reference count of the entry is greater than zero (conditional block 820, “no” leg), then the device continues to maintain the entry and forgoes sending a notification to the region-based cache directory (block 830). After block 830, method 800 ends.
If the device does not have a locally stored region-based directory entry for the given region (conditional block 810, “no” leg), then the device sends an indication of the eviction or invalidation to the region-based cache directory (block 835). After block 835, method 800 ends. [0045] In various implementations, program instructions of a software application are used to implement the methods and/or mechanisms described herein. For example, program instructions executable by a general or special purpose processor are contemplated. In various implementations, such program instructions can be represented by a high-level programming language. In other implementations, the program instructions can be compiled from a high-level programming language to a binary, intermediate, or other form. Alternatively, program instructions can be written that describe the behavior or design of hardware. Such program instructions can be represented by a high-level programming language, such as C. Alternatively, a hardware design language (HDL) such as Verilog can be used. In various implementations, the program instructions are stored on any of a variety of non-transitory computer readable storage mediums. The storage medium is accessible by a computing system during use to provide the program instructions to the computing system for program execution. Generally speaking, such a computing system includes at least one or more memories and one or more processors configured to execute program instructions. [0046] It should be emphasized that the above-described implementations are only non-limiting examples of implementations. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
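The private-to-shared handoff of methods 700 and 800 (a private entry and its reference count live at the owning device until a second device accesses the region, at which point the count migrates to the directory) can be modeled in a few lines of Python. This is a behavioral sketch under simplifying assumptions: the directory is a dictionary and inter-device messages are direct method calls; all names are illustrative:

```python
class RegionDirectory:
    """Behavioral model of the private/shared handoff (cf. FIGS. 7-8)."""
    def __init__(self):
        self.entries = {}  # region -> {"owner", "shared", "refs"}
        self.local = {}    # (device, region) -> local reference count

    def cache_line(self, device, region):
        """A device caches its first or an additional line of a region."""
        e = self.entries.get(region)
        if e is None:
            # Miss (blocks 745/750): region is private; the device keeps a
            # local entry and the directory only records the owner.
            self.entries[region] = {"owner": device, "shared": False, "refs": 0}
            self.local[(device, region)] = 1
        elif not e["shared"]:
            if e["owner"] == device:
                self.local[(device, region)] += 1  # still private: local update
            else:
                # Second device (blocks 725-740): retrieve the reference
                # count from the owner and mark the region shared.
                refs = self.local.pop((e["owner"], region))
                e.update(owner=None, shared=True, refs=refs + 1)
        else:
            e["refs"] += 1  # already shared (block 755)

    def evict_line(self, device, region):
        """A device evicts or invalidates a line of a region (FIG. 8)."""
        key = (device, region)
        if key in self.local:
            # Private region: decrement locally (block 815); discard the
            # entry and notify the directory when the count reaches zero.
            self.local[key] -= 1
            if self.local[key] == 0:
                del self.local[key]
                del self.entries[region]
        else:
            # Shared region: send the update to the directory (block 835).
            self.entries[region]["refs"] -= 1
```

For example, two writes by device A leave the region private with a local count of 2; the first access by device B pops that count from A, and the directory stores a shared entry with a reference count of 3.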
Examples of the present disclosure describe a 3D stacking apparatus. The 3D stacking apparatus includes a plurality of semiconductor chips stacked in a vertical direction such that each chip in the stack is bonded to the chip at an upper side, a lower side, or at both the upper side and the lower side. In one embodiment, each chip is the same, for example, with the same circuitry disposed in the same configuration in the chip. The 3D stacking device provides a redundant logic layer by dividing the chips into a plurality of strips that are interconnected by inter-chip bridges. For example, the 3D stacking apparatus may include three stacked chips that are divided into three different strips, where each strip includes a portion of each of the chips. As long as at most one portion in a strip is non-functional, the inter-chip bridges allow the other portions in the strip to receive and route data.
1. A 3D stacking device, comprising: a plurality of semiconductor chips stacked vertically on each other, wherein each of the plurality of chips is logically divided into the same number of portions, and wherein each of the portions in a same chip is separated from the adjacent portions in that chip; wherein corresponding portions of each of the plurality of chips in a column are grouped together to form a first strip, the corresponding portions in the first strip including a deactivated portion and a first activated portion; and wherein at least one inter-chip bridge adjacent to the first strip is configured to route data from a second activated portion in an adjacent strip to the first activated portion in the first strip, the second activated portion being in the same chip as the deactivated portion but in a different chip from the first activated portion.
2. The 3D stacking device according to claim 1, wherein the adjacent strip includes a different deactivated portion, wherein at least one inter-chip bridge adjacent to the adjacent strip is configured to route data around the different deactivated portion, and wherein the first strip and the adjacent strip each have at most one deactivated portion.
3. The 3D stacking device according to claim 1, wherein the corresponding portions in the first strip have the same circuitry.
4. The 3D stacking device according to claim 3, wherein the corresponding portions in the first strip have the same circuitry as the portions in the adjacent strip.
5. The 3D stacking device according to claim 3, wherein the corresponding portions in the first strip have different circuitry from the portions in the adjacent strip, and wherein the portions in the adjacent strip have the same circuitry.
6. The 3D stacking device according to claim 1, wherein each of the inter-chip bridges includes a through via to an adjacent chip.
7. The 3D stacking device according to claim 1, wherein the plurality of chips includes at least three
chips, wherein an inter-chip bridge in a middle chip of the plurality of chips includes a first connection to an upper chip, a second connection to a lower chip, and a third connection that couples adjacent portions within the middle chip together, wherein the inter-chip bridge in the middle chip is configured such that only one of the first connection, the second connection, and the third connection is operable to transfer data during operation of the 3D stacking device.
8. The 3D stacking device according to claim 7, wherein the inter-chip bridge in the middle chip includes a first driver for driving the first connection, a second driver for driving the second connection, and a multiplexer provided on the third connection, wherein the inter-chip bridge in the middle chip is configured such that, at runtime, data flows through only one of the first driver and the second driver.
9. The 3D stacking device according to claim 8, wherein the middle chip is configured such that, during operation, data received from one of the lower chip and the upper chip flows through the multiplexer to one of the adjacent portions within the middle chip.
3D Stacking Device

Technical Field

The present disclosure relates generally to providing redundancy in a 3D stacked device including multiple chips.

Background

Field-programmable gate arrays (FPGAs) can be packaged into 2.5D packages, where the FPGAs are placed on a common substrate or interposer. That is, the FPGAs are bonded side-by-side to the same surface of the interposer. The interposer is usually passive (e.g., does not include active components such as transistors) and includes data paths for coupling the FPGAs to each other. In addition, the package may include an additional or redundant FPGA to improve yield, since one of the FPGAs may be non-functional due to production defects. Therefore, the package may include four FPGAs but be advertised as a 3-FPGA system with one redundant FPGA. As long as at most one of the FPGAs in the package is defective (which cannot be determined until the FPGAs are mounted on the interposer and tested), the package can be sold as a 3-FPGA system. If multiple FPGAs are found to be defective after testing, the package may be discarded or sold as a different system.

Summary

The present disclosure describes techniques for configuring a 3D stacked device to provide at least one redundant logical layer. One example is a 3D stacking device that includes a plurality of semiconductor chips stacked vertically on each other, where each of the plurality of chips is logically divided into the same number of portions, and where each portion of a chip is separated from the adjacent portions in that chip. Corresponding portions of each of the plurality of chips in a column are grouped together to form a first strip, wherein the corresponding portions in the first strip include a deactivated portion and a first activated portion.
At least one inter-chip bridge adjacent to the first strip is configured to route data from a second activated portion in an adjacent strip to the first activated portion in the first strip, wherein the second activated portion is in the same chip as the deactivated portion but in a different chip from the first activated portion.

The example can have one or more of the following characteristics. The adjacent strip includes a different deactivated portion, wherein at least one inter-chip bridge adjacent to the adjacent strip is configured to route data around the different deactivated portion, and wherein the first strip and the adjacent strip each have at most one deactivated portion. The corresponding portions in the first strip have the same circuitry. The corresponding portions in the first strip have the same circuitry as the portions in the adjacent strip. Alternatively, the corresponding portions in the first strip have different circuitry from the portions in the adjacent strip, while the portions in the adjacent strip have the same circuitry. Each of the inter-chip bridges includes a through via to an adjacent chip. The plurality of chips includes at least three chips, wherein an inter-chip bridge in a middle chip of the plurality of chips includes a first connection to an upper chip, a second connection to a lower chip, and a third connection that couples adjacent portions within the middle chip together, wherein the inter-chip bridge in the middle chip is configured such that only one of the first connection, the second connection, and the third connection is operable to transfer data during operation of the 3D stacking device. The inter-chip bridge in the middle chip includes a first driver for driving the first connection, a second driver for driving the second connection, and a multiplexer provided on the third connection.
The inter-chip bridge in the middle chip is configured such that, at runtime, data flows through only one of the first driver and the second driver. The middle chip is configured such that, during operation, data received from one of the lower chip and the upper chip flows through the multiplexer to one of the adjacent portions within the middle chip.

The present disclosure also describes an example method for configuring a 3D stacking apparatus that includes a plurality of semiconductor chips stacked vertically on each other. The method includes testing multiple portions of each of the plurality of chips, wherein each of the plurality of chips is logically divided into the same number of portions, and wherein each of the portions in a same chip is separated from the adjacent portions by an inter-chip bridge. The method includes identifying at least one non-functional portion in a first strip, wherein the first strip includes a corresponding portion of each of the plurality of chips in a column, and wherein the strip includes a first activated portion in addition to the non-functional portion. The method further includes configuring at least one inter-chip bridge adjacent to the first strip to route data from a second activated portion in an adjacent strip to the first activated portion in the first strip, wherein the second activated portion is in the same chip as the non-functional portion but in a different chip from the first activated portion.

BRIEF DESCRIPTION OF THE DRAWINGS

A more particular description of the features briefly summarized above may be had by reference to exemplary embodiments, some of which are illustrated in the accompanying drawings.
It should be noted, however, that the drawings illustrate only typical exemplary embodiments and are therefore not to be considered limiting of scope.

FIG. 1 is a 3D stacking device with redundant layers according to one embodiment.
FIG. 2 is a 3D stacking device with redundant layers according to one embodiment.
FIG. 3 is a 3D stacking device with redundant layers according to one embodiment.
FIGS. 4A and 4B are inter-chip bridges for avoiding non-functional portions in the chips of the 3D stacked device according to one embodiment.
FIG. 5 is a logical view of the apparatus shown in FIG. 4B according to one embodiment.
FIG. 6 is a circuit in an inter-chip bridge according to one embodiment.
FIG. 7 is a flowchart for forming a 3D stacking device with a redundant layer according to one embodiment.
FIG. 8 is a flowchart for configuring an inter-chip bridge in a chip of a 3D stacked device according to one embodiment.

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one embodiment may be incorporated in other embodiments.

Detailed Description

Various features are described below with reference to the drawings. It should be noted that the drawings may or may not be drawn to scale and that elements of similar structure or function are denoted by the same reference signs throughout the drawings. It should be noted that the drawings are only intended to facilitate describing the features; they are not an exhaustive description of the specification or a limitation on the scope of the claims. In addition, the illustrated examples need not have all the aspects or advantages shown.
An aspect or advantage described in connection with a particular example is not necessarily limited to that example and may be practiced in any other example, even if not shown or not explicitly described as such.

The examples herein describe techniques for forming a 3D stacked device that includes redundant logic layers. The 3D stacking device includes a plurality of semiconductor chips stacked in a vertical direction so that each chip in the stack is bonded to the chip above, below, or both above and below. In one embodiment, each chip is the same, for example, it has the same circuitry arranged in the same configuration in the chip. The chips may be FPGAs, memory devices (for example, DRAM or SRAM chips), processors, accelerators, systems on a chip (SoCs), or the like.

In one embodiment, the 3D stacking device provides a redundant logic layer by dividing the chips into a plurality of strips, where the strips are interconnected through inter-chip bridges. For example, the 3D stacking device may include three stacked chips that are divided into three different strips, where each strip includes a portion of each of the chips. As long as at most one portion of a strip is defective (non-functional), the inter-chip bridges allow the other portions of the strip to receive and route data. In this redundancy scheme, multiple chips may have defects and the stack can still provide two logic layers, equivalent to two functional chips in the stack. In other words, although multiple portions of the three chips in the stack may be non-functional, as long as those portions are in different strips, the inter-chip bridges can couple the chips together so that, from the outside, the 3D stack appears to contain two fully functional chips.

The 3D stacked device described here has several advantages over 2.5D packaging because it does not require an interposer (whether passive or active) and avoids the significant delay and signal loss incurred when passing from one die to the next.
In addition, compared to 3D stacked devices, 2.5D packages may require users to partition their designs because 2.5D packages have relatively few inter-die connections.

FIG. 1 is a 3D stacking device 100 with redundant layers according to one embodiment. In the example, the device 100 includes three identical semiconductor chips 105. That is, the chips 105 may be three of the same FPGA, memory device, processor, SoC, and the like. Although three chips 105 are shown, the device 100 may have two, four, five, or six chips. The chips 105 are shown as separated for clarity, but in operation they are connected together to establish physical connections and communication paths between the chips 105. For example, solder joints or other connection methods may be used to enable the chips 105 to communicate. In addition, the chips 105 may be encapsulated in a protective material, such as epoxy resin, to provide further structural support and protection during packaging.

Each chip 105 is divided into four portions 110. The portions 110 shown in FIG. 1 are a logical division of the chip rather than a physical division. The portions 110 are then grouped into corresponding strips 115 according to the column in which they are located. For example, the portion 110A in the chip 105A, the portion 110E in the chip 105B, and the portion 110I in the chip 105C form a strip 115A. The portion 110B in the chip 105A, the portion 110F in the chip 105B, and the portion 110J in the chip 105C form a strip 115B, and so on. Because the chips 105 are the same chip, the circuits and their settings in each portion 110 of each strip 115 are the same. That is, the portions 110A, 110E, and 110I include the same circuits and settings, the portions 110B, 110F, and 110J include the same circuits and settings, and so on. In this way, the portions 110 in the same strip 115 can perform the same function and thus can be considered redundant portions.

In one embodiment, every portion 110 across all the chips 105 is the same.
That is, the portions 110A-110L may include the same circuits and settings. For example, if the chips 105 are FPGAs, the portions 110 may include a circuit repeated four times in each chip 105. In another example, each portion 110 may include a processor, the same set of programmable logic, or the same number of memory cells. In this embodiment, the strips 115 are the same as each other.

The division into the portions 110 and the strips 115 may be based on any number of logical boundaries, such as the boundary between two clock domains, the boundary between two voltage domains, the boundary between two different types of circuits or logic blocks, and the like. Although not shown here, each chip 105 includes an inter-chip bridge provided at the boundary between each pair of portions 110 in the chip 105, and the inter-chip bridges enable these portions to communicate with adjacent portions 110 in the same chip 105 or with portions of an adjacent chip 105.

The hashing in FIG. 1 shows the portions 110 of the chips 105 that are deactivated or non-functional. In the example, during operation, the portions 110D, 110E, 110G, and 110J are not used, while the portions 110A, 110B, 110C, 110F, 110H, 110I, 110K, and 110L are used. The deactivated portions 110 may be deactivated because they are non-functional (for example, due to manufacturing defects) or because they are redundant. For example, portion 110D may include a manufacturing defect that renders it non-functional; however, portions 110E, 110G, and 110J may be functional but deactivated because they are redundant. That is, although the 3D stacking device 100 includes three chips 105, if any one of the chips 105 includes a non-functional portion 110, the 3D stacking device 100 behaves as if it includes two functional chips 105. In other words, although the 3D stacking device 100 includes three chips 105, it may be advertised as having at least two functional chips 105 and one redundant chip 105.
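The redundancy condition, that each strip may lose at most one portion for the stack to behave as two fully functional chips, can be checked with a short Python sketch. The function name and the boolean encoding are illustrative assumptions; the defect placement mirrors the FIG. 1 example, where portions 110D, 110E, 110G, and 110J are non-functional, each in a different strip:

```python
def usable_layers(stripes, num_chips):
    """stripes: one list per strip, with one boolean per chip
    (True = that chip's portion of the strip is functional).
    The stack offers num_chips - 1 logic layers as long as every
    strip has at most one non-functional portion; otherwise the
    device cannot ship in this configuration (modeled here as 0)."""
    if any(stripe.count(False) > 1 for stripe in stripes):
        return 0
    return num_chips - 1

# Defects as in FIG. 1, listed per strip in chip order 105A, 105B, 105C.
fig1 = [
    [True, False, True],   # strip 115A: 110A, 110E (defective), 110I
    [True, True, False],   # strip 115B: 110B, 110F, 110J (defective)
    [True, False, True],   # strip 115C: 110C, 110G (defective), 110K
    [False, True, True],   # strip 115D: 110D (defective), 110H, 110L
]
print(usable_layers(fig1, 3))  # -> 2: equivalent to two functional chips
```

Even with four defective portions spread across all three chips, the stack still yields two logic layers; two defects in the same strip would defeat the scheme.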
If all portions 110 in all the chips 105 are functional after testing, all of the portions 110 may be active.

For simplicity, it is assumed that the deactivated portions 110 in FIG. 1 include manufacturing defects that render these portions non-functional. For example, after the chips 105 are bonded together, testing the chips 105 indicates that the portions 110D, 110E, 110G, and 110J are non-functional. Therefore, unlike other redundancy schemes that provide N+1 redundancy only when a single chip has an error, here two or three of the chips 105 may include a fault and the stack can still function as two fully functional chips. Because no strip 115 has more than one non-functional or deactivated portion, each strip 115 has two functional or activated portions 110. As described in detail below, the two active portions in each strip 115 may be interconnected using inter-chip bridges, so that the device 100 is equivalent to a device with two fully functional chips (for example, two chips without any deactivated portions). That is, the activated portions 110A and 110I in strip 115A are communicatively connected to the activated portions 110B and 110F in strip 115B, which are in turn communicatively connected to the activated portions 110C and 110K in strip 115C, and so on. Because the portions 110 in each strip are homogeneous, it does not matter which two of the three portions 110 are active (as long as they are functional). That is, from the perspective of the portion 110F, it does not matter whether it sends data to and receives data from the portion 110G (in the same chip 105B) or the portion 110K (in the adjacent chip 105C). Therefore, the activated portions 110 in FIG. 1 are communicatively coupled so as to perform the same function as two fully functional chips 105.

FIG. 2 is a 3D stacking device 200 with redundant layers according to one embodiment. As described above, the 3D stacking apparatus 200 includes three homogeneous chips 205.
However, the logical division of the chips 205 into portions (not labeled) and into strips 215 is different. That is, the strips 215 have portions of different sizes, rather than strips of the same width (or portions of the same size). Similar to the 3D stacking device 100, the portions within each strip 215 may be the same (for example, the same circuits and settings), but different strips 215 may include different portions. That is, the circuits in the portions may differ from strip to strip. For example, in FIG. 1, the portions 110A, 110E, and 110I in strip 115A may include a processor and associated memory, while the portions 110B, 110F, and 110J in strip 115B may include programmable logic provided in one or more configuration logic blocks. In that example, the strips 115A and 115B are heterogeneous, although the corresponding portions 110 within each strip 115 are homogeneous. In addition, the boundaries between the portions and the strips can be based on any number of logical boundaries, such as the boundary between two clock domains, the boundary between two voltage domains, the boundary between two different types of circuits, and the like.

As long as at most one portion of each of the strips 215 is deactivated (as shown by the hashing), the 3D stacking device 200 can perform the same function as two fully functional chips 205. The chips 205 can be divided into smaller and smaller strips 215, which increases the chance that each strip contains at most one non-functional or deactivated portion. However, forming additional strips 215 may be limited by the number of suitable boundaries for dividing the strips 215, and also by the space needed for the circuits and structures that implement inter-chip bridging at these boundaries.

FIG. 3 is a 3D stacking device 300 with redundant layers according to one embodiment. FIG. 3 illustrates a 2D array 320 that logically divides the device 300 into strips 315.
That is, each column and each row in the chips 305 may form a plurality of portions 310, instead of only the columns shown in FIGS. 1 and 2. In other words, the chips 305 may include suitable boundaries extending in two directions on the chip 305 in order to divide the chip 305 into the portions 310.

The portions 310 may then form strips 315, where each strip 315 has at most one deactivated portion 310 (as shown by the hashing). Each portion 310 may include inter-chip bridges coupling the portion 310 to its adjacent portions in the same chip and to the corresponding portion of an adjacent chip. For example, the portion 310A may include inter-chip bridges to the east and north portions 310 in the chip 305A and to the corresponding portion 310 in the chip 305B. Similarly, a portion 310B may include inter-chip bridges that couple it to the north, east, and south portions 310 in the chip 305A and to the corresponding portion 310 in the chip 305B.

For the same chip size, the 3D stacking device 300 may include a greater number or density of strips 315 than the 3D stacking devices 100 and 200, which increases the likelihood that at most one portion of each strip 315 is non-functional. In this way, the 3D stacking device 300 can operate as the equivalent of two fully functional chips 305.

FIGS. 4A and 4B illustrate inter-chip bridges 450 for avoiding non-functional portions in the chips of a 3D stacking device 400 according to one embodiment. FIG. 4A shows a side or cross-sectional view of the 3D stacking device 400, which shows a respective inter-chip bridge 450 between each pair of portions 410. Specifically, FIG. 4A shows the state of the 3D stacking device 400 before determining which portions 410 are non-functional and therefore should be deactivated.

The double-headed dashed arrows indicate communication paths that can be facilitated by the inter-chip bridges 450, although ultimately only one of those paths can be selected.
In other words, each inter-chip bridge includes circuitry and conductive paths (e.g., traces and through-vias) that allow each portion 410 to communicate with an adjacent portion 410 in the same chip and with at least one adjacent portion in another chip 405. For example, the inter-chip bridge 450 between the portions 410A and 410B allows the portion 410A to communicate with the portion 410B in the same chip 405A and with the portion 410F in a different chip 405B. The same inter-chip bridge 450 allows the portion 410B to communicate with the portion 410A or with the portion 410D in the chip 405B. The inter-chip bridge 450 between the portions 410D and 410F allows the portion 410D to communicate with the portion 410F in the same chip 405B, the portion 410B in the upper chip 405A, or the portion 410H in the lower chip 405C. The same inter-chip bridge 450 allows the portion 410F to communicate with the portion 410D, the portion 410A in the upper chip 405A, or the portion 410G in the lower chip 405C. In the described embodiment, the inter-chip bridges 450 are bidirectional and allow data to flow in both directions.

FIG. 4B shows the state of the inter-chip bridges 450 when the 3D stacking device 400 is configured and certain portions 410 have been identified as non-functional or deactivated. In FIG. 4B, the solid double-headed arrows indicate the actual communication paths established by the inter-chip bridges 450, rather than the potential communication paths shown by the dashed arrows in FIG. 4A.

In FIG. 4B, as shown by the hashing, the portions 410C, 410D, and 410H are deactivated. For example, these portions 410 may have manufacturing defects that affect their operability. Because each strip (i.e., each column in the example) includes at most one deactivated portion 410, the 3D stacking device operates the same as two fully functional chips 405.

The inter-chip bridges 450 are configured to provide communication paths around the deactivated portions 410.
For example, because the portion 410D is deactivated, the inter-chip bridge 450C and the inter-chip bridge 450E allow the portion 410G in the chip 405C to transmit data to, and receive data from, the portion 410E in the chip 405B. This allows the data stream to avoid the deactivated portion 410D and the deactivated portion 410H. Similarly, because the portion 410C is deactivated, the inter-chip bridge 450B and the inter-chip bridge 450D couple the portion 410B to the portion 410F so that they can transmit data. In turn, the inter-chip bridge 450D and the inter-chip bridge 450F couple the portion 410E to the portion 410I in the chip 405C so that they can share data. Therefore, the inter-chip bridges 450 only need to provide communication paths to the immediately neighboring chips (rather than to chips that may be two or more layers away in the device 400) in order for the active portions 410 to avoid the deactivated portions 410. Furthermore, because the portions 410 in a strip are homogeneous, it does not matter which portion is connected to which adjacent portion. That is, when the portion 410A is communicatively coupled through the bridge 450A to the portion 410B instead of the portion 410E, the operation of the circuitry in the portion 410A is not affected or changed.

In addition to the communication paths provided by the inter-chip bridges 450, the 3D stacked device 400 also includes vertical communication paths 415 that allow the portions 410 in the same strip to communicate. That is, although the inter-chip bridges 450 allow portions in one strip to communicate with portions in adjacent strips (whether on the same chip or on different chips), the vertical communication paths 415 allow the portions 410 in the same strip to communicate. In one embodiment, the vertical communication paths 415 are not affected by the deactivated portions 410. For example, although the portion 410D is deactivated in FIG. 4B, the lower portion 410G can still communicate with the upper portion 410A using the vertical communication path 415.
For example, the vertical communication path 415 may be a passive via extending through all of the portions 410A, 410D, and 410G. Therefore, data transmitted by one of the portions 410 through the via reaches the other two portions 410 regardless of whether one of those portions 410 is deactivated. In another example, the portions 410 may include separate receiver and driver circuits for relaying data using the vertical communication path 415, which are unaffected when the portion 410 is deactivated. For example, although the portion 410D is deactivated, the driver and receiver circuits of the path 415 in the portion 410D remain operational so that the portions 410A and 410G can communicate using the vertical communication path 415. Thus, not all of the circuits in a deactivated portion 410 go unused.

FIG. 5 is a logical view of the apparatus shown in FIG. 4B according to one embodiment. That is, FIG. 5 shows the data flow through the portions 410 without showing the locations of the portions 410 in the chips. As shown, the portion 410A uses the inter-chip bridge 450A to communicate with the portion 410B because these portions 410 are on the same chip. However, because the portion 410G and the portion 410E are on different physical chips, they use the bridges on the two chips (i.e., the bridge 450C and the bridge 450E) to facilitate communication. FIG. 5 illustrates that the portions 410 can be interconnected through the inter-chip bridges 450 to create two logical chips with the same functions as two fully functional physical chips, including the vertical communication paths 415 that allow communication between the portions 410 in the same strip.

FIG. 6 illustrates circuitry in the inter-chip bridges 450 according to one embodiment. Specifically, FIG. 6 shows circuitry in the inter-chip bridge 450A, the inter-chip bridge 450C, and the inter-chip bridge 450E, which implements the connections between the left strip (or column) and the middle strip (or column) in FIGS. 4A and 4B. FIG.
6 shows circuitry for transmitting data from the portions 410 of the chips 405 in the left strip to the portions 410 of the chips 405 in the middle strip. That is, FIG. 6 shows circuitry in the inter-chip bridge 450A that allows the portion 410A to transmit data to the portion 410B or the portion 410E. In one embodiment, the inter-chip bridge 450A, the inter-chip bridge 450C, and the inter-chip bridge 450E include another copy of the circuitry shown here, which allows each portion 410 in the middle strip to transfer data to a portion 410 in the left strip. For example, the inter-chip bridge 450B may include circuitry that is mirrored relative to the arrangement in FIG. 6 so that the portion 410E can transmit data to the portion 410A, the portion 410D, or the portion 410G.

For simplicity, only the circuitry in the inter-chip bridge 450A is discussed in detail, but the description applies to the other inter-chip bridges 450. The driver 605A receives data from circuitry (not shown) in the portion 410A. The data is then provided to the inputs of the driver 605B, the driver 605C, and the multiplexer (mux) 615A. The driver 605B is used to route data to a portion of a chip disposed above the chip containing the portion 410A in the stack. However, because the portion 410A is located in the topmost chip 405 in this example, the driver 605B can be permanently disabled (using an ENABLE signal that can be controlled by a configuration register) so that the through-silicon via (TSV) 610A is not used to transfer data to an upper chip. In other embodiments, however, there may be another chip above the chip containing the portion 410A, in which case the TSV 610A may be used. For example, an I/O or memory chip may be disposed on top of the chips 405 (which may be FPGAs) containing the portions 410, and the TSV 610A may be used to transfer data between the I/O or memory chip and the FPGA formed by the chips 405.

The output of the multiplexer 615A is coupled to the driver 605D, which in turn is coupled to the portion 410B.
Using the SELECT signal, the multiplexer 615A can choose which data to send to the portion 410B. For example, referring to the example in FIG. 4B, the inter-chip bridge 450A is configured to transfer data from the portion 410A to the portion 410B. Accordingly, the SELECT signal is set so that the middle input of the multiplexer 615A (which is coupled to the driver 605A) is output to the driver 605D and on to the circuitry in the portion 410B.

The output of the driver 605C is coupled to the TSV 610B, and the TSV 610B communicatively couples the chip 405A to the chip 405B at the boundary interface 620A. Although not shown, the boundary interfaces 620 may include solder bumps or other connection material that couples the TSVs in the chip 405A to the TSVs in the chip 405B. When the portion 410A is to send data to the portion 410E, the ENABLE signal activates the driver 605C, allowing data to flow through the TSV 610B and to the multiplexer 615B in the inter-chip bridge 450C. However, because the portion 410A sends data to the portion 410B in this example, the ENABLE signals disable the driver 605B and the driver 605C. In this way, the data received at the driver 605A flows into the portion 410B through the multiplexer 615A and the driver 605D. In one embodiment, the ENABLE and SELECT signals are independent signals (i.e., not shared signals) so that the various circuits coupled to these signals can be independently controlled.

Turning to the configuration of the inter-chip bridge 450C and the inter-chip bridge 450E, because the portion 410D is deactivated in this example, the driver 605E does not receive data from the portion 410D. For example, the driver 605E can be disabled so that any data sent from the circuitry in the portion 410D cannot reach the other portions 410. Alternatively, the chips 405 may include e-fuses, which may be blown after testing to disable a desired portion. As shown in FIG. 4B, the portion 410G sends data to the portion 410E.
In this way, the data received at the driver 605I in the inter-chip bridge 450E is transmitted through the driver 605J (which is enabled), passes through the TSV 610C, and is received at the multiplexer 615B in the inter-chip bridge 450C. The SELECT signal is set so that the multiplexer 615B outputs the data received from the inter-chip bridge 450E (and the portion 410G) to the driver 605H, which in turn outputs the data to the portion 410E. Therefore, the circuits in the inter-chip bridge 450C and the inter-chip bridge 450E work together so that data can be transmitted between portions 410 in two different chips 405. In the illustrated example, the portions 410D and 410H do not transmit or receive data through the inter-chip bridges 450C and 450E. As a result, some of the circuitry in these inter-chip bridges 450C and 450E may be unused, such as the driver 605E, the driver 605F, the driver 605G, the driver 605K, the driver 605L, and the multiplexer 615C. In addition, the TSV 610D may go unused, although in some embodiments the TSV 610D may be used to communicatively couple the chip 405C to a lower substrate (e.g., if the stack includes an optional interposer, or an I/O or memory device).

Although FIG. 6 shows each TSV 610 as a single via, each may represent one via or multiple vias. In one embodiment, a TSV 610 extends through the respective chip 405 as a continuous conductive path. In another example, a TSV 610 may be segmented such that there are active drivers between the TSV segments. In one embodiment, the chips 405 may include vias for interconnecting the chips 405 and the inter-chip bridges 450 without extending all the way through the chips 405.

FIG. 7 is a flowchart of a method 700 for forming a 3D stacked device with a redundant layer according to one embodiment. At block 705, a plurality of wafers is formed, each wafer including a plurality of chips. For example, each circular wafer may include tens or hundreds of different chips. At block 710, the wafers are bonded together in a stacked manner.
That is, the wafers can be aligned so that the chips in the top wafer overlap and align with the corresponding chips in the next wafer, and so on. For example, solder bumps can be used to physically and communicatively couple the wafers together so that chips in different wafers can communicate. As described above, these solder bumps or connections can serve as part of the vertical communication paths 415 and part of the paths used by the inter-chip bridges 450 to establish communication between portions in different chips 405, as shown in FIG. 4B.

At block 715, the bonded wafers are separated into multiple 3D stacked devices. For example, the bonded wafers may be sawed or cut along the boundaries between the chips in the wafers. Doing so results in multiple 3D stacked devices, each including a column of stacked chips, as shown in FIGS. 1-4B. As described above, the chips can be logically divided into one or more strips.

At block 720, the chips in each 3D stacked device are tested to identify non-functional portions. In one embodiment, whenever any chip includes a non-functional portion, one portion in each strip is deactivated. For example, even if only one portion is non-functional, the system still selects a portion in each of the other strips (which may be functional and pass the tests) to deactivate. Selecting which functional portion of a strip to deactivate is discussed with FIG. 8. However, in other embodiments, only non-functional portions may be deactivated, so that some strips contain only active portions.

At block 725, the method 700 determines whether any strip includes more than one non-functional portion. That is, after reviewing the test results, the test equipment or an engineer can determine whether two or more portions in the same strip (possibly on different chips) are non-functional. If so, the method 700 proceeds to block 730, where the 3D stacked device is marked as incompatible.
For example, the 3D stacked device may be guaranteed, or marketed as guaranteeing, N+1 redundancy, where each strip includes one extra portion so that the 3D stacked device can be used as a 3D stacked device with N fully functional chips. However, if a device with N+1 redundancy has a strip with two or more non-functional portions, the device cannot function as if it had N fully functional chips. In one embodiment, incompatible devices may be discarded. Alternatively, the device can be relabeled and sold as a different product. For example, if a three-chip 3D stacked device has two non-functional portions but each strip has at least one functional portion, it may be possible (depending on the positions of the non-functional portions in the strips) to form one fully functional logical chip from the functional portions. That 3D stacked device can then be sold as a device that is logically equivalent to one fully functional chip. In another example, a four-chip 3D stacked device may have two defective portions in the same strip. In this case, the four-layer device can be sold as a two-logical-layer device or a one-logical-layer device (depending on which portions of the strips are defective). That is, deactivated or unused portions in adjacent strips may allow the four-layer device to have two logical layers, but in other examples there may not be enough connections between the different layers (for example, TSVs between the layers) to allow the four-layer device to be configured as a two-logical-layer device. In that case, the four-layer device can be sold as a one-logical-layer device.

However, if no strip contains two or more non-functional portions, the method 700 proceeds to block 735, where the inter-chip bridges are configured to avoid the non-functional (and deactivated) portions. For example, using e-fuses and/or drivers in the inter-chip bridges, the functional portions in the strips can be interconnected as shown in FIG. 4B.
In this way, the remaining functional (i.e., active) portions in the strips can be interconnected to form, for example, logical chips as shown in FIG. 5.

FIG. 8 is a flowchart of a method 800 for configuring the inter-chip bridges in the chips of a 3D stacked device according to one embodiment. In one embodiment, the method 800 begins after block 725, where the chips in the 3D stacked device have been tested and no strip has more than one non-functional portion. However, because not every strip may include a non-functional portion, in embodiments where one portion of each strip is deactivated, the method 800 can also be used to select which portion of a strip to deactivate even when the selected portion is functional.

At block 805, one of the strips in the 3D stacked device is selected. In one embodiment, the method 800 forms a loop that evaluates each strip in the 3D stacked device to determine which portion of the strip to deactivate. At block 810, the method 800 uses the test data to determine whether the strip currently being evaluated contains a non-functional portion. If so, the method 800 proceeds to block 820, where the inter-chip bridges adjacent to the selected strip are configured to avoid the non-functional portion. In other words, if a strip contains a non-functional portion, that portion is selected for deactivation by default. The method 800 then proceeds to block 825 to determine whether there are more strips in the 3D stacked device to be evaluated. If so, the method 800 selects a different (previously unevaluated) strip and returns to block 810, where the method 800 again determines whether the currently selected strip contains a non-functional portion. If not (i.e., all of the portions in the strip pass the one or more tests), the method 800 proceeds to block 815, where the method 800 evaluates performance parameters of the functional portions in the strip.
For example, when testing a chip, the test program can determine performance parameters of the various portions of the chip, such as power consumption, data throughput, signal noise, and so on. At block 815, these performance parameters may be evaluated to order or prioritize the portions in the strip. For example, a weighting algorithm can be used to evaluate the various performance parameters for each portion and assign a score to each portion. The scores can then be used to rank the portions in the strip. However, there are many different ways to evaluate performance parameters in order to rank the portions.

At block 820, the inter-chip bridges adjacent to the strip are configured to avoid the portion with the worst performance. That is, based on evaluating the performance parameters, the worst-performing portion of the strip is selected and deactivated. For example, the worst-performing portion may be the portion with the highest static power. In one embodiment, assuming all portions of the strip are operational, the method 800 selects the set of portions that results in the configuration with the smallest static power. The inter-chip bridges can then be configured to route data around the deactivated portion of the strip. At block 825, the method 800 determines whether there are remaining strips to be evaluated in the 3D stacked device. If not, the method 800 ends.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various examples of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which includes one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures.
For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.

While the foregoing is directed to specific embodiments, other and further embodiments may be devised without departing from the basic scope thereof, and the scope of the present disclosure is determined by the claims that follow.
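The strip-evaluation logic of blocks 720-730 and of the method 800 can be summarized in a short sketch. This is an illustrative reconstruction, not code from the embodiments: the data structures and function name are hypothetical, and "worst performing" is assumed to mean the portion with the highest static power, so that the surviving active set has the smallest total static power.

```python
# Illustrative sketch of the strip-evaluation logic of blocks 720-730 and of
# method 800 (FIGS. 7 and 8). The record layout and function name are
# hypothetical placeholders.

def choose_deactivated(strips):
    """strips: list of strips, each a list of test records such as
    {"id": "410A", "functional": True, "static_power": 1.0}.
    Returns one portion id to deactivate per strip, or None when any strip
    has two or more non-functional portions (block 730: incompatible)."""
    chosen = []
    for strip in strips:
        bad = [p for p in strip if not p["functional"]]
        if len(bad) > 1:
            return None  # blocks 725/730: more than one failure in one strip
        if bad:
            # blocks 810/820: a non-functional portion is deactivated by default
            chosen.append(bad[0]["id"])
        else:
            # blocks 815/820: deactivate the worst-performing functional portion
            worst = max(strip, key=lambda p: p["static_power"])
            chosen.append(worst["id"])
    return chosen
```

For a device with one defective portion in one strip and a fully functional second strip, the sketch selects the defective portion and the highest-static-power functional portion; the inter-chip bridges would then be configured to route around the chosen portions.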
The present invention provides systems, apparatuses, and/or methods to augment reality. An object identifier may identify an object in a field of view of a user that includes a reflection of the user from a reflective surface, such as a surface of a traditional mirror. In addition, a reality augmenter may generate an augmented reality object based on the identification of the object. In one example, eyeglasses including a relatively transparent display screen may be coupled with an image capture device on the user, and the augmented reality object may be observable by the user on the transparent display screen when the user wears the eyeglasses. A localizer may position the augmented reality object on the transparent display screen relative to the reflection of the user that passes through the transparent display screen during natural visual perception of the reflection by the user.
1. A system for augmented reality, comprising:
glasses including a transparent display screen to couple with an image capture device on a user; and
a reality augmenter to automatically generate an augmented reality object based on an identification of an object in a field of view of the user, the object in the field of view of the user including an image of the user from a reflective surface, wherein the augmented reality object is to be observable by the user on the transparent display screen when the user wears the glasses.

2. The system of claim 1, further comprising:
an image capture device to face the user; and
a synchronizer to synchronize data from the image capture device to face the user with data from the image capture device on the user.

3. The system of any one of claims 1 to 2, further comprising a localizer to position the augmented reality object on the transparent display screen relative to the image of the user, wherein the image of the user is to pass through the transparent display screen.

4. An apparatus for augmented reality, comprising:
an object identifier to automatically identify an object in a field of view of a user, the object in the field of view of the user including an image of the user from a reflective surface; and
a reality augmenter to automatically generate an augmented reality object based on the identification of the object.

5. The apparatus of claim 4, further comprising an image capture device on the user to capture the image of the user.

6. The apparatus of claim 4, further comprising:
an image data identifier to identify image data for the image of the user; and
a depth data identifier to identify depth data for the image of the user.

7. The apparatus of claim 4, further comprising:
a device identifier to identify a device on the user;
a skeleton identifier to identify a body position of the user;
a face identifier to identify a face of the user; and
a gesture identifier to identify a gesture made by the user.

8. The apparatus of claim 4, further comprising:
a map
generator to generate a map from image data of a scene;
a feature extractor to extract a feature from the image data of the scene; and
a localizer to:
locate the user in the map based on a displacement of the feature in the map, and
locate the augmented reality object in the map based on the location of the user.

9. The apparatus of claim 4, further comprising a synchronizer to synchronize data from an image capture device to face the user with data from an image capture device on the user.

10. The apparatus of any one of claims 4 to 10, further comprising a localizer to position the augmented reality object on a transparent display screen relative to the image of the user, wherein the image of the user is to pass through the transparent display screen.

11. At least one computer readable storage medium comprising a set of instructions which, when executed by a processor, cause the processor to:
automatically identify an object in a field of view of a user, the object in the field of view of the user including an image of the user from a reflective surface; and
automatically generate an augmented reality object based on the identification of the object.

12. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the processor to capture the image of the user.

13. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the processor to:
identify image data for the image of the user; and
identify depth data for the image of the user.

14. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the processor to:
identify a device on the user;
identify a body position of the user;
identify a face of the user; and
identify a gesture made by the user.

15. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the processor to:
generate a map from image data of a scene;
extract a feature
from the image data of the scene;
locate the user in the map based on a displacement of the feature in the map; and
locate the augmented reality object in the map based on the location of the user.

16. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the processor to synchronize data from an image capture device to face the user with data from an image capture device on the user.

17. The at least one computer readable storage medium of any one of claims 11 to 16, wherein the instructions, when executed, cause the processor to position the augmented reality object on a transparent display screen relative to the image of the user, wherein the image of the user is to pass through the transparent display screen.

18. A method for augmented reality, comprising:
automatically identifying an object in a field of view of a user, the object in the field of view of the user including an image of the user from a reflective surface; and
automatically generating an augmented reality object based on the identification of the object.

19. The method of claim 18, further comprising capturing the image of the user.

20. The method of claim 18, further comprising:
identifying image data for the image of the user; and
identifying depth data for the image of the user.

21. The method of claim 18, further comprising:
identifying a device on the user;
identifying a body position of the user;
identifying a face of the user; and
identifying a gesture made by the user.

22. The method of claim 18, further comprising:
generating a map from image data of a scene;
extracting a feature from the image data of the scene;
locating the user in the map based on a displacement of the feature in the map; and
locating the augmented reality object in the map based on the location of the user.

23. The method of claim 18, further comprising synchronizing data from an image capture device facing the user with data from an image capture device on the user.

24. A method according
to any one of claims 18 to 23, further comprising locating the augmented reality object on a transparent display screen relative to the image of the user, wherein the image of the user passes through the transparent display screen.

25. An apparatus for augmented reality, comprising means for performing the method of any one of claims 19 to 24.
Augmented Reality in a Field of View Including an Image

Cross-Reference to Related Applications

This application claims the priority benefit of U.S. Non-Provisional Patent Application Serial No. 15/087,478, filed on March 31, 2016.

Technical Field

Embodiments relate generally to augmented reality. More specifically, embodiments relate to augmented reality in a field of view that includes an image.

Background

Systems that provide smart mirror functionality can include relatively complex display technologies with sensors. For example, a touch-screen smart mirror may include a three-dimensional (3D) camera, a multi-spectral camera, a facial recognition component, a gas sensor, and the like. In one system, a transparent organic light emitting diode (OLED) display with relatively high reflectivity provides virtual fitting room functionality. Other systems can improve the user's appearance, provide health reports based on long-term analysis of the face, and the like. However, due to the enhanced mirror or display technology, the cost of such systems may be orders of magnitude higher than that of conventional mirrors. Therefore, there is considerable room for improvement in providing smart mirror functionality to augment reality.

Drawings

The various advantages of the embodiments will become apparent to those skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:

FIGS. 1A-1B are diagrams of an example of a method for augmenting reality using glasses and a conventional mirror, in accordance with an embodiment;

FIG. 2 is a diagram of an example of an apparatus for augmented reality, in accordance with an embodiment;

FIG. 3 is a diagram of an example of a method for augmented reality, according to an embodiment; and

FIG. 4 is a block diagram of an example of a processor in accordance with an embodiment.

Detailed Description

Turning now to FIGS. 1A-1B, a method 10 for augmented reality in accordance with an embodiment is illustrated. As illustrated in FIG.
1A, the user 12 is standing in front of the reflective surface 14 of a conventional mirror 16 and makes a gesture that includes raising an arm 18, which is covered by a sleeve of a garment 20 and coupled with a wearable device 22. The garment 20 includes a sweater that lacks a pattern (e.g., woven, printed, stitched, etc.), and the wearable device 22 includes a smart watch that provides sensor data for the user 12, the wearable device 22, and/or the environment 24 (e.g., a room). For example, the wearable device 22 may provide gas sensor data for the user 12, pulse sensor data for the user 12, blood pressure data for the user 12, acceleration data for the user 12 and/or the wearable device 22, orientation data for the user 12 and/or the wearable device 22, temperature data for the user 12 and/or the environment 24, location data for the user 12, the wearable device 22, and/or the environment 24, and so on.

The user 12 wears glasses 26 including transparent display screens 28 (28a-28b), through which objects, such as an image 30 of the user 12 from the reflective surface 14, are passed to the eyes of the user 12 during natural visual perception of the objects by the user 12. The transparent display screens 28 can include, for example, transparent organic light emitting diode (OLED) displays having relatively low reflectivity. The glasses 26 further include an image capture device 32, which may include a two-dimensional (2D) camera, a three-dimensional (3D) camera, a multi-spectral camera, a thermal camera, and so on. In the illustrated example, the image capture device 32 includes a ranging camera (e.g., an RGB-D camera, etc.) to generate image data (e.g., RGB data, etc.) and/or depth data (e.g., pixel depth data, etc.) for objects in the field of view of the image capture device 32.
The image data and/or the depth data may be generated based on, for example, stereo triangulation, sheet-of-light triangulation, structured light, time of flight, interferometry, coded aperture, and so on.

In the illustrated example, the mirror 16 also includes an image capture device 34 that faces the user 12 to generate image data and/or depth data for objects in the field of view of the image capture device 34. Notably, the image capture device 34 may track objects within or outside the field of view of the image capture device 32 to provide an expanded field of view for tracking gestures and/or to supplement the field of view. Moreover, synchronization between the data generated by the image capture devices 32, 34 as the user 12 moves may provide a correspondence between the data to minimize usability problems from delays and/or to maximize the accuracy of the augmented reality functions. For example, the image capture device 34 may forward a message (e.g., a gesture notification, image data, depth data, metadata, etc.) to indicate that a gesture has been observed, which may be utilized to verify the gesture observed by the image capture device 32, to supplement the data generated by the image capture device 32 in response to the gesture, and so on.

Objects in the field of view of the user 12 and/or in the fields of view of the image capture devices 32, 34 may be identified. For example, a virtual object corresponding to the image 30 in the field of view of the user 12 may be identified. In one embodiment, a device on the user 12 may be identified based on feature data (e.g., a wristband, a form factor, etc.) from the image 30. The face of the user 12 may also be identified based on feature data (e.g., eyes, nose, mouth, etc.) from the image 30. Moreover, the body position of the user 12 can be identified based on skeletal data from the image 30 (e.g., "tracked" skeletal data, "position only" skeletal data, etc.).
The image 30 can also be used to identify a gesture made by the user 12 based on feature data, skeletal data, gesture data (e.g., fingers/hands, etc.), and so on. Similarly, real objects in the field of view of the user 12 and/or in the field of view of the image capture device 32, such as the mirror 16, may be identified. Moreover, real objects in the field of view of the image capture device 34, such as the user 12 and/or the wearable device 22, may be identified. Additionally, objects can be identified based on sensor data. For example, an object may be identified based on a particular type of sensor data that is available from the object, acceleration data for the object, and so on. An object may also be identified based on identification data such as a device name, a device tag, a device address (e.g., a media access control address, etc.), and so on.

In the illustrated example, augmented reality (AR) objects 36, 38 may be utilized to augment objects in the field of view of the user 12. In one example, based on the garment 20 in the image 30 being identified as a sweater, the image 30 can be augmented with an AR object 36 (e.g., a printed pattern, a color change, etc.). For example, the AR object 36 can be utilized to augment the garment 20 in the image 30 such that, when the glasses 26 are worn by the user 12, the user 12 observes an augmented sweater (e.g., a sweater with a printed pattern, etc.). Moreover, the identification of the arm 18 and/or the body position of the user 12 may allow the AR object 36 to be properly positioned as the user 12 moves. In this regard, synchronization between the image capture devices 32, 34 may facilitate object identification, AR object localization, and so on. In another example, a real object in the field of view of the user 12 may be augmented with the AR object 38 (e.g., a GUI of a menu, a 2D representation of data such as the user's weight or steps taken, etc.).
For example, a simultaneous localization and mapping (SLAM) process can be implemented to enhance the mirror 16 in the field of view of the user 12 with the AR object 38. For example, a map of the scene 40 may be generated from image data (e.g., video frames, etc.) generated by the image capture device 32. A feature such as the upper left corner of the mirror 16 in the scene 40 may be extracted from the image data, and as the user 12 moves, the displacement of the feature (e.g., dx/dy, etc.) may be determined. The position of the user 12 in the scene 40, such as the location of the image 30, may be determined based on the displacement of the feature, and when the user 12 moves, the position of the AR object 38 may be shifted in proportion to the displacement of the user 12 so that the AR object 38 remains positioned at the same location in the scene 40. As illustrated in FIG. 1B, the 3D perception of the image capture device 32 may not be compromised by the 2D nature of the surface 14 of the mirror 16. For example, the two sensors of the image capture device 32 capture the scene 40 at slightly different viewing angles 42 (42a-42b), which allows a depth extraction process to determine the actual depth of the scene 40. The position of the mirror 16 sets the surface 14 as an image plane, and the virtual object corresponding to the image 30 appears when the user 12 stands in front of the mirror 16 and is captured by the image capture device 32 at the two different viewing angles 42a, 42b. Therefore, the image capture device 32 provides 3D functionality based on the image 30 even when used alone. Moreover, viewing the image 30 and/or real objects in the scene 40 through the transparent display screen 28 may minimize the need for complex display technologies and/or minimize the computational requirements for augmented reality.
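The proportional-shift idea above can be sketched in a few lines: once a scene feature's frame-to-frame displacement (dx, dy) is known, the AR overlay is moved by the same displacement so that it appears fixed in the scene. All names and coordinates below are editor's assumptions for illustration, not the patented implementation.

```python
def track_feature(prev_xy, curr_xy):
    """Displacement (dx, dy) of a tracked feature between two frames."""
    return (curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1])

def reposition_overlay(overlay_xy, displacement):
    """Shift the AR overlay in proportion to the feature displacement
    so it stays anchored to the same scene location."""
    dx, dy = displacement
    return (overlay_xy[0] + dx, overlay_xy[1] + dy)

# The mirror's upper-left corner moved 5 px right and 2 px down between
# frames, so an overlay anchored near it shifts by the same amount:
d = track_feature((120, 80), (125, 82))
print(reposition_overlay((300, 200), d))  # (305, 202)
```

A full SLAM pipeline would track many features and estimate a pose, but this one-feature version captures the "shift in proportion to the displacement" behavior the paragraph describes.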
The AR objects 36, 38 may be positioned, for example, relative to the image 30 and/or relative to the view of a displayed object through the transparent display screen 28 to minimize display calculations and/or pixel usage. Although the examples have provided various functions of the method 10 for purposes of illustration, it should be understood that one or more functions of the method 10 may reside in the same and/or different physical and/or virtual computing platform locations, and may be combined, omitted, bypassed, rearranged, and/or utilized in any order. The glasses 26 may, for example, provide one or more AR functions of the method 10. Additionally, the functions of the method 10 may be distributed among various computing platforms to provide distributed AR functionality. Moreover, any or all functions of the method 10 can be implemented automatically (e.g., without human intervention, etc.). For example, objects in a field of view can be automatically identified when data from an image capture device is acquired. FIG. 2 illustrates an apparatus 44 for augmented reality, in accordance with an embodiment. The apparatus 44 may include a computing platform such as, for example, a laptop computer, a personal digital assistant (PDA), a media content player, an imaging device, a mobile Internet device (MID), any smart device such as a wireless smart phone, a smart tablet, a smart TV, a smart watch, glasses, a computer server, a gaming platform, and so on. The apparatus 44 may also include logic (e.g., logic instructions, configurable logic, fixed-functionality logic hardware, etc.) configured to implement any of the techniques mentioned herein, including, for example, the method 10 (FIGS. 1A-1B) discussed above.
For example, a controller 46 may receive data corresponding to an image of a user from a conventional mirror and utilize an augmented reality (AR) object to enhance an object in a field of view. The controller 46 includes a data repository interface 48 that may be coupled with memory (e.g., a cache, random access memory, etc.), with a hard drive (e.g., on-platform storage, removable storage, etc.), and so on. The controller 46 also includes a communication interface 50 that may be coupled with communication functionality for a wide variety of purposes such as, for example, cellular telephone (e.g., Wideband Code Division Multiple Access/W-CDMA (Universal Mobile Telecommunications System/UMTS), CDMA2000 (IS-856/IS-2000), etc.), Wi-Fi (Wireless Fidelity, e.g., Institute of Electrical and Electronics Engineers/IEEE 802.11-2007, Wireless Local Area Network/LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications), Li-Fi (Light Fidelity, e.g., Institute of Electrical and Electronics Engineers/IEEE 802.15-7, Wireless Local Area Network/LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications), 4G LTE (Fourth Generation Long Term Evolution), Bluetooth (e.g., Institute of Electrical and Electronics Engineers/IEEE 802.15.1-2005, Wireless Personal Area Networks), WiMax (e.g., IEEE 802.16-2004, LAN/MAN Broadband Wireless LANS), Global Positioning System (GPS), spread spectrum (e.g., 900 MHz), NFC (Near Field Communication, ECMA-340, ISO/IEC 18092), and other radio frequency (RF) purposes. Accordingly, the controller 46 may utilize the data repository interface 48 to store image data, depth data, object identification data, AR object data, and the like, and/or may utilize the communication interface 50 to forward image data, depth data, object identification data, AR object data, and the like. The controller 46 further includes an image data identifier 52 to identify image data.
For example, the image data identifier 52 may identify RGB data from an RGB-D camera that corresponds to an image of the user. The image data identifier 52 may also identify RGB data from an RGB-D camera that corresponds to an image of a real object in a field of view (e.g., of the user, etc.). The controller 46 further includes a depth data identifier 54 to identify depth data. For example, the depth data identifier 54 may identify depth pixel data from an RGB-D camera that corresponds to the image of the user. The depth data identifier 54 may also identify depth pixel data generated by an RGB-D camera that corresponds to the image of a real object in the field of view. The controller 46 further includes a synchronizer 56 to synchronize data from a user-facing image capture device with data from an image capture device on the user. For example, the synchronizer 56 may synchronize data from an RGB-D camera located on a mirror facing the user with data from an RGB-D camera on glasses worn by the user. Synchronized image data and/or depth data may facilitate object identification for several purposes, such as object recognition, gesture recognition, feature extraction, AR object localization, and the like. The controller 46 further includes an object identifier 58 to identify an object in the user's field of view, to identify an object in the field of view of an image capture device on the user that provides an egocentric viewpoint of the object, and/or to identify an object in the field of view of a user-facing image capture device that provides an expanded field of view to track gestures and/or to supplement the egocentric viewpoint. For example, an image capture device on the user may provide an egocentric viewpoint of the user's image for use by the object identifier 58 to identify a virtual object, an image capture device on the mirror may provide a user-facing viewpoint for use by the object identifier 58 to identify a real object, and so on.
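One plausible way to realize the synchronizer's pairing of the two camera streams is nearest-timestamp matching, dropping pairs whose skew is too large. This is a hedged sketch under the assumption that each frame record carries a capture time; the data layout and threshold are hypothetical.

```python
def synchronize(frames_a, frames_b, max_skew=0.02):
    """Pair each (timestamp, data) frame of stream A with the
    nearest-in-time frame of stream B, dropping pairs whose skew
    exceeds max_skew seconds."""
    pairs = []
    for t_a, data_a in frames_a:
        t_b, data_b = min(frames_b, key=lambda fb: abs(fb[0] - t_a))
        if abs(t_b - t_a) <= max_skew:
            pairs.append((data_a, data_b))
    return pairs

# Mirror camera at ~30 fps and glasses camera with one late frame:
mirror_cam = [(0.000, "m0"), (0.033, "m1"), (0.066, "m2")]
glasses_cam = [(0.001, "g0"), (0.034, "g1"), (0.100, "g2")]
print(synchronize(mirror_cam, glasses_cam))  # [('m0', 'g0'), ('m1', 'g1')]
```

The unmatched third mirror frame is discarded rather than paired with a stale glasses frame, which reflects the goal of minimizing latency-related correspondence errors.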
The object identifier 58 may also identify real-world objects in the field of view of the user-facing image capture device to supplement the egocentric viewpoint with the user-facing viewpoint. In the illustrated example, the object identifier 58 includes a device identifier 60 to identify a device on the user. For example, the device identifier 60 may identify a device (e.g., a smart watch, etc.) that the user is wearing. The object identifier 58 further includes a skeleton identifier 62 to identify the body position of the user. For example, the skeleton identifier 62 can identify joint positions of the user's body (e.g., a "tracked" position, etc.). In addition, the object identifier 58 includes a face identifier 64 to identify the face of the user. For example, the face identifier 64 can identify the user's nose, the user's lips, the user's hair, and the like. The object identifier 58 further includes a gesture identifier 66 to identify a gesture by the user. For example, the gesture identifier 66 may identify a facial gesture movement made by the user (e.g., a smile, etc.), a hand or finger gesture movement made by the user (e.g., a thumbs up, etc.), an arm gesture movement made by the user (e.g., a wave, etc.), and so on. The controller 46 further includes a reality enhancer 68 to generate an AR object based on, for example, an identification of an object in the field of view. The reality enhancer 68 may generate an enhanced facial appearance (e.g., facial hair removal, eye color change, etc.) for the image based, for example, on the identification of the face of the user from the image. The reality enhancer 68 may generate an enhanced clothing appearance (e.g., different pants, etc.) for the image based, for example, on the identification of the garment of the user from the image. The reality enhancer 68 may further generate an enhanced wall appearance (e.g., a GUI, data from the wearable device, data for the environment, etc.)
for the image of a wall based, for example, on the identification of the wall from the image of the wall. The controller 46 further includes a locator 70 to determine the position of an AR object to be rendered on a display screen. The locator 70 can position the AR object relative to the image and/or relative to the view of a real object in the field of view of the user, where the image and/or the view of the real object reach the user's eyes through a transparent display screen during natural visual perception. The locator 70 can position the AR object on the user's image based on image data and/or depth data from an RGB-D camera associated with an object identified from the image during an object recognition process. The locator 70 can further position the AR object on the view of a real object (e.g., a wall, another person, etc.) based on image data and/or depth data from an RGB-D camera associated with an object identified from the view of the real object during the object recognition process. The locator 70 can also position the AR object on the view of an object based on image data and/or depth data from an RGB-D camera associated with features extracted from an image during SLAM. In the illustrated example, the controller 46 includes a map generator 72 to generate a map from image data of the scene and a feature extractor 74 to extract features from the image data of the scene. The locator 70 can then locate the user in the map based on the displacement of the features in the map and locate the AR object in the map based on the location of the user.
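The locate-then-position step can be sketched as two small functions: the user's map position is inferred from the displacement of extracted features (features shift opposite to ego-motion), and an AR object anchored at a fixed map coordinate is expressed relative to that position. All names and coordinates are editor's assumptions, not the patented locator.

```python
def locate_user(user_xy, feature_displacement):
    """Scene features shift opposite to ego-motion, so the user's
    estimated map position moves by the negated feature displacement."""
    dx, dy = feature_displacement
    return (user_xy[0] - dx, user_xy[1] - dy)

def view_position(anchor_map_xy, user_map_xy):
    """View-space offset of an AR object anchored at a fixed map coordinate."""
    return (anchor_map_xy[0] - user_map_xy[0], anchor_map_xy[1] - user_map_xy[1])

# Features moved 0.1 m left between frames, implying the user moved right;
# the anchored AR object's view offset shrinks accordingly:
user = locate_user((0.0, 0.0), (-0.1, 0.0))
print(view_position((2.0, 1.0), user))  # (1.9, 1.0)
```

Because the anchor coordinate never changes, the rendered AR object appears fixed to the scene while only the user's estimate moves.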
Thus, the AR object can be shifted in proportion to the user's displacement so that the AR object is positioned at the same location in the scene as the user moves. Although the examples have provided various components of the apparatus 44 for purposes of illustration, it should be understood that one or more components of the apparatus 44 may reside in the same and/or different physical and/or virtual computing platform locations, and may be combined, omitted, bypassed, rearranged, and/or utilized in any order. In one example, one or more components of the controller 46 may physically reside on the same computing platform. In another example, one or more components of the controller 46 may be distributed among various computing platforms to provide distributed reality enhancement. Moreover, any or all components of the apparatus 44 may be implemented automatically (e.g., without human intervention, etc.). For example, the object identifier 58 may be implemented automatically when data from an image capture device is acquired. Turning now to FIG. 3, a method 76 for augmented reality is illustrated in accordance with an embodiment. The method 76 can be implemented as a module or related component in a set of logic instructions stored in a non-transitory machine- or computer-readable storage medium such as, for example, random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, a programmable logic array (PLA), a field programmable gate array (FPGA), or a complex programmable logic device (CPLD), in fixed-functionality hardware logic using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology, or in any combination thereof.
For example, computer program code to carry out the operations shown in the method 76 can be written in any combination of one or more programming languages, including an object-oriented programming language such as JAVA, SMALLTALK, C++, or the like, and a conventional procedural programming language such as the "C" programming language or similar programming languages. The illustrated block 78 provides for identifying image data and/or depth data. Block 78 may, for example, identify RGB data from an RGB-D camera that corresponds to an image of the user, to an image of a real object in a field of view (e.g., of the user, etc.), and the like. Block 78 may further identify, for example, depth pixel data from an RGB-D camera that corresponds to the image of the user, to the image of the real object in the field of view, and the like. The illustrated block 80 provides for synchronizing data. Block 80 may, for example, synchronize data from a user-facing image capture device with data from an image capture device on the user. The illustrated processing block 82 provides for identifying an object in a field of view. Block 82 may, for example, identify an object in the user's field of view, identify an object in the field of view of an image capture device on the user that provides an egocentric viewpoint of the object, and/or identify an object in the field of view of a user-facing image capture device that provides an expanded field of view to track gestures and/or to supplement the egocentric viewpoint. In one example, block 82 may identify a device on the user, identify the user's body position, identify the user's face, identify a gesture by the user, and the like. The illustrated processing block 84 provides for generating an augmented reality (AR) object. Block 84 may generate an AR object based on, for example, an identification of an object in the field of view.
In one example, block 84 may generate an enhanced virtual object appearance for the image based on, for example, an identification of the face of the user from the image. In another example, block 84 may generate an enhanced real object appearance for a view of a real object in the user's field of view based on, for example, an identification of a wall from an image of the wall. The illustrated block 86 provides for determining the position of the AR object to be rendered on a display screen, which may include a transparent OLED display screen. Block 86 may, for example, position the AR object relative to the image and/or relative to the view of a real object in the field of view of the user, where the image and/or the view of the real object reach the user's eyes through the OLED display screen during natural visual perception. Block 86 may position the AR object based on image data and/or depth data associated with an object (e.g., a real object, a virtual object, a map, etc.) identified during an object recognition process and/or during a SLAM process. In this regard, block 86 may generate a map from image data of a scene, extract features from the image data of the scene, locate the user in the map based on the displacement of the features in the map, and locate the augmented reality object in the map based on the location of the user. Block 86 may position the AR object on the OLED display screen in registration with the user's image, for example based on image data and/or depth data corresponding to the user's image. In one example, the AR object can be an enhancement to the user's clothing, and block 86 can position the AR element that enhances the garment in registration with the image of the garment across the OLED display screen. Block 86 may further position the AR object in registration with the view of a real object across the OLED display screen, for example based on image data and/or depth data corresponding to the view of the real object.
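Registering an AR element with an object seen through the display ultimately means mapping a 3D point (from the depth data) to display pixels. A minimal sketch of that step uses a pinhole projection; the intrinsics (focal lengths fx/fy, principal point cx/cy) are made-up values for illustration, not parameters of any disclosed display.

```python
def project_to_display(point_xyz, fx, fy, cx, cy):
    """Project a camera-space 3D point (meters) to display pixel
    coordinates with a simple pinhole model."""
    x, y, z = point_xyz
    if z <= 0:
        raise ValueError("point must be in front of the display")
    return (fx * x / z + cx, fy * y / z + cy)

# A point 2 m ahead and 0.5 m to the right, with illustrative
# intrinsics, lands to the right of the display center:
print(project_to_display((0.5, 0.0, 2.0), 600, 600, 320, 240))  # (470.0, 240.0)
```

Rendering the AR element at the projected pixel keeps it in registration with the garment or wall as long as the depth estimate and intrinsics are accurate.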
In one example, the AR object can be an enhancement to a mirror, a wall, etc. of the environment in which the user is located, and block 86 can position the AR element that enhances the mirror or the like in registration with the view of the mirror or the like across the OLED display screen. Although separate blocks and/or a particular order have been shown for purposes of illustration, it should be understood that one or more blocks of the method 76 may be combined, omitted, bypassed, rearranged, and/or flow in any order. Moreover, any or all blocks of the method 76 may be implemented automatically (e.g., without human intervention, etc.). For example, block 82 can automatically identify objects in a field of view when data from an image capture device is acquired. FIG. 4 shows a processor core 200 in accordance with one embodiment. The processor core 200 may be the core for any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, or another device to execute code. Although only one processor core 200 is illustrated in FIG. 4, a processing element may alternatively include more than one of the processor core 200 illustrated in FIG. 4. The processor core 200 may be a single-threaded core or, for at least one embodiment, the processor core 200 may be multithreaded in that it may include more than one hardware thread context (or "logical processor") per core. FIG. 4 also illustrates a memory 270 coupled to the processor core 200. The memory 270 may be any of a wide variety of memories (including various layers of a memory hierarchy) as are known or otherwise available to those skilled in the art. The memory 270 may include one or more code 213 instructions to be executed by the processor core 200, wherein the code 213 may implement the method 10 (FIGS. 1A-1B), the apparatus 44 (FIG. 2), and/or the method 76 (FIG. 3), already discussed.
The processor core 200 follows a program sequence of instructions indicated by the code 213. Each instruction may enter a front end portion 210 and be processed by one or more decoders 220. The decoder 220 may generate as its output a micro-operation such as a fixed width micro-operation in a predefined format, or may generate other instructions, microinstructions, or control signals that reflect the original code instruction. The illustrated front end portion 210 also includes register renaming logic 225 and scheduling logic 230, which generally allocate resources and queue the operation corresponding to the converted instruction for execution. The processor core 200 is shown including execution logic 250 having a set of execution units 255-1 through 255-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic 250 performs the operations specified by the code instructions. After completion of execution of the operations specified by the code instructions, back end logic 260 retires the instructions of the code 213. In one embodiment, the processor core 200 allows out-of-order execution but requires in-order retirement of instructions. Retirement logic 265 may take a variety of forms as known to those skilled in the art (e.g., re-order buffers or the like). In this manner, the processor core 200 is transformed during execution of the code 213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 225, and any registers (not shown) modified by the execution logic 250. Although not illustrated in FIG. 4, a processing element may include other elements on chip with the processor core 200. For example, a processing element may include memory control logic along with the processor core 200.
A processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. A processing element may also include one or more caches. Additional Notes and Examples: Example 1 may include a system for augmented reality, comprising glasses including a transparent display screen to couple with an image capture device on a user, and a reality enhancer to automatically generate an augmented reality object based on an identification of an object in a field of view of the user, the object in the field of view of the user including an image of the user from a reflective surface, wherein the augmented reality object is to be observable by the user on the transparent display screen when the user wears the glasses. Example 2 may include the system of Example 1, further comprising an image capture device to face the user, and a synchronizer to synchronize data from the image capture device facing the user with data from the image capture device on the user. Example 3 may include the system of any one of Examples 1 to 2, further comprising a locator to position the augmented reality object relative to the image of the user on the transparent display screen, wherein the image of the user is to pass through the transparent display screen. Example 4 may include an apparatus for augmented reality, comprising an object identifier to automatically identify an object in a field of view of a user, the object in the field of view of the user including an image of the user from a reflective surface, and a reality enhancer to automatically generate an augmented reality object based on the identification of the object. Example 5 may include the apparatus of Example 4, further comprising an image capture device on the user to capture the image of the user. Example 6 may include the apparatus of any one of Examples 4 to 5, further comprising an image data identifier to identify image data for the image of the user, and a depth data identifier to identify depth data for the image of the user. Example 7 may include the apparatus of any one of Examples 4 to 6, further comprising a device identifier to identify a device on the user, a skeleton identifier to identify a body position of the user, a face identifier to identify a face of the user, and a gesture identifier to identify a gesture made by the user. Example 8 may include the apparatus of any one of Examples 4 to 7, further comprising a map generator to generate a map from image data of a scene, a feature extractor to extract a feature from the image data of the scene, and a locator to locate the user in the map based on a displacement of the feature in the map and to locate the augmented reality object in the map based on the location of the user. Example 9 may include the apparatus of any one of Examples 4 to 8, further comprising a synchronizer to synchronize data from an image capture device facing the user with data from an image capture device on the user. Example 10 may include the apparatus of any one of Examples 4 to 9, further comprising a locator to position the augmented reality object relative to the image of the user on a transparent display screen, wherein the image of the user is to pass through the transparent display screen. Example 11 may include at least one computer readable storage medium comprising a set of instructions which, when executed by a processor, cause the processor to automatically identify an object in a field of view of a user, the object in the field of view of the user including an image of the user from a reflective surface, and automatically generate an augmented reality object based on the identification of the object. Example 12 may include the at least one computer readable storage medium of Example 11, wherein the instructions, when executed, cause the processor to capture the image of the user. Example 13 may include the at least one computer readable storage medium
of any one of Examples 11 to 12, wherein the instructions, when executed, cause the processor to identify image data for the image of the user, and identify depth data for the image of the user. Example 14 may include the at least one computer readable storage medium of any one of Examples 11 to 13, wherein the instructions, when executed, cause the processor to identify a device on the user, identify a body position of the user, identify a face of the user, and identify a gesture made by the user. Example 15 may include the at least one computer readable storage medium of any one of Examples 11 to 14, wherein the instructions, when executed, cause the processor to generate a map from image data of a scene, extract a feature from the image data of the scene, locate the user in the map based on a displacement of the feature in the map, and locate the augmented reality object in the map based on the location of the user. Example 16 may include the at least one computer readable storage medium of any one of Examples 11 to 15, wherein the instructions, when executed, cause the processor to synchronize data from an image capture device facing the user with data from an image capture device on the user. Example 17 may include the at least one computer readable storage medium of any one of Examples 11 to 16, wherein the instructions, when executed, cause the processor to position the augmented reality object relative to the image of the user on a transparent display screen, wherein the image of the user is to pass through the transparent display screen. Example 18 may include a method for augmented reality, comprising automatically identifying an object in a field of view of a user, the object in the field of view of the user including an image of the user from a reflective surface, and automatically generating an augmented reality object based on the identification of the object. Example 19 may include the method of Example 18, further comprising capturing the image of the user. The
example 20 may include the method of any one of Examples 18 to 19, further comprising identifying image data for the image of the user, and identifying depth data for the image of the user. Example 21 may include the method of any one of Examples 18 to 20, further comprising identifying a device on the user, identifying a body position of the user, identifying a face of the user, and identifying a gesture made by the user. Example 22 may include the method of any one of Examples 18 to 21, further comprising generating a map from image data of a scene, extracting a feature from the image data of the scene, locating the user in the map based on a displacement of the feature in the map, and locating the augmented reality object in the map based on the location of the user. Example 23 may include the method of any one of Examples 18 to 22, further comprising synchronizing data from an image capture device facing the user with data from an image capture device on the user. Example 24 may include the method of any one of Examples 18 to 23, further comprising positioning the augmented reality object relative to the image of the user on a transparent display screen through which the image of the user is to pass. Example 25 may include an apparatus for augmented reality comprising means for performing the method of any one of Examples 18 to 24. Thus, techniques described herein provide smart mirror functionality while utilizing a conventional mirror and 3D-enhanced AR glasses. For example, a user may wear the AR glasses to experience augmented reality when the user faces a conventional mirror. The AR glasses may not be bound to relatively costly conventional smart mirror technologies and may be used with any reflective surface that provides an image. Furthermore, the reflective surface may not require embedded sensors, since naturally relevant content may be rendered into the user's view by the transparent AR glasses relative to, for example, the user's image.
Additionally, since reflective surfaces (e.g., conventional mirrors) can be used, relatively complex display technologies may not be needed. The user's image can be used to monitor and/or analyze the user's face, skeleton, and/or posture. In one example in which a user wearing the AR glasses stands in front of a mirror, the user can see his/her image wearing different clothes, and as the user moves, the enhancement can be presented in a realistic manner because the user can be tracked and the virtual object (e.g., the image) can be enhanced based on 3D-enhanced object recognition and/or SLAM processes. When a user wearing the 3D-enhanced AR glasses moves in front of the mirror, the mirrored image can be tracked via RGB-D analysis of the mirrored image, and enhancements can be provided based on usage (e.g., applications, etc.) via 2D and/or 3D data (e.g., date, temperature, mood, health conditions, etc.). A user-facing RGB-D camera can also be used to precisely track refined gestures and/or the image for mirror enhancement, wherein data from the user-facing RGB-D camera can be synchronized with data from the RGB-D camera on the user. Embodiments also facilitate the use of multi-modal perceptual computing technologies that utilize 3D depth computation, including facial recognition, skeletal tracking, gesture tracking, and the like. Embodiments are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include, but are not limited to, processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction.
However, this should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines. Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the computing system within which the embodiment is to be implemented, i.e., such specifics should be well within the purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting. The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical, or other connections.
In addition, the terms "first", "second", and the like may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated. As used in this application and in the claims, a list of items joined by the term "one or more of" or "at least one of" may mean any combination of the listed items. For example, the phrase "one or more of A, B, or C" may mean A; B; C; A and B; A and C; B and C; or A, B, and C. In addition, a list of items joined by the term "and so forth" or "etc." may mean any combination of the listed items as well as any combination with other items. Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, the specification, and the following claims.
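The synchronization described above, between data from the user-facing RGB-D camera and data from the camera worn on the user, can be sketched as nearest-timestamp frame matching. This is a minimal illustration, not the disclosed implementation; the function name, the tolerance, and the timestamps are all hypothetical.

```python
# Hypothetical sketch: pair frames from two RGB-D streams by nearest timestamp.
# The frame times (in milliseconds) are illustrative, not from the disclosure.

def synchronize(stream_a, stream_b, tolerance_ms=20):
    """For each frame time in stream_a, find the closest frame time in
    stream_b; keep the pair only if the gap is within tolerance_ms."""
    pairs = []
    for t_a in stream_a:
        t_b = min(stream_b, key=lambda t: abs(t - t_a))
        if abs(t_b - t_a) <= tolerance_ms:
            pairs.append((t_a, t_b))
    return pairs

# A user-facing camera at ~30 fps vs. a worn camera with a small clock offset.
facing = [0, 33, 66, 99]
worn = [5, 38, 72, 130]
print(synchronize(facing, worn))  # [(0, 5), (33, 38), (66, 72)]
```

Frames with no counterpart within the tolerance (here, the frame at 99 ms) are dropped, so only time-aligned pairs feed the downstream tracking.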
Claims (as amended under Article 19 of the Treaty)

1. A system for augmented reality, comprising: glasses including a transparent display screen for coupling with an image capture device on a user; and a reality augmenter to automatically generate an augmented reality object based on an identification of an object in a field of view of the user, the object in the field of view of the user including an image of the user from a reflective surface, wherein the augmented reality object is to be viewable by the user on the transparent display screen when the user wears the glasses.

2. The system of claim 1, further comprising: an image capture device to face the user; and a synchronizer to synchronize data from the image capture device facing the user with data from the image capture device on the user.

3. The system of any one of claims 1 to 2, further comprising a locator to position the augmented reality object on the transparent display screen relative to the image of the user, which is to pass through the transparent display screen.

4. An apparatus for augmented reality, comprising: an object identifier to automatically identify an object in a field of view of a user, the object in the field of view of the user including an image of the user from a reflective surface; and a reality augmenter to automatically generate an augmented reality object based on the identification of the object.

5. The apparatus of claim 4, further comprising an image capture device on the user to capture the image of the user.

6. The apparatus of claim 4, further comprising: an image data identifier to identify image data for the image of the user; and a depth data identifier to identify depth data for the image of the user.

7. The apparatus of claim 4, further comprising: a device identifier to identify a device on the user; a skeleton identifier to identify a body position of the user; a facial identifier to identify a face of the user; and a gesture identifier to identify a gesture made by the user.

8. The apparatus of claim 4, further comprising: a map generator to generate a map from image data of a scene; a feature extractor to extract a feature from the image data of the scene; and a locator to position the user in the map based on a displacement of the feature in the map, and to position the augmented reality object in the map based on a location of the user.

9. The apparatus of claim 4, further comprising a synchronizer to synchronize data from an image capture device facing the user with data from an image capture device on the user.

10. The apparatus of any one of claims 4 to 9, further comprising a locator to position the augmented reality object on a transparent display screen relative to the image of the user, which is to pass through the transparent display screen.

11. At least one computer readable storage medium comprising a set of instructions which, when executed by a processor, cause the processor to: automatically identify an object in a field of view of a user, the object in the field of view of the user including an image of the user from a reflective surface; and automatically generate an augmented reality object based on the identification of the object.

12. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the processor to capture the image of the user.

13. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the processor to: identify image data for the image of the user; and identify depth data for the image of the user.

14. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the processor to: identify a device on the user; identify a body position of the user; identify a face of the user; and identify a gesture made by the user.

15. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the processor to: generate a map from image data of a scene; extract a feature from the image data of the scene; position the user in the map based on a displacement of the feature in the map; and position the augmented reality object in the map based on a location of the user.

16. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the processor to synchronize data from an image capture device facing the user with data from an image capture device on the user.

17. The at least one computer readable storage medium of any one of claims 11 to 16, wherein the instructions, when executed, cause the processor to position the augmented reality object on a transparent display screen relative to an image of the user, which is to pass through the transparent display screen.

18. A method for augmented reality, comprising: automatically identifying an object in a field of view of a user, the object in the field of view of the user including an image of the user from a reflective surface; and automatically generating an augmented reality object based on the identification of the object.

19. The method of claim 18, further comprising capturing the image of the user.

20. The method of claim 18, further comprising: identifying image data for the image of the user; and identifying depth data for the image of the user.

21. The method of claim 18, further comprising: identifying a device on the user; identifying a body position of the user; identifying a face of the user; and identifying a gesture made by the user.

22. The method of claim 18, further comprising: generating a map from image data of a scene; extracting a feature from the image data of the scene; positioning the user in the map based on a displacement of the feature in the map; and positioning the augmented reality object in the map based on a location of the user.

23. The method of claim 18, further comprising synchronizing data from an image capture device facing the user with data from an image capture device on the user.

24. The method of claim 18, further comprising positioning the augmented reality object on a transparent display screen relative to the image of the user, which is to pass through the transparent display screen.

25. An apparatus for augmented reality, comprising means for performing the method of any one of claims 18 to 24.
PROBLEM TO BE SOLVED: To provide a method and equipment for forming an analog capacitor on a semiconductor substrate.

SOLUTION: The method includes a process in which a field oxide is formed on a portion of a substrate, a polysilicon layer is formed on the field oxide, and a silicide layer is then formed on the polysilicon layer. A first interlayer dielectric layer is formed on the substrate, and a capacitor masking pattern is formed. Using the capacitor masking pattern as a mask and the silicide layer as an etch stop, the first interlayer dielectric layer is etched, and a thin dielectric is formed on the substrate. A contact masking pattern is formed on the substrate. Using the silicide layer and the substrate as etch stops, a second etch is performed on the thin dielectric. A metal layer is deposited on the substrate and then planarized. An analog capacitor is thereby formed.

COPYRIGHT: (C)2004, JPO
A method of forming a high-precision analog capacitor near the top of a semiconductor substrate, comprising: forming a field oxide layer on a portion of the substrate; forming a polysilicon layer on a portion of the field oxide layer and forming a silicide over the polysilicon layer, thereby defining a bottom plate of the capacitor; forming a first interlayer dielectric on the substrate and planarizing the first interlayer dielectric; forming a capacitor masking pattern on the substrate; etching the first interlayer dielectric using the capacitor masking pattern as a mask and the silicide as an etch stop, thereby defining a capacitor region; forming a thin dielectric on the substrate; forming a contact masking pattern on the substrate; etching the thin dielectric and the first interlayer dielectric using the contact masking pattern as a mask and the silicide as an etch stop, thereby forming a bottom plate contact hole; filling the capacitor region and the bottom plate contact hole by depositing metal on the substrate; and planarizing the metal to define the electrical connections of the capacitor to the top plate and to the bottom plate, whereby the electrical connections of the capacitor to the top plate and to the bottom plate are laterally insulated by the first interlayer dielectric.

A high-precision analog capacitor near the top of a semiconductor substrate, comprising a bottom plate, a top plate, and a thin dielectric layer between the plates, wherein the bottom plate comprises a polysilicon member and a silicide layer adjacent the thin dielectric layer, the top plate comprises a metal member of metallic material having substantially a first top plane, and an electrical connection path to the bottom plate comprises a metal member of metallic material having a second top plane coplanar with the first top plane.
High-precision analog capacitor with low voltage coefficient and hysteresis, and an innovative method of constructing the same

BACKGROUND OF THE INVENTION

1. Field of the Invention
The present invention relates generally to integrated circuits, and more particularly to high-precision analog capacitors for use in integrated circuits and methods of forming the same.

2. Description of the Related Art
Semiconductor device manufacturing combines the creation of various components that collectively perform data manipulation (logic functions) and data retention (storage functions). Most of these functions operate in a digital, or on/off, mode, recognizing "0" and "1" conditions at the working level of the circuit. There are also applications that use analog voltage levels, i.e., where the voltage takes a range of values varying between an upper limit and a lower limit, as well as applications in which both digital and analog methods of signal processing reside within the same semiconductor device. This mixing of functions and processing capabilities involves a mixture of components that coexist in a single semiconductor device. While most device components are transistors addressing logic-processing functions and various switching functions, resistors and capacitors forming part of the semiconductor device are also commonly found. For example, it is known that capacitors form the basic building blocks of many analog circuits used in analog applications such as analog-to-digital and digital-to-analog conversion. In addition to A/D conversion, capacitors perform a variety of important tasks required to interface digital data with the outside world, such as amplification, pre-filtering, demodulation, and signal conditioning. It is also well known in the art that capacitors are widely applied in digital applications, for example as the storage nodes of dynamic random access memory (DRAM) circuits.
Analog capacitors generally store information over a range of states, while digital capacitors store information in two states, low and high. With respect to analog capacitor fabrication, FIG. 1A illustrates a side view 100 of a conventional analog capacitor 105, and FIG. 1B illustrates a conventional method 150 of manufacturing the capacitor. One of the first processing steps required to form the capacitor 105 on the surface of the semiconductor substrate 110 is to electrically insulate the regions of the substrate surface where active-region transistors are created. Operation 160 of FIG. 1B isolates the device 105 from other devices (not shown) on the semiconductor substrate 110 by forming a field oxide (Fox) 115. One conventional solution in the semiconductor industry for forming the Fox 115 is the local oxidation of silicon (LOCOS) method. LOCOS typically uses a patterned silicon nitride (Si3N4) layer (not shown) as an oxidation barrier mask, and the underlying silicon substrate is selectively oxidized. One disadvantage of using LOCOS is that a non-planar substrate surface results. Another method of forming the field oxide (Fox) is shallow trench isolation (STI) (not shown). One STI approach is to first etch trenches (not shown) having substantially vertical sidewalls into the silicon substrate. These trenches are then typically filled by chemical vapor deposition (CVD) of silicon oxide (SiO2), after which the silicon oxide is planarized using chemical mechanical polishing (CMP) to form substantially flat STI regions. Following the formation of the Fox 115, a polysilicon layer 120 is formed on the Fox 115 in operation 162 of FIG. 1B. Subsequently, in operation 164, a silicide layer 125 is formed on the polysilicon layer 120 to provide a conductive etch stop on the polysilicon. Formation of the polysilicon layer 120 typically produces a vertical step 127 on the surface of the substrate 110.
Unfortunately, this vertical step 127 has a detrimental effect when forming the capacitor 105 in the prior art, as described below. Following the formation of the polysilicon layer 120 and the silicide 125, an oxide layer 130 is blanket deposited on the substrate in operation 166 of FIG. 1B, typically by low pressure chemical vapor deposition (LPCVD), to form a dielectric layer for the capacitor 105. A titanium nitride (TiN) layer 135 is then deposited on the substrate 110 in operation 168 of FIG. 1B. A hard mask layer 137 that is etch-selective with respect to the underlying TiN layer 135 is further formed on the TiN layer 135 in operation 170. At operation 172, a capacitor masking pattern (not shown) is formed, and a subsequent hard mask and TiN etch is performed at operation 174, thereby removing portions of the hard mask layer 137 and the TiN layer 135 and defining the top plate 140 of the capacitor 105. Following the TiN etch of operation 174, an interlayer dielectric (ILD) layer 142 is formed by conventional methods. In prior-art operation 178, a contact masking pattern (not shown) is formed on the ILD layer, and a contact hole 143 is opened. In operation 182, a metal 144 is deposited on the ILD layer 142, thereby filling the contact hole 143, and in operation 184 the metal is planarized. A connection layer 145 is then formed over the contact hole 143 to interconnect the capacitor 105 with other devices (not shown) on the semiconductor substrate 110. For the prior-art method 150, which uses the TiN layer 135 for the top plate 140 of the capacitor 105, the etch performed in operation 174 is critical: the etch must stop at the semiconductor substrate 110 in order to avoid perforating the substrate, yet the TiN on the silicide layer 125 (i.e., any unetched TiN remaining on the silicide layer) must be sufficiently etched away to avoid stringers that can cause leakage during operation of the capacitor 105.
Accordingly, the TiN etch process of operation 174 must be closely monitored to avoid the detrimental effects of both under-etching the TiN layer 135 and over-etching the semiconductor substrate 110. In addition, the step 127 of FIG. 1A, caused by forming the polysilicon layer 120 on the non-planar surface of the substrate 110, increases the difficulty of the TiN etch when LOCOS is used for Fox formation.

SUMMARY OF THE INVENTION
The following presents a simplified summary in order to provide a basic understanding of some aspects of the present invention. This summary is not an extensive overview of the invention. It is intended neither to identify key elements of the invention nor to delineate the scope of the invention. Its purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that follows. The present invention generally relates to a method of forming an analog capacitor on a semiconductor substrate, and more particularly to a method of forming a precision analog capacitor over a field oxide (Fox) on a semiconductor substrate. In accordance with the present invention, a field oxide layer is formed over a portion of the substrate. A polysilicon layer is formed over the field oxide layer, and a silicide is formed over the polysilicon layer, thereby defining the capacitor bottom plate. A first interlayer dielectric (ILD) layer is then formed on the substrate. According to one exemplary aspect of the invention, the first ILD layer comprises a plurality of layers. Following the formation of the first ILD layer, a capacitor masking pattern is formed on the substrate, and an etch process is performed in which the capacitor masking pattern is used as a mask and the silicide is used as an etch stop; the first interlayer dielectric is thereby etched, defining the capacitor region. A thin dielectric is then formed on the substrate.
According to another aspect of the invention, this thin dielectric is formed by a low pressure chemical vapor deposition (LPCVD) process. Following the formation of the thin dielectric, a contact masking pattern having one or more contact holes is formed on the substrate, and another etch process is then performed in which the contact masking pattern is used as a mask and the silicide as an etch stop to etch the thin dielectric and the first interlayer dielectric. According to another exemplary aspect of the present invention, the contact masking pattern includes bottom plate capacitor contact holes and moat contact holes, and the etching of the thin dielectric and the first ILD layer includes using the silicide as an etch stop for the bottom plate capacitor contact holes and using the semiconductor substrate as an etch stop for the moat contact holes. A metal layer is then formed on the substrate, the metal layer substantially filling the one or more contact holes. The metal layer is further planarized to define the electrical connections to the capacitor top plate and the capacitor bottom plate, and the first ILD laterally insulates the electrical connections to the capacitor top plate and the capacitor bottom plate. To the accomplishment of the foregoing and related ends, the invention comprises the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative embodiments of the invention. These embodiments are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Other objects, advantages, and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
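The inventive flow summarized above can be sketched as an ordered list of operations; the operation numbers follow FIG. 2A of the detailed description below, and the tabular form itself is purely illustrative, not part of the disclosure.

```python
# Hypothetical sketch: the disclosed flow as (operation, description, etch_stop)
# tuples. etch_stop is None for non-etch steps; operation numbers follow FIG. 2A.
FLOW = [
    (205, "form field oxide", None),
    (210, "form polysilicon bottom plate", None),
    (215, "form silicide on polysilicon", None),
    (220, "form first ILD", None),
    (230, "form capacitor masking pattern", None),
    (235, "etch first ILD to open capacitor region", "silicide"),
    (240, "form thin dielectric", None),
    (245, "form contact masking pattern", None),
    (250, "etch contact holes", "silicide / substrate moat"),
    (255, "deposit metal", None),
    (260, "planarize metal", None),
]

# Both etch steps stop on a dedicated layer, which is the stated contrast
# with the prior-art TiN etch that risked drilling into the substrate.
etch_steps = [(op, stop) for op, _, stop in FLOW if stop]
print(etch_steps)  # [(235, 'silicide'), (250, 'silicide / substrate moat')]
```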
DETAILED DESCRIPTION OF THE INVENTION
The present invention will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. It should be understood that the described aspects are merely exemplary and should not be interpreted in a limiting sense. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details. The terms "wafer" and "substrate" should be understood to include silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor foundation, and other semiconductor structures. Furthermore, when reference is made to a "wafer" or "substrate" in the following description, previous processing steps may have been utilized to form regions or junctions in the base semiconductor structure or foundation. In addition, the semiconductor need not be silicon-based, but could be based on silicon germanium, germanium, or gallium arsenide, for example. The present invention is directed to a method of forming an analog capacitor on a semiconductor substrate. Although the exemplary method is illustrated and described herein as a series of operations or events, it will be appreciated that the present invention is not limited by the illustrated ordering of such operations or events, as some steps may occur in different orders and/or concurrently with other steps apart from those illustrated and described herein. Moreover, not all illustrated steps may be required to implement the invention. Further, it will be appreciated that the method may be implemented in association with the apparatus and systems illustrated and described herein, as well as in association with other systems not illustrated. Referring now to FIG.
2A, a method 200 for forming an analog capacitor on a semiconductor substrate according to one aspect of the present invention will be described. The method 200 begins at operation 205, where a field oxide (Fox) is formed on a semiconductor substrate, thereby defining active regions (i.e., the regions where semiconductor devices are created). The active regions are further electrically isolated from one another on the substrate surface by the field oxide. A cross-sectional view 300 of an exemplary semiconductor substrate 301 is shown in FIG. 3, wherein a field oxide 305 is formed on the semiconductor substrate by a local oxidation of silicon (LOCOS) method. Alternatively, the field oxide 305 can be formed using a shallow trench isolation (STI) method; indeed, any method of forming an isolating field oxide on the semiconductor substrate 301 is contemplated as falling within the scope of the present invention. Following the formation of the field oxide 305, a polysilicon layer 310 is formed over the field oxide in operation 210 of FIG. 2A. As shown in FIG. 3, for example, the polysilicon layer 310 is blanket deposited on the substrate 301 and patterned by conventional lithographic and etching techniques. The polysilicon layer 310 that remains after the etch thereby defines the capacitor bottom plate 314. After the polysilicon layer 310 is formed over the field oxide 305 in operation 210, a silicide layer 315 is formed on the polysilicon layer in operation 215 of FIG. 2A. As will be appreciated by those skilled in the art, the silicide layer 315 of FIG. 3 is formed by metal deposition and heat treatment of the substrate 301. FIG. 4 illustrates a first interlayer dielectric (ILD) layer 320 (i.e., an oxide layer) formed on the semiconductor substrate 301 in operation 220 of FIG. 2A. According to one exemplary aspect of the invention, the ILD layer 320 comprises a plurality of layers.
For example, the ILD layer 320 may include one or more of borophosphosilicate glass (BPSG), phosphosilicate glass (PSG), undoped silicate glass (USG), borosilicate glass (BSG), tetraethylorthosilicate (TEOS), undoped silicon dioxide, and the like. According to an exemplary aspect of the present invention, the first ILD layer 320 is formed to a thickness of about 10 kÅ by the operations illustrated in FIG. 2B. In accordance with a preferred embodiment of the present invention, the formation of the first ILD layer 320 begins with operation 221 of FIG. 2B, in which a first USG layer is formed. FIG. 4 illustrates the first USG layer 322 formed on the substrate 301. As will be appreciated by those skilled in the art, the first USG layer 322 is formed to substantially prevent dopants, such as phosphorus from the PSG of the ILD layer 320, or phosphorus and boron from BPSG, from migrating into the polysilicon layer 310 or the substrate 301. Following the formation of the first USG layer 322, a PSG layer 323 is formed in operation 222 of FIG. 2B, and the USG and PSG layers are then densified in operation 223. The densification of operation 223 is accomplished by a thermal flow treatment, as will be appreciated by those skilled in the art; densification is generally performed to reduce the viscosity of the ILD layer and thereby generally stabilize it. Following the densification of operation 223, planarization of the PSG layer 323 of FIG. 4 is performed. The planarization of the PSG layer 323 can be carried out, for example, by a chemical mechanical polishing (CMP) process. As will be appreciated by those skilled in the art, CMP uses a combination of chemical etching and mechanical abrasion, in which a rotating platen and polishing head (not shown) are typically applied to the surface in order to substantially planarize the PSG layer 323. After planarizing the PSG layer 323 in operation 224 of FIG.
2B, a second USG layer is formed on the substrate in operation 225. FIG. 5 illustrates the second USG layer 324 formed on the PSG layer 323, thus completing the formation of the multi-layered first ILD layer 320. The second USG layer 324 is, for example, tetraethylorthosilicate (TEOS), and blocks phosphorus diffusion between the PSG layer 323 and a metal layer (not shown) to be formed subsequently. For clarity, the PSG layer 323 and the second USG layer 324 are hereinafter illustrated collectively as the ILD layer 320, as shown in FIG. 6, it being understood that the ILD layer 320 may comprise any interlayer dielectric. Referring again to FIG. 2A, following the formation of the ILD layer 320 in operation 220, a capacitor masking pattern is formed in operation 230. FIG. 6 illustrates an exemplary capacitor masking pattern 325, which is formed by a conventional lithographic process, as will be appreciated by those skilled in the art. The capacitor masking pattern typically exposes the capacitor region 326 of the first ILD layer 320 over the bottom plate 314 and covers the remainder of the semiconductor substrate 301 with photoresist. Accordingly, the first ILD layer 320 is etched in the capacitor region 326 in operation 235 of FIG. 2A. FIG. 7 illustrates the result of etching the first ILD layer 320 using the capacitor masking pattern 325 as a mask and the silicide layer 315 as an etch stop within the capacitor region 326. The etch process performed in operation 235 of FIG. 2A comprises, for example, an anisotropic dry etch performed to expose the silicide layer 315 over the bottom plate 314 in the capacitor region 326. Following the etch of the first ILD layer 320, the mask 325 is removed by conventional methods such as ashing. In operation 240 of FIG. 2A, a thin dielectric is formed on the semiconductor substrate. FIG. 8 illustrates a thin dielectric 330 (e.g., a thin oxide) overlying the first ILD layer 320 and the silicide layer 315 exposed in the capacitor region 326.
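The role of the silicide layer 315 as an etch stop in operation 235 can be illustrated with a toy model (the function, the layer names, and the stack are hypothetical, not from the disclosure): the etch removes material top-down until the designated etch-stop layer is exposed, which it does not attack.

```python
# Toy model of etching with an etch stop: layers are removed top-down
# until the etch-stop material is exposed. Purely illustrative.

def etch_until_stop(stack, etch_stop):
    """stack is ordered bottom -> top; return the stack after removing
    every layer above (and excluding) the etch-stop layer."""
    result = list(stack)
    while result and result[-1] != etch_stop:
        result.pop()
    return result

# Stack in the capacitor region (bottom -> top), loosely mirroring FIG. 6.
stack = ["field oxide", "polysilicon", "silicide", "ILD"]
print(etch_until_stop(stack, "silicide"))
# ['field oxide', 'polysilicon', 'silicide']
```

This is why the selectivity of the etch chemistry to the stop layer matters: in the prior-art flow with no such stop over the substrate, the same loop would consume the wafer surface itself.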
The formation of the thin dielectric 330 defines the capacitor dielectric 331 in the capacitor region 326 and, as described below, protects the subsequently deposited metal layer (not shown) from gas diffusion out of the ILD layer 320. The thin dielectric 330 is formed to a thickness of between 200 and 1000 Å, for example by a low pressure chemical vapor deposition (LPCVD) process, depending on the capacitance and voltage-coefficient requirements. A thinner dielectric 330 results in a relatively high capacitance per unit area, as is typically preferred in analog applications. However, making the thin dielectric 330 too thin (less than 200 Å) is also typically undesirable in analog applications, since the dielectric 330 would then have a relatively high voltage coefficient. Forming the thin dielectric 330 by the LPCVD process is advantageous because, in general, the LPCVD process forms the thin dielectric 330 with a uniform thickness, which is particularly important in the capacitor region 326 in order to maintain low hysteresis in the capacitor (not shown). However, other methods of forming the thin dielectric 330, such as PECVD or APCVD, are contemplated as falling within the scope of the present invention. In accordance with one exemplary aspect of the invention, a titanium nitride (TiN) layer (not shown) is formed over the thin dielectric 330 to further protect the thin dielectric 330 from the subsequent planarization described below. Following the formation of the thin dielectric 330 in operation 240 of FIG. 2A, a contact masking pattern is formed in operation 245. FIG. 9 illustrates a contact masking pattern 335 formed by a conventional photolithography process. The contact masking pattern 335 includes one or more contact holes 340 that expose the thin dielectric 330, while the remaining portion of the semiconductor substrate 301 is covered by the contact masking pattern. Next, in operation 250 of FIG.
2A, an etch process is performed using the contact masking pattern 335 of FIG. 9 as a mask and the silicide as an etch stop. FIG. 10 illustrates the result of performing operation 250, wherein the thin dielectric layer 330 and the first ILD layer 320 are etched through the contact holes 340 in the contact masking pattern 335. In accordance with another aspect of the present invention, the etch performed in operation 250 removes the thin dielectric 330 and the first ILD layer 320 beneath the contact holes 340 in the contact masking pattern 335, as shown in FIG. 10, to form a bottom plate contact hole 350 and a moat contact hole 352. The etch of operation 250 thus uses the silicide layer 315 as an etch stop in forming the bottom plate contact hole 350, and uses the moat 354 as an etch stop in forming the moat contact hole 352. The moat 354 is a heavily doped region of the semiconductor substrate 301 that allows a particular potential, such as a ground voltage or Vss, to be applied to a device formed on the substrate. It should be noted that the etch performed in operation 250 of FIG. 2A does not suffer from the TiN-etch drawbacks described above for the prior art. In conventional processing, a dry TiN etch was used to obtain a straight TiN layer profile. A major drawback of the dry TiN etch, however, is the poor selectivity of TiN-to-poly or TiN-to-silicon etching. If the dry over-etch is optimized for removal of TiN "stringers" (e.g., TiN remaining along the edges of the poly 310 or the field oxide 305), the etch begins to drill into the semiconductor surface 301, causing diode leakage problems. One solution is to convert all or part of the dry etch into a wet etch, because a wet etch removes the TiN stringers more easily without damaging the semiconductor surface 301. However, a wet etch deleteriously undercuts the TiN layer and the dielectric layer (e.g., at the capacitor edge).
This, in turn, degrades the tuning performance of the capacitor, which is a critical requirement for capacitors used in analog circuit applications. Following the etch process performed in operation 250 of FIG. 2A, a metal layer is deposited on the semiconductor substrate in operation 255. FIG. 11 illustrates a metal layer 355 formed on the semiconductor substrate 301. The metal layer 355 comprises, for example, tungsten, which substantially fills the bottom plate contact hole 350, the moat contact hole 352, and the capacitor region 326. Next, in operation 260 of FIG. 2A, the metal is planarized and a portion of the metal layer 355 is removed. As shown in FIG. 12, the planarization performed in operation 260 electrically isolates the capacitor 360 and further defines the capacitor top plate 361, the bottom plate connector 362, and the moat connector 363. In addition, the electrical connection areas 365 to the top plate 361, the bottom plate connector 362, and the moat connector 363 are defined by the planarization performed in operation 260. According to one aspect of the invention, the metal layer 355, being a metal such as tungsten, advantageously provides a low voltage coefficient in the capacitor 360. The planarization further electrically isolates the capacitor 360 from other devices (not shown) on the semiconductor substrate 301. According to another exemplary aspect of the present invention, a barrier metal (not shown), such as titanium and/or titanium nitride, is formed prior to depositing the metal layer in operation 255 of FIG. 2A, and is planarized along with the metal layer deposited in operation 255. According to another aspect of the present invention, a conductive connection layer 370 as shown in FIG. 13 is formed and patterned on the electrical connection areas 365 to connect the capacitor 360 to other devices (not shown) on the semiconductor substrate 301.
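The statement earlier that a thinner dielectric gives a higher capacitance per unit area follows from the parallel-plate relation C/A = ε₀·εᵣ/d. The sketch below evaluates it over the 200 to 1000 Å range given in the description; the assumption εᵣ ≈ 3.9 (silicon dioxide) is mine, since the disclosure does not fix the dielectric material. The "voltage coefficient" is conventionally characterized by fitting C(V) ≈ C₀(1 + α₁V + α₂V²), a standard definition also not stated in the disclosure.

```python
# Parallel-plate capacitance per unit area, C/A = eps0 * eps_r / d.
# eps_r = 3.9 (SiO2) is an assumed value; the patent does not specify it.
EPS0 = 8.854e-12  # F/m

def cap_per_area_fF_um2(thickness_angstrom, eps_r=3.9):
    d_m = thickness_angstrom * 1e-10
    c_per_m2 = EPS0 * eps_r / d_m      # F/m^2
    return c_per_m2 * 1e15 / 1e12      # fF/um^2 (1 m^2 = 1e12 um^2)

# Thinner dielectric -> higher capacitance per unit area:
for d in (200, 500, 1000):
    print(d, round(cap_per_area_fF_um2(d), 2))
# 200 1.73
# 500 0.35
# 1000 0.17
```

The roughly five-fold spread across the allowed thickness range shows why the thickness is chosen against the capacitance requirement, while the 200 Å floor guards the voltage coefficient.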
While the invention has been shown and described with respect to several aspects, equivalent alterations and modifications will occur to others skilled in the art upon reading and understanding this specification and the annexed drawings. In particular regard to the various functions performed by the above-described components (systems, devices, assemblies, etc.), the terms used to describe such components are intended, unless otherwise indicated, to correspond to any component which performs the specified function of the described component (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the exemplary aspects of the invention illustrated herein. Moreover, while a particular feature of the invention may have been disclosed with respect to only one of several aspects, such feature may be combined with one or more other features of the other aspects as may be desired and advantageous for any given or particular application. Further, to the extent that the term “include” is used in either the claims or the detailed description of the invention, such term is intended to be inclusive in a manner similar to the term “comprising.” The following items are further disclosed with respect to the above description. (1) A method of forming a high-precision analog capacitor near the top of a semiconductor substrate, comprising: forming a field oxide layer on a portion of the substrate; forming a polysilicon layer on a portion of the field oxide layer; forming a silicide on the polysilicon layer, thereby defining a bottom plate of the capacitor; forming a first interlayer dielectric on the substrate; planarizing the first interlayer dielectric; forming a capacitor masking pattern on the substrate; using the capacitor masking pattern as a mask and the silicide as an etch stop.
Defining a capacitor region by etching the first interlayer dielectric; forming a thin dielectric on the substrate; forming a contact masking pattern on the substrate; etching the thin dielectric and the first interlayer dielectric, using the contact masking pattern as a mask and the silicide as an etch stop, to form a bottom plate contact hole; filling the capacitor region and the bottom plate contact hole by depositing a metal on the substrate; and planarizing the metal to define the capacitor top plate and the electrical connections of the capacitor to the top plate and to the bottom plate, whereby the electrical connections of the capacitor to the top plate and to the bottom plate are laterally insulated by the first interlayer dielectric. (2) The method of item 1, wherein forming the first interlayer dielectric on the substrate comprises: depositing a first undoped silicate glass layer on the substrate; depositing a phosphosilicate glass layer on the substrate; densifying the first undoped silicate glass layer and the phosphosilicate glass layer; planarizing the substrate; and depositing a second undoped silicate glass layer on the planarized substrate. (3) The method of item 1, wherein forming the first interlayer dielectric on the substrate comprises forming tetraethylorthosilicate (TEOS) to a thickness of approximately 10 kÅ. (4) The method of item 1, wherein forming the thin dielectric on the substrate comprises a low pressure chemical vapor deposition (LPCVD) process that forms the thin dielectric to a thickness of approximately 200 to 1000 Å. (5) The method of item 1, further comprising forming a barrier metal layer comprising titanium nitride or tungsten/titanium on the substrate prior to depositing the metal layer. (6) The method of item 1, wherein the metal layer comprises tungsten.
(7) The method of item 1, wherein etching the thin dielectric and the first interlayer dielectric further comprises using the contact masking pattern as a mask and the moat as an etch stop to form a moat contact hole, and wherein depositing the metal on the substrate further comprises filling the moat contact hole. (8) A high-precision analog capacitor near the top of a semiconductor substrate, comprising a bottom plate, a top plate, and a thin dielectric layer between the plates, wherein the bottom plate comprises a polysilicon member and a silicide layer adjacent the thin dielectric layer, wherein the top plate comprises a metal member of a metallic material having a substantially planar first top surface, and further comprising an electrical connection path to the bottom plate that includes a metal member of the metallic material having a second top surface coplanar with the first top surface. (9) The capacitor of item 8, wherein the metallic material comprises tungsten. (10) The capacitor of item 8, wherein the metallic material comprises titanium nitride or tungsten/titanium. (11) The present invention relates to a method of forming an analog capacitor on a semiconductor substrate (301). The method forms a field oxide (305) on a portion of the substrate (301), forms a polysilicon layer (310) on the field oxide, and then forms a silicide (315) on the polysilicon layer (310). A first interlayer dielectric layer (320) is formed on the substrate (301), and a capacitor masking pattern (325) is formed. Using the capacitor masking pattern (325) as a mask and the silicide layer (315) as an etch stop, the first interlayer dielectric is etched, and a thin dielectric (330) is formed on the substrate (301). A contact masking pattern (335) is formed on the substrate (301), and a subsequent etch is performed on the thin dielectric (330) using the silicide (315) and the substrate as etch stops.
A metal layer (355) is deposited on the substrate (301) and then planarized, thereby defining an analog capacitor. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1A illustrates a partial cross-sectional view of a conventional analog capacitor formed by a prior art method. FIG. 1B illustrates a prior art method of forming a conventional analog capacitor. FIG. 2A illustrates a method of forming an analog capacitor according to the present invention. FIG. 2B illustrates a method of forming an analog capacitor according to one aspect of the present invention. FIG. 3 illustrates a partial cross-sectional view of the steps of forming field oxide and polysilicon layers for an analog capacitor according to the present invention. FIG. 4 illustrates a partial cross-sectional view of the steps of forming a first interlayer dielectric layer for an analog capacitor according to the present invention. FIG. 5 illustrates a partial cross-sectional view of the steps of forming a first interlayer dielectric (ILD) layer for an analog capacitor according to one aspect of the present invention. FIG. 6 illustrates a partial cross-sectional view of the step of forming a capacitor masking pattern for an analog capacitor according to the present invention. FIG. 7 illustrates a partial cross-sectional view of the step of etching a first ILD layer for an analog capacitor according to the present invention. FIG. 8 illustrates a partial cross-sectional view of the steps of forming a thin dielectric layer for an analog capacitor according to the present invention. FIG. 9 illustrates a partial cross-sectional view of the steps of forming a contact masking pattern for an analog capacitor according to the present invention. FIG. 10 illustrates a partial cross-sectional view of the step of etching a thin dielectric layer and a first ILD layer for an analog capacitor according to the present invention. FIG.
11 illustrates a partial cross-sectional view of forming a metal layer for an analog capacitor according to the present invention. FIG. 12 illustrates a partial cross-sectional view of planarizing a first metal layer for an analog capacitor according to the present invention. FIG. 13 illustrates a partial cross-sectional view of the steps of forming a conductive connection layer for an analog capacitor according to the present invention. [Description of Symbols] 301, semiconductor substrate; 305, field oxide; 310, polysilicon layer; 314, bottom plate; 315, silicide layer; 320, first interlayer dielectric (ILD) layer; 322, first USG layer; 323, PSG layer; 325, capacitor masking pattern; 326, capacitor region; 330, thin dielectric; 335, contact masking pattern; 340, contact hole; 350, bottom plate contact hole; 352, moat contact hole; 354, moat; 360, capacitor; 361, top plate; 362, bottom plate connector; 363, moat connector; 365, electrical connection region; 370, conductive connection layer.
Systems, apparatuses, and methods related to a controller for managing metrics and telemetry are described. A controller includes a front end portion, a central controller portion, a back end portion, and a management unit. The central controller portion can include a cache to store data associated with performance of memory operations, metric logic configured to collect metrics related to performance of the memory operations, load telemetry logic configured to collect load telemetry associated with performance of the memory operations within a threshold time, and a storage area to store the collected metrics and the collected load telemetry. The management unit memory of the controller can store metrics and load telemetry associated with monitoring the characteristics of the memory controller and, based on the stored metrics and load telemetry, the management unit can alter at least one characteristic of the computing system.
What is claimed is:1. An apparatus, comprising: a memory controller configured to manage a first type of memory device, wherein the memory controller comprises: a front end portion comprising: an interface that includes a plurality of input/output (I/O) lanes; and circuitry to manage the interface; a central controller portion configured to, in response to receiving a signaling indicative of access requests from the host, perform memory operations, wherein the central controller portion comprises: a cache to store data associated with the performance of the memory operations; metric logic configured to collect metrics related to performance of the memory operations; load telemetry logic configured to collect load telemetry associated with performance of the memory operations within a threshold time; and a storage area to store the collected metrics and the collected load telemetry; a back end portion to couple the memory controller to the first type of memory device; and a management unit configured to: based on the stored metrics and load telemetry, alter at least one characteristic of the computing system.2. The apparatus of claim 1, wherein the first type of memory device is one of a dynamic random access memory (DRAM) device and a ferroelectric random access memory (FeRAM) device.3. The apparatus of claim 1, further comprising a memory controller configured to manage a second type of memory device.4. The apparatus of claim 1, wherein the metric logic includes a plurality of counters to perform respective counts related to the access requests, and wherein the metrics include information corresponding to the respective counts of the plurality of counters.5. 
The apparatus of claim 4, wherein the plurality of counters comprise at least one of a read hit counter, write hit counter, read miss counter, write miss counter, replacement counter, writeback counter, total read access counter, total write access counter, cache set read access counter, cache set write access counter, or any combination thereof.6. The apparatus of claim 4, wherein the cache further comprises a set associative cache, and wherein the metric logic is configured to: collect, via the plurality of counters, a count for each set in the set associative cache; and determine a most frequently accessed set based on the collected counts.7. The apparatus of claim 1, wherein the plurality of I/O lanes are configured to transfer access requests to or from circuitry external to the memory controller according to a compute express link protocol.8. The apparatus of claim 3, further comprising a peripheral component interconnect express (PCIe) 5.0 interface coupled to the plurality of I/O lanes, wherein the memory controller is to receive access requests involving at least one of the cache, the first type of memory device, or the second type of memory device, or any combination thereof, via the PCIe 5.0 interface according to a compute express link protocol.9. 
A system, comprising: a host; and a memory controller configured to manage a dynamic random access memory (DRAM) memory device, wherein the memory controller comprises: a front end portion to couple the memory controller to the host; a central controller portion configured to, in response to receiving a signaling indicative of access requests from the host, perform memory operations, wherein the central controller portion comprises: a plurality of cache memories including a first cache and a second cache, wherein each cache of the plurality of cache memories comprises: a metric logic including a plurality of counters, the metric logic configured to collect, within a threshold amount of time, metrics related to the memory operations using the plurality of counters; and load telemetry logic configured to collect, within the threshold amount of time, load telemetry associated with performing the memory operations using a plurality of load telemetry counters; and a storage area to store the collected metrics and the collected load telemetry; a back end portion configured to couple the memory controller to the DRAM memory device; and a management unit configured to: store the collected metrics and the collected load telemetry in the storage area; and based on the stored metrics and load telemetry, alter at least one of a characteristic of the interface in the front end portion, a characteristic of the DRAM memory device, a characteristic of the cache memories, or any combination thereof.10. The system of claim 9, wherein the load telemetry logic is to calculate a load path of the memory operation using the DRAM memory device.11. The system of claim 9, wherein the plurality of counters further comprise a memory operation hit counter configured to increase when a memory operation hit is detected.12. The system of claim 9, wherein the plurality of counters further comprises a memory operation miss counter configured to increase when a memory operation miss is detected.13. 
The system of claim 9, wherein the management unit is further configured to alter a data transfer rate of the interface in the front end portion based on the stored metrics and the collected load telemetry in the storage area.14. The system of claim 9, wherein each counter of the plurality of counters is configured to reset to an initial value after the threshold amount of time has elapsed.15. The system of claim 9, wherein the storage area further comprises a plurality of rows, each of the plurality of rows includes a plurality of slots, and wherein each counter of the plurality of counters is to store respective counts in a respective row of the plurality of rows.16. A method, comprising: receiving a signaling indicative of access requests involving either a first type of memory device or a second type of memory device; responsive to the receipt of the signaling indicative of the access requests, performing memory operations on a cache of a central controller portion; collecting, for a threshold amount of time, information associated with performing the memory operations on the cache, the information including: metrics collected via a metric logic in the central controller portion; and load telemetry collected via load telemetry logic in the central controller portion; and based on the collected information, altering at least one of a characteristic of an interface of the front end portion, a characteristic of a first type of memory device, a characteristic of a second type of memory device, a characteristic of the cache, or any combination thereof.17. The method of claim 16, further comprising collecting the information with a plurality of counters.18. The method of claim 16, further comprising splitting the cache into at least two sub-cache memories, wherein each sub-cache of the at least two sub cache memories comprises a respective metric logic and a respective load telemetry logic.19. 
The method of claim 18, further comprising accessing the sub-cache memories substantially concurrently to collect the information.20. The method of claim 16, further comprising receiving the signaling indicative of the access request at a rate of thirty-two gigatransfers per second or greater.
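To make the counter set recited in claim 5 concrete, the following sketch models a subset of those counters (read/write hit and miss, replacement, writeback) with derived total-access counts. All class, method, and event names here are illustrative assumptions, not part of the claimed apparatus:

```python
from collections import Counter

class MetricLogic:
    """Hypothetical per-cache metric counters (cf. claim 5).
    Only a subset of the claimed counters is modeled here."""

    EVENTS = ("read_hit", "write_hit", "read_miss", "write_miss",
              "replacement", "writeback")

    def __init__(self) -> None:
        self.counts = Counter()

    def record(self, event: str) -> None:
        if event not in self.EVENTS:
            raise ValueError(f"unknown event: {event}")
        self.counts[event] += 1

    def total_reads(self) -> int:
        # Total read access counter = read hits + read misses.
        return self.counts["read_hit"] + self.counts["read_miss"]

    def total_writes(self) -> int:
        # Total write access counter = write hits + write misses.
        return self.counts["write_hit"] + self.counts["write_miss"]

m = MetricLogic()
for e in ("read_hit", "read_miss", "read_hit", "write_miss"):
    m.record(e)
print(m.total_reads(), m.total_writes())  # 3 1
```

The point of the derived totals is that per-event counters are sufficient: the claimed total read/write access counters can be maintained either directly or as sums of hit and miss counts.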
CONTROLLER FOR MANAGING METRICS AND TELEMETRYTechnical Field[0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to apparatuses, systems, and methods for a controller for managing metrics and telemetry.Background[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic systems. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data (e.g., host data, error data, etc.) and includes random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), and thyristor random access memory (TRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, ferroelectric random access memory (FeRAM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), such as spin torque transfer random access memory (STT RAM), among others.[0003] Memory devices may be coupled to a host (e.g. , a host computing device) to store data, commands, and/or instructions for use by the host while the computer or electronic system is operating. For example, data, commands, and/or instructions can be transferred between the host and the memory device(s) during operation of a computing or other electronic system. A controller may be used to manage the transfer of data, commands, and/or instructions between the host and the memory devices.Brief Description of the Drawings[0004] Figure 1 illustrates a functional block diagram in the form of a computing system including a controller for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure. 
[0005] Figure 2 illustrates a functional block diagram in the form of a controller for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure.[0006] Figure 3 illustrates a functional block diagram in the form of another controller for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure.[0007] Figure 4 illustrates a flow diagram of an example method for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure.[0008] Figure 5 illustrates a block diagram illustrating a flow of data through a controller for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure.[0009] Figure 6 illustrates a controller for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure.[0010] Figure 7 illustrates a functional block diagram in the form of a cache for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure.[0011] Figure 8 illustrates a functional block diagram in the form of a cache for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure.Detailed Description[0012] Systems, apparatuses, and methods related to a controller for managing metrics and telemetry are described. A controller includes a front end portion, a central controller portion, a back end portion, and a management unit. The central controller portion can include a cache. The cache can store data associated with memory operations. For instance, the cache can store data associated with memory operations (e.g., a read or a write) performed responsive to signaling indicative of a memory request (e.g., read request and/or write request). As detailed herein, the cache can include metric logic and load telemetry logic.
The metric logic can collect information related to metrics associated with memory requests (i.e., access requests) and/or metrics associated with performance of memory operations (e.g., metrics associated with reads/writes). For instance, the metric logic can collect metrics associated with memory requests and/or metrics associated with memory operations performed on the cache and/or other memory devices. Similarly, the load telemetry logic can collect information related to load telemetry associated with memory requests (i.e., access requests) and/or loads associated with performance of memory operations (e.g., load telemetry associated with reads/writes).[0013] Notably, based on the stored metrics and load telemetry, embodiments herein can alter at least one characteristic of the computing system. For instance, the metrics and load telemetry can cause an interface, a memory, and/or a cache to be altered, as detailed herein. Such alteration of a computing system characteristic based on the stored metrics and load telemetry can improve memory performance in comparison to approaches in which a characteristic is not altered and/or to other approaches that may attempt to make a change based solely on either load telemetry or various metrics.[0014] Moreover, embodiments herein can collect metrics and load telemetry for a threshold amount of time. Such collection of the metrics and load telemetry for a threshold amount of time can, as detailed herein, permit enhanced control and thereby improve memory performance in contrast to other approaches that do not collect metrics and load telemetry for a threshold amount of time, such as approaches that continually increment a counter.[0015] Systems, apparatuses, and methods related to a controller (e.g., a memory or media controller portion) for managing metrics and telemetry are described.
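The threshold-time behavior described above (collect for a window, then reset, rather than continually increment) can be sketched with a small windowed counter. The interface and names are illustrative assumptions, not taken from the disclosure:

```python
import time

class WindowedCounter:
    """Counter that resets after a threshold amount of time has
    elapsed, a sketch of collecting metrics within a threshold time
    rather than continually incrementing. Names are illustrative."""

    def __init__(self, threshold_s: float, clock=time.monotonic) -> None:
        self.threshold_s = threshold_s
        self.clock = clock
        self.window_start = clock()
        self.count = 0

    def increment(self) -> None:
        if self.clock() - self.window_start >= self.threshold_s:
            # Window expired: start a fresh window instead of
            # growing the count forever.
            self.window_start = self.clock()
            self.count = 0
        self.count += 1

# A simulated clock keeps the example deterministic.
now = [0.0]
c = WindowedCounter(threshold_s=1.0, clock=lambda: now[0])
for t in (0.1, 0.2, 0.9, 1.5):  # the last event falls outside the window
    now[0] = t
    c.increment()
print(c.count)  # 1 -- the 1.5 s event started a fresh window
```

A real implementation would presumably snapshot the expiring count into the storage area before resetting; that step is omitted here for brevity.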
The controller can orchestrate performance of operations to write data to and read data from a cache.[0016] The memory controller can include a front end portion, a central controller portion, a back end portion, and a management unit. The front end portion can couple the memory controller to external circuitry or an external device, such as a host computing device that can generate requests to read or write data to and/or from the cache and/or the memory device(s). In some embodiments, the memory controller can manage a first type of memory device. In yet another embodiment, the memory controller can manage a first type of memory device and a second type of memory device. In some embodiments, a first type of memory device can be a DRAM memory device and a second type of memory device can be a FeRAM memory device. However, this disclosure is not so limited. For example, either the first memory device or the second memory device can be another low latency RAM memory device. The DRAM memory device and the FeRAM memory device can be simultaneously coupled to the memory controller. As memory devices are tasked with performing more complicated operations, multiple types of memory devices with different sets of timing characteristics may be implemented in a memory system to store different types of data. In some embodiments, one of the timing characteristics can be row address strobe timing (tRAS). As used herein, the term “row address strobe timing” generally refers to the minimum number of clock cycles required between a row activation command and issuance of signaling to precharge the row. That is, “row address strobe timing” can relate to an amount of time required by a memory device to refresh a row after an operation involving the row has occurred.[0017] The memory controller can include a variety of components to monitor the behavior of access requests. For example, the memory controller can include a central controller portion comprising a cache.
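The relationship between tRAS as an amount of time and tRAS as a minimum number of clock cycles can be illustrated with a small conversion sketch. The frequency and timing values below are illustrative assumptions, not figures from the disclosure:

```python
import math

def tras_cycles(tras_ns: float, clock_mhz: float) -> int:
    """Minimum whole clock cycles between a row activate command and
    the precharge of that row. Values are illustrative; real tRAS
    figures come from the memory device's datasheet."""
    cycle_ns = 1000.0 / clock_mhz  # duration of one clock cycle in ns
    return math.ceil(tras_ns / cycle_ns)

# Hypothetical example: a 32 ns tRAS at a 1600 MHz memory clock.
print(tras_cycles(32.0, 1600.0))  # 52 cycles (0.625 ns per cycle)
```

This is why two memory types with different tRAS values impose different scheduling constraints on the same controller: the faster the clock, the more cycles the same nanosecond requirement consumes.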
The cache can receive access requests from the host and/or a memory device. The cache can monitor the received access requests to determine their behavior. The behavior can determine whether at least one characteristic of the interface in the front end portion should be altered.[0018] In some embodiments, the memory system can be a Compute Express Link (CXL) compliant memory system (e.g., the memory system can include a PCIe/CXL interface). CXL is a high-speed central processing unit (CPU)-to-device and CPU-to-memory interconnect designed to accelerate next-generation data center performance. CXL technology maintains memory coherency between the CPU memory space and memory on attached devices, which allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost.[0019] CXL is designed to be an industry open standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as artificial intelligence and machine learning. CXL technology is built on the peripheral component interconnect express (PCIe) infrastructure, leveraging PCIe physical and electrical interfaces to provide advanced protocols in areas such as input/output (I/O) protocol, memory protocol (e.g., initially allowing a host to share memory with an accelerator), and coherency interface.[0020] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced.
These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and structural changes may be made without departing from the scope of the present disclosure.[0021] It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” can include both singular and plural referents, unless the context clearly dictates otherwise. In addition, “a number of,” “at least one,” and “one or more” (e.g., a number of memory banks) can refer to one or more memory banks, whereas a “plurality of” is intended to refer to more than one of such things.[0022] Furthermore, the words “can” and “may” are used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term “include,” and derivations thereof, means “including, but not limited to.” The terms “coupled” and “coupling” mean to be directly or indirectly connected physically or for access to and movement (transmission) of commands and/or data, as appropriate to the context. The terms “data” and “data values” are used interchangeably herein and can have the same meaning, as appropriate to the context.[0023] Figure 1 illustrates a functional block diagram in the form of a computing system 101 including a controller 100 for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure. The computing system 101 can include a controller 100 (i.e., a memory controller) comprising a front end portion 104, a central controller portion 110, and a back end portion 119. The computing system 101 can be coupled to a host 103 and memory devices 126, 128. However, this disclosure is not so limited.
For example, in some embodiments, the computing system 101 can be coupled to a host 103 and a single memory device 126 or 128. [0024] In some embodiments, the controller 100 can manage a DRAM memory device 126 having a first tRAS and a low latency RAM memory device 128 having a second tRAS. In some embodiments, the tRAS of the low latency RAM memory device 128 is less than a threshold value (e.g., less than a given number of nanoseconds).[0025] The controller 100 can have a front end portion 104 that includes an interface to couple the controller 100 to the host 103 through input/output (I/O) lanes 102-1, 102-2, . . ., 102-N (individually or collectively referred to as I/O lanes 102). In some embodiments, there can be eight (8) I/O lanes 102 and in other embodiments there can be sixteen (16) I/O lanes 102. In some embodiments, the plurality of I/O lanes 102 can be configured as a single port. [0026] The memory controller 100 can include a central controller portion 110 that can control, in response to receiving a request from the host 103, performance of a memory operation. The memory operation can be a memory operation to read data from a memory device 126, 128 or an operation to write data to a memory device 126, 128. In some embodiments, the central controller portion 110 can, in response to receiving a request from the host 103, control writing of multiple pages of data substantially simultaneously to the memory devices 126, 128.[0027] The central controller portion 110 can include a cache (e.g., the cache 212 illustrated in Figure 2, herein) to store data associated with performance of a memory operation and/or a security component (e.g., the security component 214 illustrated in Figure 2, herein) to encrypt data before the data is stored in the DRAM memory device 126, the low latency RAM memory device 128, and/or the cache. In some embodiments, the cache can also provide the central controller portion 110 with information related to access requests.
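A cache that both stores data and reports access-request information back to the central controller can be sketched as follows. This is a minimal direct-mapped, write-through model; all names are illustrative assumptions, and the disclosure does not specify the cache organization or write policy:

```python
class SimpleCache:
    """Hypothetical direct-mapped write-through cache: stores data in
    cachelines and reports hit/miss information per access. Names and
    the write-through policy are illustrative assumptions."""

    def __init__(self, num_lines: int, backing: dict) -> None:
        self.num_lines = num_lines
        self.lines = {}       # line index -> (address, data)
        self.backing = backing  # stands in for a memory device

    def write(self, address: int, data: bytes) -> str:
        idx = address % self.num_lines
        hit = idx in self.lines and self.lines[idx][0] == address
        self.lines[idx] = (address, data)
        self.backing[address] = data  # write-through to the memory device
        return "write_hit" if hit else "write_miss"

memory_device = {}
cache = SimpleCache(num_lines=4, backing=memory_device)
print(cache.write(0x10, b"a"))  # write_miss (cold line)
print(cache.write(0x10, b"b"))  # write_hit  (same address, same line)
print(memory_device[0x10])      # b'b'
```

The returned hit/miss strings model the "information related to access requests" that the metric logic would consume; a real controller would route these events to counters rather than return them.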
In some embodiments, in response to receiving a request from the host 103, data from the host 103 can be stored in cachelines of the cache. The data in the cache can be written to a memory device 126, 128. In some embodiments, the data can be encrypted using Advanced Encryption Standard (AES) encryption. For instance, the data can be encrypted using AES encryption before the data is stored in the cache and/or memory device 126, 128.[0028] The central controller portion 110 can include error correction code (ECC) encoding circuitry (e.g., the ECC encoding circuitry 216 illustrated in Figure 2, herein) to ECC encode the data and ECC decoding circuitry (e.g., the ECC decoding circuitry 218 illustrated in Figure 2, herein) to ECC decode the data. As used herein, the term “ECC encoding” can refer to encoding data by adding redundant bits to the data. As used herein, the term “ECC decoding” can refer to examining the ECC encoded data to check for any errors in the data. The ECC encoding circuitry can encode data that will be written to the DRAM memory device 126 and the low latency RAM memory device 128. However, this disclosure is not so limited. For example, the ECC encoding circuitry can encode data that will be written to a single memory device 126 or 128. In some embodiments, an error detected in the data can be corrected immediately upon detection. The ECC decoding circuitry can decode data that has been previously ECC encoded.[0029] In some embodiments, the controller 100 can comprise a back end portion 119 comprising a media controller portion and a physical (PHY) layer that couples the controller 100 to a plurality of memory ranks. As used herein, the term “PHY layer” generally refers to the physical layer in the Open Systems Interconnection (OSI) model of a computing system. The PHY layer may be the first (e.g., lowest) layer of the OSI model and can be used to transfer data over a physical data transmission medium.
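The idea of ECC encoding as "adding redundant bits" and correcting an error "immediately upon detection" can be illustrated with a toy Hamming(7,4) code. This is a sketch only; the disclosure does not specify which ECC scheme the encoding and decoding circuitry uses:

```python
def hamming74_encode(d: tuple) -> tuple:
    """Encode 4 data bits with 3 redundant parity bits (Hamming(7,4)).
    A toy illustration of ECC encoding; the actual scheme used by the
    controller is not specified in the disclosure."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over codeword positions 4, 5, 6, 7
    return (p1, p2, d1, p3, d2, d3, d4)

def hamming74_correct(c: list) -> tuple:
    """Locate and flip a single-bit error, then return the data bits."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return (c[2], c[4], c[5], c[6])

code = list(hamming74_encode((1, 0, 1, 1)))
code[4] ^= 1                    # inject a single-bit error
print(hamming74_correct(code))  # (1, 0, 1, 1) -- error corrected
```

Production controllers typically use wider SECDED or chip-kill codes over whole cachelines, but the mechanism (parity bits plus a syndrome that points at the failing position) is the same in principle.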
In some embodiments, the physical data transmission medium can be a plurality of channels 125-1, 125-2. As used herein, the term “memory ranks” generally refers to a plurality of memory chips (e.g., DRAM memory chips and/or FeRAM memory chips) that can be accessed simultaneously. A memory rank can be sixty-four (64) bits wide and each memory rank can have eight (8) pages. In some embodiments, a page size of a first type of memory device (e.g., DRAM memory device) 126 can be larger than a page size of the second type of memory device (e.g., low latency RAM memory device) 128.[0030] In some embodiments, the controller 100 can include a management unit 134 to monitor characteristics of the controller 100. The management unit 134 can include an I/O bus to manage out-of-band data, a management unit controller to execute instructions associated with monitoring the characteristics of the controller, and a management unit memory to store data associated with monitoring the characteristics of the controller 100. As used herein, the term “out-of-band data” generally refers to data transferred through a transmission medium that is different from the main transmission medium of a network. For example, out-of-band data can be data transferred to a network using a different transmission medium than the transmission medium used to transfer data within the network.[0031] Figure 2 illustrates a functional block diagram in the form of a controller 200 for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure. A controller 200 (i.e., a memory controller) is configured to manage a first type of memory device (e.g., DRAM memory device) 226-1, . . ., 226-N (individually or collectively referred to as the first type of memory device 226) that operates according to a first set of timing characteristics.
In some embodiments, controller 200 can be configured to manage a first type of memory device and a second type of memory device (e.g., low latency RAM memory device) 228-1, . . ., 228-N (individually or collectively referred to as the second type of memory device 228) that operates according to a second set of timing characteristics. In some embodiments, the first set of timing characteristics can be a tRAS of the DRAM memory device 226 and the second set of timing characteristics can be a tRAS of the low latency RAM memory device 228. In some embodiments, the first set of timing characteristics can correspond to a timing that is greater than the second set of timing characteristics. In some embodiments, a controller 200 can include a front end portion 204, a central controller portion 210, and a back end portion 219.[0032] As shown in Figure 2, a front end portion 204 can include an interface 206 that includes multiple I/O lanes 202-1, 202-2, . . ., 202-N (individually or collectively referred to as I/O lanes 202), as well as circuitry 208 to manage the interface 206. The interface 206 can be a peripheral component interconnect express (PCIe) 5.0 interface coupled to the I/O lanes 202. In some embodiments, the controller 200 can receive access requests involving at least one of the cache 212, the first type of memory device 226, and/or the second type of memory device 228 via the PCIe 5.0 interface 206 according to a CXL protocol. The interface 206 can receive data from a host (e.g., the host 103 shown in Figure 1) through the I/O lanes 202. The circuitry 208 may use CXL protocols to manage the interface 206. [0033] A central controller portion 210 can be configured to cause performance of a memory operation. The central controller portion 210 can include the cache 212 to buffer data associated with performance of the memory operation and provide the central controller portion 210 with information related to access requests.
The cache 212 can be a set-associative cache including multiple cachelines. In some embodiments, the cache 212 can be a fully associative cache. The cacheline size can be equal to the controller 200 read granularity. Therefore, each cacheline can include 256 bytes of data. In some embodiments, each cacheline can comprise 512 bytes of data.[0034] Read and write requests of CXL memory systems can be 64 bytes in size. Therefore, data entries in the cache 212 can have 64 bytes of data. Each cacheline can comprise 256 bytes. Therefore, multiple 64 byte requests can be stored in each cacheline. In response to a request from the host, the controller 200 can write 256 bytes of data to a memory device 226, 228. In some embodiments, the 256 bytes of data can be written in 64 byte chunks.[0035] As shown in Figure 2, a central controller portion 210 can include a security component 214 to encrypt data before storing the data in the DRAM device 226 or low latency RAM memory device 228. As stated before, the security component 214 can use an AES encryption to encrypt the data. In some embodiments, the security component 214 may encrypt data that is written to the low latency RAM memory device 228 but may not encrypt the data that is written to the DRAM memory device 226. The data written to the low latency RAM memory device 228 may be encrypted because the low latency RAM memory device 228 can have security deficiencies that the DRAM memory device 226 does not have. The security component 214 can be bypassed when it is not used, such as when data is being written to the DRAM memory device 226. In some embodiments, the security component 214 can be enabled or disabled. For example, the security component 214 can be enabled when writing data to a persistent memory device, such as a low latency RAM memory device 228.[0036] As shown in Figure 2, the central controller portion 210 can include error correction code (ECC) circuitry to ECC encode the data and ECC decode the data.
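The 64-byte-request-to-256-byte-cacheline relationship described above can be illustrated with a short sketch. The function name and toy data are assumptions for illustration; only the 64-byte and 256-byte granularities come from the description.

```python
CACHELINE = 256   # bytes per cacheline (the controller read granularity)
REQUEST = 64      # bytes per CXL read/write request

def split_into_chunks(cacheline: bytes, chunk: int = REQUEST):
    """Split one 256-byte cacheline into the 64-byte chunks used for writes."""
    assert len(cacheline) == CACHELINE
    return [cacheline[i:i + chunk] for i in range(0, len(cacheline), chunk)]

line = bytes(range(256))
chunks = split_into_chunks(line)
assert len(chunks) == CACHELINE // REQUEST   # four 64-byte entries per line
assert all(len(c) == REQUEST for c in chunks)
assert b"".join(chunks) == line              # chunks reassemble the cacheline
```

This shows why multiple 64-byte requests fit in each cacheline: four 64-byte entries fill one 256-byte line exactly.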
In some embodiments, the central controller portion 210 can implement low power chip kill (LPCK) error correction. As used herein, the term “chip kill” generally refers to a form of error correction that protects memory systems (e.g., the computing system 101 shown in Figure 1) from any single memory chip failure as well as a multi-bit error from any portion of a single memory chip. One approach for chip kill protection is an on-the-fly correction implementation. On-the-fly correction can form a plurality of codewords out of four (4)-bit symbols of each of a plurality of die (e.g., memory chips). For example, if there are eleven (11) die each containing 4 separate bit symbols, with each bit symbol containing 4 bits, the 11 die can form 4 codewords each with 11 separate bit symbols comprising a total of forty-four (44) bits per codeword. [0037] In some embodiments, a first codeword can comprise the first bit symbol of each die, a second codeword can comprise the second bit symbol of each die, a third codeword can comprise the third bit symbol of each die, and a fourth codeword can comprise the fourth bit symbol of each die. In other words, the eight (8) data bit symbols and three (3) parity bit symbols of a codeword can be stored in eleven (11) die. Eight (8) of the 11 die can contain data bit symbols and the three (3) remaining die of the 11 die can contain parity bit symbols. In some embodiments, the data bit symbols and the parity bit symbols can be written or read concurrently from the 11 die by the ECC encoding circuitry 216 and the ECC decoding circuitry 218. If every bit symbol in a die fails, only the bit symbols from that die in the codeword will fail.
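The codeword interleaving described above can be sketched to show why a whole-die failure costs each codeword only one symbol. The toy symbol encoding (a `(die, slot)` tuple standing in for a 4-bit value) is an assumption for visibility; the 11-die, 4-symbol, 44-bit layout is taken from the description.

```python
NUM_DIE = 11          # 8 data die + 3 parity die
SYMBOLS_PER_DIE = 4   # four 4-bit symbols per die
BITS_PER_SYMBOL = 4

def form_codewords(die_symbols):
    """die_symbols[d][k] is the k-th 4-bit symbol of die d.
    Codeword k collects the k-th symbol from every die."""
    return [[die_symbols[d][k] for d in range(NUM_DIE)]
            for k in range(SYMBOLS_PER_DIE)]

# Toy contents: each symbol records its (die, slot) so the layout is visible.
die = [[(d, s) for s in range(SYMBOLS_PER_DIE)] for d in range(NUM_DIE)]
codewords = form_codewords(die)

assert len(codewords) == 4                                  # four codewords
assert all(len(cw) == NUM_DIE for cw in codewords)          # 11 symbols each
assert NUM_DIE * BITS_PER_SYMBOL == 44                      # 44 bits per codeword
# A whole-die failure corrupts exactly one symbol in each codeword:
failed_die = 7
assert all(sum(sym[0] == failed_die for sym in cw) == 1 for cw in codewords)
```

Because each codeword holds exactly one symbol per die, a single failed die never corrupts more than one symbol per codeword, which is what makes the loss correctable.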
This allows memory contents to be reconstructed despite the complete failure of one die.[0038] As shown in Figure 2, the controller 200 can include a back end portion 219, including a media controller portion 220 comprising a plurality of media controller portions and a physical (PHY) layer portion 222 comprising a plurality of PHY layers 224-1, 224-2, 224-N, . . ., 224-(N+1) (individually or collectively referred to as PHY layer 224). In some embodiments, the back end portion 219 is configured to couple the PHY layer portion 222 to a plurality of memory ranks 230-1, . . ., 230-N (individually or collectively referred to as memory ranks 230) of a first memory device 226 and a plurality of memory ranks 232-1, . . ., 232-M (individually or collectively referred to as memory ranks 232) of a second memory device 228-1, . . ., 228-N (individually or collectively referred to as second memory device 228). The media controller portion 220 can implement both an open-page policy and a closed-page policy. As used herein, the term “open-page policy” generally refers to a policy which allows a controller (e.g., media controller portion 220) to leave a page of memory open for a certain amount of time after a read operation or a write operation is performed. As used herein, the term “closed-page policy” generally refers to a policy that ensures that a page of memory is closed immediately after a read operation or a write operation is performed. In some embodiments, the low latency RAM memory device 228 can implement a closed-page policy with an additional requirement that the tRAS and other timings of the low latency RAM memory device 228 are different from DRAM timings.[0039] In embodiments where LPCK error correction is used, the media controller portion 220 can be a single media controller portion 220. When implementing LPCK error correction, a plurality of channels 225-1, 225-2, 225-N, . .
., 225-(N+1) (individually or collectively referred to as the plurality of channels 225) can be driven concurrently to write data to the DRAM memory device 226 and/or the low latency RAM memory device 228. In some embodiments, instead of using a single media controller portion 220, multiple media controller portions can be used to drive the plurality of channels 225 in the LPCK architecture. When multiple media controller portions are used to drive the channels 225 concurrently, the media controller portions are utilized substantially simultaneously.[0040] As used herein, the term “substantially” intends that the characteristic need not be absolute, but is close enough so as to achieve the advantages of the characteristic. For example, “substantially simultaneously” is not limited to operations that are performed absolutely simultaneously and can include timings that are intended to be simultaneous but due to manufacturing limitations may not be precisely simultaneous. For example, due to read/write delays that may be exhibited by various interfaces (e.g., LPDDR5 vs. PCIe), media controller portions that are utilized “substantially simultaneously” may not start or finish at exactly the same time. For example, the multiple controllers can be utilized such that they are writing data to the memory devices at the same time regardless of whether one of the media controller portions commences or terminates prior to the other.[0041] Each of the plurality of media controller portions can receive a same command and address and drive the plurality of channels 225 substantially simultaneously.
By using the same command and address for the plurality of media controller portions, each of the plurality of media controller portions can utilize the plurality of channels 225 to perform the same memory operation on the same plurality of memory cells.[0042] A PHY layer portion 222 can include multiple PHY layers 224, and the media controller portion 220 can be configured to drive the channels 225 that couple the PHY layers 224 to the memory ranks 230, 232. In some embodiments, the memory ranks 230, 232 can be DRAM memory ranks 230 and/or low latency memory ranks 232. In some embodiments, the controller 200 can be coupled to the memory ranks 230, 232 through channels 225 coupled to the back end portion 219 and each of the channels 225 is coupled to four (4) memory ranks 230, 232.[0043] The controller 200 can include a management unit 234 configured to monitor characteristics of the controller 200. In some embodiments, the management unit 234 includes an I/O bus 238 to manage out-of-band data, a management unit controller 240 to execute instructions associated with monitoring the characteristics of the controller 200, and a management unit memory 242 to store data associated with monitoring the characteristics of the controller 200. An endpoint of the management unit 234 can be exposed to the host system (e.g., the host 103 shown in Figure 1) to manage data. In some embodiments, the characteristics monitored by the management unit 234 can include a voltage supplied to the controller 200 or a temperature of the central controller portion 210, or both. Further, the management unit 234 can include an advanced high-performance bus (AHB) interconnect 236 to couple different components of the management unit 234.[0044] As stated above, the I/O bus 238 can be configured to transfer out-of-band data. In some embodiments, the I/O bus 238 can be a System Management Bus (SMBus).
As used herein, the term “SMBus” generally refers to a single-ended simple two-wire bus for the purpose of lightweight communication. Further, the management unit 234 can include circuitry to manage in-band data. As used herein, the term “in-band data” generally refers to data that is transferred through the main transmission medium within a network, such as a local area network (LAN).[0045] The management unit 234 can include a management unit controller 240. In some embodiments, the management unit controller 240 can be a controller that meets the Joint Test Action Group (JTAG) standard and operates according to an Inter-Integrated Circuit (I2C) protocol, and auxiliary I/O circuitry. As used herein, the term “JTAG” generally refers to an industry standard for verifying designs and testing printed circuit boards after manufacture. As used herein, the term “I2C” generally refers to a serial protocol for a two-wire interface to connect low-speed devices like microcontrollers, I/O interfaces, and other similar peripherals in embedded systems. In some embodiments, the auxiliary I/O circuitry can couple the management unit 234 to the controller 200. Further, firmware for operating the management unit can be stored in the management unit memory 242. In some embodiments, the management unit memory 242 can be a flash memory such as flash NOR memory or other persistent flash memory device.[0046] Figure 3 illustrates a functional block diagram in the form of another controller 300 for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure. A controller 300 is configured to manage a dynamic random access memory (DRAM) device 326-1, . . ., 326-N (individually or collectively referred to as DRAM memory device 326) having a first row address strobe timing (tRAS). In some embodiments, the controller 300 is configured to manage a dynamic random access memory (DRAM) device 326 and a low latency RAM device 328-1, . .
., 328-N (individually or collectively referred to as low latency RAM memory device 328) having a second tRAS. As shown in Figure 3, the controller 300 can include a front end portion 304, a central controller portion 310, and a back end portion 319.[0047] As shown in Figure 3, the front end portion 304 can include an interface 306 that includes multiple I/O lanes 302-1, 302-2, . . ., 302-N (individually or collectively referred to as I/O lanes 302) and circuitry 308 to manage the interface 306. In some embodiments, the quantity of I/O lanes 302 can be eight (8) I/O lanes and in other embodiments, the quantity of I/O lanes 302 can be sixteen (16) I/O lanes. Increasing the number of I/O lanes 302 can increase the amount of data transferred to and from the controller 300. In some embodiments, the I/O lanes are configured to transfer access requests to or from circuitry external to the controller at a rate of at least thirty-two (32) gigatransfers per second (GT/s). More specifically, each of the I/O lanes can be configured to transfer the requests at a rate of at least 32 GT/s. Therefore, increasing the number of I/O lanes can increase the amount of data written per second. Further, in some embodiments, the I/O lanes can be configured to transfer access requests to or from circuitry external to the controller according to a compute express link protocol.[0048] As shown in Figure 3, a central controller portion 310 that can cause performance of a read operation or a write operation, or both, can include a cache 312 to store data associated with the read operation or write operation, or both, and increase an efficiency of accessing the data. The cache 312 can be used to store data received from the host and write the stored data to the DRAM memory device 326 and/or the low latency RAM memory device 328.
In some embodiments, the cache 312 can increase the efficiency of accessing the data by allowing the low latency RAM memory device 328 to receive data in 64 byte blocks. A CXL memory system (e.g., computing system 101 of Figure 1) can request data at a granularity of 64 bytes but the data may be accessed at a granularity of 256 bytes. Storing data in the cache 312 can allow the low latency RAM memory device 328 to access data in 64 byte chunks because the cache 312 can send data in 64 byte chunks. Use of the cache 312 can also increase the efficiency of the memory system because the cache 312 can prefetch the data and store the data in multiple 64 byte blocks in the case of a cache miss. This can increase the efficiency of the CXL memory system because, instead of searching a separate memory device in the event of a cache miss, the data can be read from the cache 312 because the data was prefetched by the cache 312. Less time and energy may be used accessing the prefetched data than would be used if the memory system has to search for the data before accessing the data. In addition, efficiency can be increased when a subsequent search to the same address is made. For example, a search for data not in the cache memory 312 can yield a miss from a memory device. However, a subsequent search for the same data can generate a hit inside the cache memory 312 and thereby increase efficiency.[0049] As shown in Figure 3, the central controller portion 310 can include a security component 314 to encrypt the data before storing the data in the DRAM memory device 326 or the low latency RAM memory device 328. As stated above, the security component 314 can encrypt the data using AES encryption. In some embodiments, the data can bypass the security component 314 and avoid encryption. For example, when data is written from the host to the DRAM memory device 326, the data can bypass the security component and be written into the DRAM memory device 326 as unencrypted data.
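The encrypt-or-bypass decision just described can be sketched as a simple routing function. This is a hedged illustration only: the function names, the string device tags, and the XOR stand-in for AES are assumptions, not the actual implementation.

```python
KEY = 0x3C  # toy key for the XOR stand-in cipher

def security_component(data: bytes) -> bytes:
    # Placeholder for the AES encryption performed by the security component 314.
    return bytes(b ^ KEY for b in data)

def write_path(data: bytes, target: str) -> bytes:
    """Route data through the security component only for the device
    that needs it; DRAM-bound data bypasses encryption entirely."""
    if target == "low_latency_ram":
        return security_component(data)
    return data  # bypass: written to DRAM as unencrypted data

assert write_path(b"abc", "dram") == b"abc"               # bypassed
assert write_path(b"abc", "low_latency_ram") != b"abc"    # encrypted
```

Routing on the target device is what lets the controller save the power and latency of encryption when the destination does not require it.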
The data can bypass the security component when being written to the DRAM memory device 326 because the DRAM memory device 326 may not have the same vulnerabilities as the low latency RAM memory device 328. This can increase the efficiency of a CXL memory system because bypassing the security component 314 can decrease the power consumed and/or the time used to transfer data. Therefore, by engaging the security component 314 in circumstances when the security component 314 provides a more significant benefit and bypassing the security component 314 in circumstances where the security component 314 provides a less significant benefit, the efficiency of the memory system will increase.[0050] As shown in Figure 3, the central controller portion 310 can include ECC encoding circuitry 316-1, 316-2, 316-N, . . ., 316-(N+1) (individually or collectively referred to as ECC encoding circuitry 316) to ECC encode the data and ECC decoding circuitry 318-1, 318-2, 318-N, . . ., 318-(N+1) (individually or collectively referred to as ECC decoding circuitry 318) to ECC decode the data. In some embodiments, the central controller portion 310 can also include a plurality of redundant array of independent disks (RAID) components 344-1, 344-2, 344-N, . . ., 344-(N+1) (individually or collectively referred to as RAID components 344) to store the data. As used herein, the term “RAID components” generally refers to data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both. [0051] Each of the RAID components 344 can be coupled to different ECC encoding circuitry 316 and ECC decoding circuitry 318. In some embodiments, each of the RAID components 344 can correspond to one of the media controllers 321-1, 321-2, 321-N, . . ., 321-(N+1) (individually or collectively referred to as media controllers 321).
This allows a separate RAID component 344 and a separate media controller 321 to be dedicated to each of the channels 325-1, 325-2, . . ., 325-N, 325-(N+1). A RAID state machine can implement the functionality of the RAID components 344. By dedicating a separate RAID component 344 and a separate media controller 321 to each channel 325, each channel 325 can be driven individually and receive a command and address separate from those of other channels 325. In some embodiments, each media controller 321 executes commands independent of the other media controllers 321. This RAID architecture can provide more flexibility to the memory system in regard to how much data is written to a memory device 326, 328 and when the data is written to a memory device 326, 328 in comparison to the LPCK architecture. In some embodiments, a RAID component 344 can be striped across multiple channels 325. If a RAID component 344 is striped across multiple channels 325, a RAID state machine can be shared across multiple channels 325. This allows a RAID component 344 to drive a plurality of channels 325 substantially simultaneously.[0052] As shown in Figure 3, the controller 300 can include a back end portion 319 including a media controller portion 320 comprising a plurality of media controllers 321 and a physical (PHY) layer portion 322 comprising a plurality of PHY layers 324-1, 324-2, . . ., 324-(N+1) (individually or collectively referred to as PHY layers 324), wherein the back end portion 319 is configured to couple the PHY layer portion 322 to a plurality of memory ranks. In some embodiments, the memory ranks can include DRAM memory ranks 330-1, . . ., 330-N (individually or collectively referred to as DRAM memory ranks 330) and low latency memory ranks 332-1, . . ., 332-M (individually or collectively referred to as low latency memory ranks 332).
In some embodiments, the back end portion 319 can be connected to the plurality of memory ranks 330, 332 through the plurality of channels 325 and each of the plurality of channels 325 is coupled to five (5) memory ranks 330, 332.[0053] As stated above, each media controller 321 can correspond to a RAID component 344, as well as ECC encoding circuitry 316 and ECC decoding circuitry 318. Each media controller 321 can also correspond to one of the plurality of PHY layers 324. Each PHY layer 324 can be coupled to a DRAM memory device 326 or a low latency RAM memory device 328 through a channel 325. In some embodiments, each media controller 321 can execute commands independent of the other media controllers 321. Therefore, data can be transferred from a PHY layer 324 through a channel 325 to a memory device 326, 328 independent of other PHY layers 324 and channels 325. [0054] As shown in Figure 3, the controller 300 can include a management unit 334 configured to monitor a plurality of characteristics of the memory controller 300. The management unit 334 can include an I/O bus 338 to transfer out-of-band data, a microcontroller 340 to execute instructions associated with monitoring characteristics of the controller 300, and a management unit memory 342 to store data associated with monitoring the characteristics of the controller 300. The characteristics of the controller 300 that the management unit 334 can monitor can include, but are not limited to, the amount of voltage being applied to the controller 300 and the temperature of the controller 300.[0055] Figure 4 illustrates a flow diagram of an example method 470 for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure.
The method 470 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.[0056] At block 471, the method 470 can include receiving signaling indicative of an access request involving either a first type of memory device or a second type of memory device. In some embodiments, the first type of memory device is one of a dynamic random access memory (DRAM) device or a low latency RAM memory device. In addition, the second type of memory device is the other of a low latency RAM memory device or dynamic random access memory (DRAM) device. The signaling can be sent from a host to the central controller portion. In some embodiments, the central controller portion can receive the signaling at a rate of 32 GT/s.[0057] At block 472, the method 470 can include performing memory operations on a cache of a central controller portion, responsive to the receipt of the signaling indicative of the access requests. In some embodiments, the cache can store data related to memory operations. For example, a controller can access the cache to determine if the requested data is stored in the cache. If the data is stored in the cache, then the cache can process the request and perform the memory operation.
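The cache lookup at block 472 can be sketched as follows: serve a request from the cache when possible, otherwise fetch from the memory device and fill the cacheline. The class name, dictionary model, and `(data, status)` return shape are assumptions for illustration.

```python
class CacheController:
    """Minimal sketch of the lookup described above: serve from the cache
    on a hit, otherwise read the memory device and fill the cacheline."""
    def __init__(self, device):
        self.device = device   # backing memory device contents (addr -> data)
        self.cache = {}

    def read(self, addr):
        if addr in self.cache:             # hit: no device access needed
            return self.cache[addr], "hit"
        data = self.device[addr]           # miss: read from the memory device...
        self.cache[addr] = data            # ...and write it into the cache
        return data, "miss"

cc = CacheController({0x0: b"data0"})
assert cc.read(0x0) == (b"data0", "miss")  # first access goes to the device
assert cc.read(0x0) == (b"data0", "hit")   # repeat access is served from cache
```

The two assertions mirror the document's point that a subsequent search for the same address generates a hit inside the cache.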
In some embodiments, data can be written from the host to a cache before writing the data to a memory device. That is, data can be accessed from the cache without using the memory device. In some embodiments, accessing data from a cache can increase the speed of accessing data, as compared to accessing data from a memory device. For example, data can be written to the cache when a memory operation is performed after a signal indicative of an access request is received. Similarly, data can be read from the cache when a memory operation is performed after a signal indicative of an access request is received. That is, the cache can include cache controller logic to send a read command to the memory device and write the data from the memory device to the cache as a result of a signal indicative of an access request. In addition, the cache controller logic can send a read command to the cache and write the data from the cache to the memory device as a result of a signal indicative of an access request.[0058] At block 473, the method 470 can include collecting, for a threshold amount of time, information associated with performing the memory operations on the cache. For example, the cache can collect information including metrics collected through a metric logic in the central controller portion. The metric logic can be used to monitor the behavior of the computing system as it relates to memory operations (e.g., requests to read/write to memory). For example, the metric logic can include multiple counters to collect metrics, such as the number of cacheline hits, cacheline misses, cacheline evictions without writeback, cacheline replacements with writeback, cache read accesses, and/or cache write accesses. In some embodiments, the cache can include a cache memory to store cacheline data. As used herein, a “hit” refers to the moment when the requested data can be found in the element (e.g., cache) being searched.
As used herein, a “miss” refers to the moment when the requested data cannot be found in the element (e.g., cache) being searched.[0059] For instance, the metric logic can include a read hit counter to count the number of cacheline hits when reading data, a write hit counter to count the number of cacheline hits when writing data, a read miss counter to count the number of cacheline misses when reading data, a write miss counter to count the number of cacheline misses when writing data, a replacement counter to count the number of cacheline evictions without writebacks, a writeback counter to count the number of cacheline replacements with writebacks, a total read access counter to count the number of cache read accesses, and/or a total write access counter to count the number of cache write accesses. In some embodiments, the metric logic can collect a count for each set in the set associative cache. The metric logic can use the counts collected for each set in the set associative cache to determine the most frequently accessed set. In some embodiments, determining the most frequently accessed set can assist in determining which characteristics of the computing system should be altered. [0060] In addition, the cache can collect information including load telemetry collected through a load telemetry logic in the central controller portion. The load telemetry logic can be used to calculate the read path loads and the write path loads that occur in the computing system by the host and/or memory device. In some embodiments, the load telemetry logic can include multiple telemetry counters to count the write path loads and read path loads that occur in the computing system. The load telemetry logic can determine the load value based on the load telemetry received (e.g., the write path loads and read path loads). The load value can be determined by the average value over the time it takes to reduce oscillations of traffic from the host.
The telemetry ratio is calculated (e.g., measured) by dividing the load telemetry of the telemetry counter by the load value.[0061] In some embodiments, the cache can store the collected information in a storage area. For example, the cache can store, in the storage area, the load telemetry collected by the load telemetry logic and the metrics collected by the metric logic. In some embodiments, the load telemetry logic can store the load telemetry count from the telemetry counters and the telemetry ratio to the storage area after a threshold amount of time. In some embodiments, the metric logic can store the count from each respective counter to the storage area. The metric logic can cause each counter to store respective counts to the storage area after a threshold amount of time. In some embodiments, the count for each respective counter of the metric logic can be reset to an initial value after a metric storage event. In addition, an interrupt request can be sent to the interconnect to alert the interconnect that a new metric is stored in the storage area, after the metric storage event.[0062] In some embodiments, the storage area can include multiple rows to store counts for each counter of the metric logic and each telemetry counter of the load telemetry logic. That is, each counter and telemetry counter can have a designated row to store respective counts to in the storage area. In some embodiments, each row can include multiple slots. The metric logic can store a count to a different slot within a designated row after each metric storage event. Similarly, the load telemetry logic can store the count from the telemetry counter to a different slot within a designated row after a threshold amount of time. Storing each count to a different slot within a designated row can allow the computing system to track the behavior of memory operations over time.
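The counter, storage-event, and telemetry-ratio behavior described above can be sketched as follows. The class name, counter names, and the list-per-row storage model are illustrative assumptions; the reset-after-storage-event and one-slot-per-event behavior follow the description.

```python
class MetricLogic:
    """Sketch of the metric counters and the row/slot storage area."""
    COUNTERS = ["read_hit", "write_hit", "read_miss", "write_miss",
                "replacement", "writeback", "total_read", "total_write"]

    def __init__(self):
        self.counts = {name: 0 for name in self.COUNTERS}
        self.storage = {name: [] for name in self.COUNTERS}  # one row per counter

    def record(self, name):
        self.counts[name] += 1

    def storage_event(self):
        # Store each count to the next slot of its designated row, then
        # reset the counter to its initial value.
        for name, count in self.counts.items():
            self.storage[name].append(count)
            self.counts[name] = 0

def telemetry_ratio(load_telemetry: int, load_value: float) -> float:
    # Telemetry counter value divided by the averaged load value.
    return load_telemetry / load_value

m = MetricLogic()
m.record("read_hit"); m.record("read_hit"); m.record("read_miss")
m.storage_event()
m.record("read_hit")
m.storage_event()
assert m.storage["read_hit"] == [2, 1]   # one slot per storage event
assert m.counts["read_hit"] == 0         # counter reset after each event
assert telemetry_ratio(90, 45.0) == 2.0
```

Keeping one slot per storage event is what lets the system track how the counts evolve over time rather than only their running totals.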
In some embodiments, tracking the behavior of memory operations in the computing system can improve the computing system by guiding the alteration of at least one characteristic of an interface of the front end portion, a first type of memory device, a second type of memory device, a cache, or any combination thereof.[0063] Figure 5 is a block diagram illustrating a flow of data through a controller for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure. The bandwidths 556-1, 556-2, 556-3, 556-4, 556-5, 556-6, 556-7, 556-8, 556-9, 556-10, 556-11, 556-12 (individually or collectively referred to as bandwidth 556) of the I/O bus between components in the front end portion 504, the central controller portion 510, and the back end portion 519 of a controller are shown. As used herein, the term “bandwidth” generally refers to a maximum amount of data written from one component in a memory system to another component within the same memory system or external to the memory system in a given amount of time. [0064] As shown in Figure 5, the front end portion 504 can include circuitry 508 for managing an interface between the host and the front end portion 504. In some embodiments, the interface can be a PCIe 5.0 interface including either 8 I/O lanes or 16 I/O lanes. In some embodiments, each of the I/O lanes between the host and the front end portion 504 may have a bandwidth 556-1, 556-12 of 32 gigabytes per second (GB/s). [0065] The bandwidth 556-2, 556-12 of I/O circuitry between the front end portion 504 and the central controller portion 510 can be 32 GB/s. In some embodiments, the central controller portion 510 can include a cache 512, AES encryption circuitry 513, AES decryption circuitry 515, ECC encoder circuitry 516, and ECC decoder circuitry 518. As shown in Figure 5, data in the central controller portion 510 can be written from the cache to the AES encryption circuitry 513.
In some embodiments, the bandwidth 556-3 of the I/O circuitry from the cache 512 to the AES encryption circuitry 513 can be 32 GB/s. The data can travel from the AES encryption circuitry 513 to the ECC encoder circuitry 516. In some embodiments, the I/O circuitry between the AES encryption circuitry 513 and the ECC encoder circuitry 516 can have a bandwidth 556-4 of 32 GB/s. Further, the I/O circuitry between the AES decryption circuitry 515 and the ECC decoder circuitry 518 can have a bandwidth 556-9 of 32 GB/s.[0066] As shown in Figure 5, I/O circuitry coupling the central controller portion 510 and the back end portion 519 of the controller can have a bandwidth 556-5, 556-8 of 44 GB/s. The back end portion 519 can include a media controller portion 520 and a PHY layer portion 522. The PHY layer portion 522 can couple to a DRAM memory device and a low latency RAM memory device through a plurality of channels. In some embodiments, each of the plurality of channels can have a bus width of sixteen (16) bits and a bandwidth 556-6, 556-7 of 8 GB/s. Parity bits can consume 3/11 of the total bandwidth 556-6, 556-7 of a channel that connects the back end portion 519 to a DRAM memory device or a low latency RAM memory device. The remaining data throughput can travel at a speed of 64 GB/s, which matches the PCIe raw bandwidth for downstream data (e.g., 32 GB/s) added to upstream data (e.g., 32 GB/s). As used herein, the term “downstream data” can refer to data sent from a computer or network and the term “upstream data” can refer to data received by a computer or network.[0067] In some embodiments, downstream data can be data received by the controller and upstream data can be data sent from the controller.
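The figures above are internally consistent under one interpretation (an assumption, not stated explicitly in the disclosure): taking the two 44 GB/s paths 556-5 and 556-8 together as 88 GB/s of raw channel bandwidth, the 3/11 parity share leaves exactly the 64 GB/s of data throughput that matches the combined 32 GB/s downstream and 32 GB/s upstream PCIe raw bandwidth:

```python
from fractions import Fraction

raw_channel_bw = Fraction(44 + 44)   # GB/s: paths 556-5 and 556-8 combined (assumption)
parity_share = Fraction(3, 11)       # parity bits' share of channel bandwidth per [0066]
data_bw = raw_channel_bw * (1 - parity_share)
pcie_raw = 32 + 32                   # GB/s: downstream plus upstream PCIe raw bandwidth

assert data_bw == pcie_raw == 64     # 88 GB/s x 8/11 = 64 GB/s
```

Exact rational arithmetic (`Fraction`) is used so the 3/11 share divides evenly rather than accumulating floating-point error.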
In some embodiments, the bandwidth 556 requirements can be modified (e.g., increased or decreased) based on factors including, but not limited to, the efficiency of the bus (e.g., the PCIe bus) and/or the memory system, the cache hit rate, the efficiency of the media controller portion 520, the DRAM memory device bus turnaround cycle, and the DRAM memory device bus rank-to-rank timing (e.g., rank switching). As used herein, the term “turnaround cycle” generally refers to the amount of time it takes for a memory device to alternate between a read operation and a write operation. As used herein, the term “rank-to-rank timing” generally refers to the time period between completing a memory operation on a rank of a memory device and starting a memory operation on another rank of the memory device.[0068] Figure 6 illustrates a controller 600 for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure. The controller 600 can include PCIe I/O lanes 602-1, 602-2, 602-3, ..., 602-N (individually or collectively referred to as PCIe I/O lanes 602), PCIe clock and reset I/O lanes 603-1, 603-2, ..., 603-N (individually or collectively referred to as PCIe clock and reset I/O lanes 603), and SMBus I/O lanes 605-1, 605-2, 605-3 (individually or collectively referred to as SMBus I/O lanes 605). Further, the controller 600 can include a voltage input bus 607, a plurality of power management integrated circuit (PMIC) I/O lanes 609-1, 609-2, 609-3 (individually or collectively referred to as PMIC I/O lanes 609), channels 625-1, ..., 625-N (individually or collectively referred to as channels 625), a serial peripheral interface (SPI) 611, a JTAG bus 613, and a ground connection bus 615.[0069] As shown in Figure 6, the I/O lanes 602 can include PCIe RX (receiving) lanes and PCIe TX (transmitting) lanes. As stated above, the I/O lanes can write (e.g., transmit) data to a host and receive data from a host.
The PCIe clock and reset I/O lanes 603 can include at least one PCIe clock lane to determine the timing of data input and output to and from a memory system and at least one PCIe reset lane that can receive a signal to reset the memory system. Further, the SMBus I/O lanes 605 can include at least one SMBus clock lane to determine the timing of data input and output to and from the memory system, at least one SMBus data lane to write and receive data, and at least one SMBus reset lane to receive a signal to reset the memory system.[0070] As shown in Figure 6, the PMIC I/O lanes 609 can include a lane to receive a VDDP voltage to stabilize a clock of the memory system at high frequencies, a lane to receive data from a low power double data rate 5th generation (LPDDR5) memory component, and a lane to utilize an I2C protocol to connect low-speed memory components. Further, as shown in Figure 6, the controller 600 can include channels 625 to couple the controller to at least one DRAM memory device and/or at least one low latency RAM memory device, an SPI lane 611 used for short-distance communication, and a JTAG bus 613 to couple the controller 600 to an external memory component. Further, as shown in Figure 6, the controller 600 can include a voltage input bus 607 to receive a voltage supply and a ground connection bus 615 to ground the controller 600. [0071] Figure 7 illustrates a functional block diagram in the form of a cache 712 for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure. A cache 712 can be included in a central controller portion (e.g., central controller portion 110 of Figure 1). The cache 712 can include a cache controller logic and a cache memory. In some embodiments, the cache 712 can also provide the central controller portion with information related to performance of memory operations.
In some embodiments, data from a host (e.g., host 103 of Figure 1) can be stored in the cache memory included in cache 712 in response to receiving a signaling indicative of access requests from the host.[0072] In some embodiments, the cache 712 can include a metric logic 756 to collect metrics related to memory operations. That is, the cache controller logic of the cache 712 can include a metric logic 756 to collect metrics. For example, as data is read and/or written to the cache 712, the metric logic 756 can collect metrics related to cacheline hits, cacheline misses, cacheline evictions without writeback, cacheline replacements with writeback, cache read accesses, and/or cache write accesses. The metrics collected by the metric logic 756 can be used to track the behavior of the computing system. In some embodiments, understanding the behavior of the computing system related to memory operations can assist in determining which characteristic of the computing system should be altered.[0073] In some embodiments, the metric logic can include multiple counters to collect metrics related to memory operations. For example, the metric logic 756 can include at least one of a read hit counter, write hit counter, read miss counter, write miss counter, replacement counter, writeback counter, total read access counter, total write access counter, cache set read access counter, cache set write access counter, or any combination thereof to collect metrics related to memory operations. In some embodiments, the metric logic 756 can use a counter to count cacheline hits, cacheline misses, cacheline evictions without writeback, cacheline replacements with writeback, cache read accesses, and/or cache write accesses, for example. The metric logic 756 can store the count in the storage area 758. For instance, the metric logic 756 can store a count from each counter in the storage area 758 after each metric storage event.
The storage area 758 can be any type of volatile memory and/or non-volatile memory. For instance, the storage area can be random access memory (RAM), NOR flash, among other possibilities. In some embodiments, the counter can store the count as an absolute value and/or store the count as a percentage (e.g., a percentage of hits/misses over a total number of access requests).[0074] In some embodiments, each counter can store counts in a respective row 762-1, 762-R (individually or collectively referred to as row 762) of the storage area 758. That is, each counter can store counts in different rows of the rows 762. For example, the write hit counter can store counts in a first row (e.g., 762-1) and the read miss counter can store counts in a second row (e.g., 762-R). In some embodiments, each row 762 can include multiple slots 764-1, 764-2, 764-S (individually or collectively referred to as slot 764) to store a count after a metric storage event. For example, after a first metric storage event the metric logic 756 can store a first count from a first counter (e.g., read miss counter) in a first slot 764-1 of a first row (e.g., 762-1) and after a second metric storage event store a second count from the first counter (e.g., read miss counter) in a second slot 764-2 of the first row (e.g., 762-1). In some embodiments, each counter can reset to an initial value after each count is stored in the storage area 758. That is, after each metric storage event each counter can reset to an initial value.[0075] In some embodiments, a management unit controller (e.g., management unit controller 240 of Figure 2) can access data stored in the metric storage area 758 through the interconnect 736. The management unit controller can use the data received from the storage area 758 to determine if at least one characteristic of the computing system should be altered.
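The row-and-slot bookkeeping described above, including the reset of each counter after a metric storage event, can be sketched as a minimal software model. The wrap-around policy when a row's slots are exhausted is an assumption; the disclosure does not specify what happens then.

```python
class MetricStorageArea:
    """Sketch of storage area 758: one designated row per counter, with
    successive slots within the row for successive metric storage events."""

    def __init__(self, num_rows, slots_per_row):
        self.rows = [[None] * slots_per_row for _ in range(num_rows)]
        self.next_slot = [0] * num_rows

    def store(self, row_index, count):
        slot = self.next_slot[row_index]
        self.rows[row_index][slot] = count
        # Advance to the next slot; wrapping around is an assumption.
        self.next_slot[row_index] = (slot + 1) % len(self.rows[row_index])


class MetricCounter:
    """A counter that resets to its initial value after each storage event."""

    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

    def metric_storage_event(self, area, row_index):
        area.store(row_index, self.count)
        self.count = 0  # reset to an initial value per [0074]
```

Because each storage event lands in a fresh slot of the counter's designated row, the stored counts form a history that lets the system track the behavior of memory operations over time.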
For example, the metrics (e.g., counts) stored in the storage area can be used to determine if at least one of a characteristic of an interface in the front end portion, a first type of memory device, a second type of memory device, and/or the cache memories should be altered. If it is determined that a characteristic should be altered to improve the computing system performance, the management unit controller can cause the characteristic to be altered. For example, the management unit controller can cause the data transfer rate of the interface in the front end portion to be altered based on the received data from the storage area 758.[0076] In some embodiments, the cache 712 can include a load telemetry logic to calculate the load paths within the cache 712. That is, the cache controller logic of the cache 712 can include a load telemetry logic to calculate the load paths. For example, the cache controller logic of the cache 712 can include a requestor load telemetry 750-1 to calculate load requests from a host. In addition, the cache controller logic of the cache 712 can include a memory load telemetry 750-2 to calculate load requests from a memory device.[0077] For example, the requestor load telemetry 750-1 can receive a signaling indicative of access requests from a host. The signaling can cause a memory operation, such as writing data to the cache 712, to be performed. The requestor load telemetry 750-1 can use the input write path 752-1 to count the write path load requests received by the requestor load telemetry 750-1. In some embodiments, the count for the input write path 752-1 can be increased when a write access is observed on the bus. Similarly, the signaling can cause a memory operation, such as reading data from the cache 712, to be performed. The requestor load telemetry 750-1 can use the input read path 754-1 to count the read path load requests received by the requestor load telemetry 750-1.
In some embodiments, the count for the input read path 754-1 can be increased when a read access is observed on the bus.[0078] In some embodiments, the memory load telemetry 750-2 can receive a signaling indicative of access requests from a memory device. The signaling can cause a memory operation, such as writing data to or reading data from the cache 712, to be performed. The memory load telemetry 750-2 can use the input write path 752-2 to count the write path load requests and the input read path 754-2 to count the read path load requests received by the memory load telemetry 750-2. In some embodiments, the count for the input write path 752-2 and/or input read path 754-2 can be increased when a write access and/or read access is observed on the bus. [0079] In some embodiments, the memory load telemetry 750-2 can give an 8-bit value that represents the utilization of the memory load telemetry 750-2. The memory load telemetry 750-2 can calculate the load (e.g., telemetry ratio) by dividing the read telemetry count or the write telemetry count by the telemetry max value. Likewise, the requestor load telemetry 750-1 can give an 8-bit value that represents the utilization of the requestor load telemetry 750-1. The requestor load telemetry 750-1 can calculate the load (e.g., telemetry ratio) by dividing the read telemetry count or the write telemetry count by the telemetry max value. As used herein, the “telemetry max value” is the maximum number of accesses observed on the bus. In some embodiments, the telemetry max value can be a preset value. In other embodiments, the telemetry max value can be determined based on the number of accesses over a set time period.[0080] In some embodiments, the requestor load telemetry 750-1 and the memory load telemetry 750-2 can store the telemetry count and/or the telemetry ratio in the storage area 758. The telemetry count and/or telemetry ratio can be used to alter characteristics of the interface in the front end portion to improve the computing system.
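The telemetry ratio calculation in [0079] can be sketched as below. Scaling the ratio onto an 8-bit range is an assumption about how the 8-bit utilization value is derived; the disclosure states only that the read or write telemetry count is divided by the telemetry max value.

```python
def telemetry_ratio(telemetry_count, telemetry_max, bits=8):
    """Divide the read or write telemetry count by the telemetry max value
    (the maximum number of accesses observed on the bus), then scale the
    result onto an n-bit utilization code (scaling is an assumption)."""
    max_code = (1 << bits) - 1          # 255 for an 8-bit value
    ratio = telemetry_count / telemetry_max
    return min(int(ratio * max_code), max_code)  # clamp to the n-bit range
```

For example, under this sketch a count of 64 against a telemetry max value of 256 yields a utilization code of 63 (about one quarter of the 8-bit range), and a count equal to the max value saturates at 255.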
For example, the management unit controller can receive telemetry count and/or telemetry ratio data stored in the storage area 758 via the interconnect 736.[0081] In some embodiments, the cache 712 can include and/or can be coupled to a buffer such as a first-in-first-out (FIFO) buffer. The buffer, such as a FIFO buffer, can include buffer circuitry such as FIFO buffer circuitry. The buffer circuitry can perform various operations such as operations associated with metrics and/or load telemetry. For instance, the management unit controller or another controller/logic can monitor a quantity of information (e.g., collected metrics and collected load telemetry) written to the FIFO buffer to determine whether the FIFO buffer contains greater than a threshold quantity of information and/or is full (e.g., can store no more additional metrics and/or load telemetry). Responsive to a determination that the FIFO buffer contains greater than a threshold quantity of information and/or is full, a flag (e.g., an overflow flag) or another type of indicator can be triggered. Triggering of the indicator such as the flag can occur in conjunction with and/or cause an interrupt request (IRQ) to be sent. For instance, an interrupt request can be sent to a host. Triggering of the indicator and/or sending the interrupt request can occur in conjunction with and/or cause information such as collected metrics and/or collected load telemetry to be removed from the FIFO buffer. For instance, a last entry and/or most recent metrics and/or load telemetry stored in the FIFO buffer can be removed such that the FIFO buffer no longer satisfies the threshold quantity of information and/or no longer is full. In some embodiments, the above approaches to a “full” condition of a FIFO buffer can be applied to metrics and/or load telemetry stored in a storage area 758. For instance, a first FIFO buffer can be associated with metrics and a second FIFO buffer can be associated with load telemetry.
However, in some embodiments a FIFO buffer can be associated with both metrics and load telemetry.[0082] In some embodiments, the management unit controller can use the telemetry count and/or telemetry ratio to determine if at least one characteristic of the computing system (e.g., a characteristic of the interface in the front end portion, a characteristic of the first type of memory device, a characteristic of the second type of memory device, a characteristic of the cache memories) should be altered to improve the performance of the computing system. For example, the management unit controller can alter at least one of a characteristic of the interface in the front end portion, a characteristic of the DRAM memory device, a characteristic of the low latency RAM memory device, and/or a characteristic of the cache 712 based on collected metrics and collected load telemetry received from the storage area 758.[0083] As described herein, collected metrics and collected load telemetry from the metric logic and the load telemetry logic are stored in a storage area 758. As such, if the metric logic and/or load telemetry logic overflow, the information collected by the metric logic and/or load telemetry logic can remain unaffected because the information is stored in the storage area 758.[0084] As described herein, a controller can be configured to manage a first type of memory device. In other embodiments, the controller can be configured to manage a first type of memory device and a second type of memory device. In some embodiments, the first type of memory device can be a dynamic random access memory (DRAM) device and the second type of memory device can be a low latency RAM memory device. The controller can comprise a front end portion including an interface that includes a plurality of input/output (I/O) lanes and circuitry to manage the interface.
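The FIFO “full” handling described in [0081] above (an overflow flag, an interrupt request, and removal of the most recent entry so the buffer no longer satisfies the threshold) might look like the following sketch. The class name, the callback standing in for the IRQ, and the choice to drop exactly one entry are assumptions.

```python
from collections import deque

class TelemetryFifo:
    """Minimal model of a FIFO buffer for collected metrics/load telemetry.
    When the buffer satisfies its threshold, an overflow flag is raised,
    an interrupt request is sent (e.g., to a host), and the most recent
    entry is removed so the threshold is no longer satisfied."""

    def __init__(self, threshold, send_irq):
        self.buf = deque()
        self.threshold = threshold
        self.overflow_flag = False
        self.send_irq = send_irq    # callback standing in for the IRQ path

    def push(self, entry):
        self.buf.append(entry)
        if len(self.buf) >= self.threshold:   # threshold/full check
            self.overflow_flag = True         # trigger the indicator
            self.send_irq()                   # send the interrupt request
            self.buf.pop()                    # remove the most recent entry
            self.overflow_flag = False        # no longer full
```

A usage example: with a threshold of 2, the second `push` triggers one interrupt and leaves a single entry in the buffer, matching the described recovery from the full condition.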
In some embodiments, the plurality of I/O lanes are configured to transfer access requests to or from circuitry external to the controller according to a compute express link protocol.[0085] The controller can also include a central controller portion configured to perform memory operations in response to receiving a signaling indicative of access requests from the host. The central controller portion can include a cache 712 to store data associated with the performance of the memory operations. The central controller portion can also include a metric logic 756 and a load telemetry logic (e.g., requestor load telemetry 750-1 and/or memory load telemetry 750-2). The metric logic 756 can be configured to collect metrics related to performance of a memory operation. The load telemetry logic (e.g., requestor load telemetry 750-1 and/or memory load telemetry 750-2) can be configured to collect load telemetry (e.g., requestor load telemetry 750-1 and/or memory load telemetry 750-2) associated with performance of a memory operation within a threshold time. The central controller portion can also include a storage area 758 to store the collected metrics and the collected load telemetry.[0086] In some embodiments, the controller can include a peripheral component interconnect express (PCIe) 5.0 interface coupled to the plurality of I/O lanes, wherein the controller is to receive access requests involving at least one of the cache, the first type of memory device, or the second type of memory device, or any combination thereof, via the PCIe 5.0 interface according to a compute express link protocol.[0087] In some embodiments, the metric logic 756 can include a plurality of counters. The metric logic 756 can be configured to collect, within a threshold amount of time, metrics related to memory operations using the plurality of counters. 
The plurality of counters can comprise at least one of a read hit counter, write hit counter, read miss counter, write miss counter, replacement counter, writeback counter, total read access counter, total write access counter, cache set read access counter, cache set write access counter, or any combination thereof. In some embodiments, the metric logic 756 can collect counts from each set of a set associative cache to determine the most frequently accessed set. [0088] Figure 8 illustrates a functional block diagram in the form of a cache 812-1, 812-B for managing metrics and telemetry in accordance with a number of embodiments of the present disclosure. In some embodiments, a computing system can include a host and a controller coupled to the host. The controller can be configured to manage a dynamic random access memory (DRAM) device and a low latency RAM memory device. In some embodiments, the controller can comprise a front end portion, comprising an interface configured to couple the controller to the host through a plurality of input/output (I/O) lanes and circuitry to manage the plurality of I/O lanes, and a central controller portion configured to, in response to receiving a request from the host, perform memory operations.[0089] In some embodiments, the central controller portion can comprise a plurality of cache memories 812 including a first sub-cache 812-1 and a second sub-cache 812-B. Each cache 812-1, 812-B of the plurality of cache memories 812 can comprise a plurality of counters to perform respective counts related to memory operations of the cache 812-1, 812-B. In some embodiments, the first sub-cache 812-1 and the second sub-cache 812-B can include a cache memory and a cache controller logic. The cache memory can be used to store cachelines and the cache controller logic can include metric logic 856-1, 856-B and a load telemetry logic.
That is, a cache controller logic of the first sub-cache 812-1 and the second sub-cache 812-B can include a metric logic 856-1, 856-B and a load telemetry logic (e.g., requestor load telemetry 850-1, 850-1B and/or memory load telemetry 850-2, 850-2B) to collect metrics and load telemetry, respectively. In some embodiments, the metric logic 856-1, 856-B can collect, within a threshold amount of time, metrics related to the memory operations using the plurality of counters. In addition, the load telemetry logic (e.g., requestor load telemetry 850-1, 850-1B and/or memory load telemetry 850-2, 850-2B) can collect, within the threshold amount of time, load telemetry associated with performing the memory operations using a plurality of load telemetry counters. In some embodiments, the load telemetry logic (e.g., requestor load telemetry 850-1, 850-1B and/or memory load telemetry 850-2, 850-2B) can calculate a load path of the memory operation of the DRAM memory device or the low latency RAM memory device. The load telemetry logic (e.g., requestor load telemetry 850-1, 850-1B and/or memory load telemetry 850-2, 850-2B) and the metric logic 856-1, 856-B can store the collected count to a storage area 858 after the threshold amount of time has lapsed.[0090] For instance, the central controller portion can also comprise a storage area 858 to store the collected metrics and the collected load telemetry. In some embodiments, a metric logic 856-1, 856-B can initiate a metric storage event to store counts from a plurality of counters in the storage area 858. The metric logic 856-1, 856-B can initiate a metric storage event after a threshold amount of time has passed. In some embodiments, the load telemetry logic can store counts from a plurality of telemetry counters to the storage area 858 after a threshold amount of time.
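The split-cache arrangement of Figure 8, where each sub-cache's metric logic and load telemetry logic store their counts to the shared storage area 858 after the threshold amount of time, can be sketched as follows. The class and field names are hypothetical, and a single count per kind stands in for the full sets of hardware counters.

```python
class SubCache:
    """Illustrative sub-cache (e.g., 812-1 or 812-B) whose controller logic
    holds a metric count and a load telemetry count."""

    def __init__(self, name):
        self.name = name
        self.metric_count = 0
        self.telemetry_count = 0

    def record_access(self):
        # In hardware, distinct counters would track hits, misses, and
        # read/write path loads; one count per kind is used here.
        self.metric_count += 1
        self.telemetry_count += 1

    def flush(self, storage_area):
        # After the threshold amount of time, store counts to the shared
        # storage area and reset to an initial value.
        storage_area.append((self.name, "metrics", self.metric_count))
        storage_area.append((self.name, "telemetry", self.telemetry_count))
        self.metric_count = 0
        self.telemetry_count = 0


storage_858 = []  # stands in for the shared storage area 858
sub_caches = [SubCache("812-1"), SubCache("812-B")]
for sc in sub_caches:
    sc.record_access()
    sc.flush(storage_858)
```

Each sub-cache keeps its own metric and telemetry state (one metric logic and one load telemetry logic per sub-cache, as in [0094]), while all collected information lands in the one shared storage area for the management unit controller to read.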
In some embodiments, a management unit controller can use an interconnect 836 to read the collected metrics and the collected load telemetry from the storage area 858. The management unit controller can, based on the stored metrics and load telemetry, alter at least one of a characteristic of the interface in the front end portion, a characteristic of the DRAM memory device, a characteristic of the low latency RAM memory device, a characteristic of the cache 812-1, 812-B, or any combination thereof. In some embodiments, the management unit controller can alter a data transfer rate of the interface in the front end portion based on the stored metrics and the collected load telemetry in the storage area.[0091] In some embodiments, the metric logic 856-1, 856-B can include a memory operation hit counter configured to increase when a memory operation hit is detected and a memory operation miss counter configured to increase when a memory operation miss is detected. In some embodiments, each counter of the plurality of counters can be configured to reset to an initial value after the respective counts are stored using the storage area 858 and/or after a threshold amount of time has elapsed. In some embodiments, the storage area 858 can include a plurality of rows, where each of the plurality of rows includes a plurality of slots. Each counter of the plurality of counters can store respective counts in a respective row of the plurality of rows. That is, counts from each counter can be stored in a different row.[0092] In some embodiments, the central controller can improve at least one characteristic of the computing system by receiving, by a central controller portion of a controller from a front end portion of the controller, a signaling indicative of access requests involving either a first type of memory device or a second type of memory device.
In some embodiments, the signaling indicative of the access request can be received at a rate of thirty-two gigatransfers per second or greater.[0093] A metric logic 856-1, 856-B can collect metrics related to memory operations received by the cache 812-1, 812-B and a load telemetry logic (e.g., requestor load telemetry 850-1, 850-1B and/or memory load telemetry 850-2, 850-2B) can collect load telemetry related to memory operations received by the cache 812-1, 812-B. In some embodiments, the metric logic 856-1, 856-B and the load telemetry logic (e.g., requestor load telemetry 850-1, 850-1B and/or memory load telemetry 850-2, 850-2B) can store, in a storage area 858, the metrics and load telemetry related to memory operations to alter at least one characteristic of the computing system. That is, metrics and load telemetry can be used to alter a characteristic of an interface of the front end portion, a characteristic of the first type of memory device, a characteristic of the second type of memory device, and/or a characteristic of the cache 812-1, 812-B.[0094] As described herein, in some embodiments, the cache can be split into at least two sub-cache memories 812-1, 812-B, wherein each sub-cache of the at least two sub-cache memories comprises a respective metric logic and a respective load telemetry logic. For instance, in an example having a total of two sub-caches, there can be a total of two metric logic 856-1, 856-B and two load telemetry logic (e.g., requestor load telemetry 850-1, 850-1B and/or memory load telemetry 850-2, 850-2B) such that each of the two sub-caches has a respective metric logic and a respective load telemetry logic.
Stated differently, in some embodiments a total number of sub-caches can be equal to a total number of metric logic and equal to a total number of load telemetry logic.[0095] In some embodiments, each of the cache controller logic of the sub-caches 812-1, 812-B can be accessed substantially concurrently to collect the information (e.g., metrics and load telemetry) related to memory operations. As used herein, the term “substantially” intends that the characteristic need not be absolute, but is close enough so as to achieve the advantages of the characteristic. For example, “substantially concurrently” is not limited to operations that are performed absolutely concurrently and can include timings that are intended to be concurrent but due to manufacturing limitations may not be precisely concurrent. For example, due to read/write delays that may be exhibited by various interfaces and/or buses, accesses to the cache during an access request that are performed “substantially concurrently” may not start or finish at exactly the same time.[0096] The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures may be identified by the use of similar digits. For example, 104 may reference element “04” in Figure 1, and a similar element may be referenced as 204 in Figure 2. A group or plurality of similar elements or components may generally be referred to herein with a single element number. For example, a plurality of reference elements 216-1 to 216-N may be referred to generally as 216. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure.
In addition, the proportion and/or the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present disclosure and should not be taken in a limiting sense.[0097] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. [0098] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure must use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
Apparatuses and methods can be related to generating an asynchronous process topology in a memory device. The topology can be generated based on the results of a number of processes. The processes can be asynchronous given that the processing resources that implement the processes do not use a clock signal to generate the topology.
What is claimed is:1. An apparatus, comprising: a memory array configured to store data and to function synchronously with a clock signal received from a host device; a processing resource coupled to the memory array and configured to: execute a first process utilizing the data stored by the memory array responsive to receipt of a signal by the apparatus; determine asynchronously with the clock signal of the host device that a result of the first process is greater than a threshold value; and execute a second process utilizing the data responsive to the determination that the result of the first process is greater than the threshold value.2. The apparatus of claim 1, wherein the processing resource is configured to perform the first process asynchronously with the clock signal of the host device.3. The apparatus of claim 1, wherein the processing resource is further configured to perform the first process and the second process asynchronously with the clock signal of the host device.4. The apparatus of any one of claims 1-3, wherein the processing resource is further configured to perform a third process responsive to a determination that the result is greater than a different threshold value.5. The apparatus of any one of claims 1-3, wherein the processing resource is further configured to refrain from executing the second process responsive to a determination that the result is not greater than the threshold value.6. The apparatus of any one of claims 1-3, further comprising input/output (I/O) circuitry configured to provide a result of the second process asynchronously with the clock signal of the host device.7. 
A method comprising: performing a first process, at a first processing resource implemented under a memory array of a memory device, responsive to receipt of a command by the memory device, wherein the first process is performed utilizing a first portion of data stored in the memory array; asynchronously performing a determination of whether to provide a signal to a second processing resource based on a result of the first process, wherein the second processing resource is selected based on the result of the first process; and performing, utilizing the second processing resource implemented under the memory array, a second process responsive to receipt of the signal, wherein the second process is performed utilizing a second portion of the data stored in the memory array.8. The method of claim 7, further comprising asynchronously performing the determination utilizing logic configured to compare the result to a plurality of thresholds, wherein each of the plurality of thresholds is associated with the selection of a different processing resource including the second processing resource.9. The method of claim 8, wherein the logic, which functions asynchronously, is coupled to the first processing resource and wherein the method further comprises providing the result of the first process to the logic.10. The method of claim 8, further comprising providing signals between the first processing resource and a plurality of processing resources including the second processing resource via the logic that functions asynchronously to a clock signal of a host device.11. The method of claim 10, further comprising selectively coupling the first processing resource to the second processing resource via the logic that functions asynchronously to the clock signal of the host device.12. 
The method of claim 10, further comprising selectively coupling the first processing resource to the plurality of processing resources via one or more instances of the logic that functions asynchronously to the clock signal of the host device.13. The method of claim 10, further comprising selectively coupling the second processing resource to the first processing resource and the plurality of processing resources via the one or more instances of the logic that functions asynchronously to the clock signal of the host device.14. The method of claim 10, further comprising operating the logic asynchronously regardless of whether the first processing resource and the second processing resource operate asynchronously to the clock signal of the host device.15. The method of any one of claims 8-14, wherein the logic is implemented under the memory array and further comprising providing the signal under the memory array to the second processing resource.16. An apparatus, comprising: a first memory array; a second memory array; a first processing resource implemented under the first memory array of the apparatus and configured to perform a first process utilizing a first data value from the first memory array; a comparator coupled to the first processing resource and configured to provide a signal to a second processing resource based on a result of the first process; wherein the comparator is configured to provide the signal asynchronously to a clock signal of a host device; and wherein the first processing resource and the comparator are implemented in a first bank of the apparatus; and the second processing resource implemented under the second memory array of the apparatus and configured to perform a second process utilizing a second data value from the second memory array, wherein the second processing resource is configured to perform the second process responsive to receipt of the signal from the comparator and wherein the second processing resource is implemented in a 
second bank of the apparatus.17. The apparatus of claim 16, wherein the second processing resource is configured to perform the second process without receipt of an additional signal by the apparatus.18. The apparatus of claim 16, wherein the first memory array, the second memory array, the first processing resource, and the second processing resource function synchronously utilizing the clock signal of the host device.19. The apparatus of any one of claims 16-18, wherein the apparatus is further configured to export the result of the first process and a result of the second process on an interface coupling the apparatus to a host during operation of another command.20. The apparatus of claim 19, wherein the apparatus is configured to export the result of the first process and the result of the second process during operation of the apparatus in a non-compliant mode and is further configured to store the result of the first process and the result of the second process to the first memory array or the second memory array during operation of the apparatus in a compliant mode.
ASYNCHRONOUS PROCESS TOPOLOGY IN A MEMORY DEVICE
Technical Field
[0001] The present disclosure relates generally to memory, and more particularly to apparatuses and methods associated with implementing an asynchronous process topology in a memory device.
Background
[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data and includes random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), Electrically Erasable Programmable ROM (EEPROM), Erasable Programmable ROM (EPROM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), among others.
[0003] Memory is also utilized as volatile and non-volatile data storage for a wide range of electronic applications including, but not limited to, personal computers, portable memory sticks, digital cameras, cellular telephones, portable music players such as MP3 players, movie players, and other electronic devices. Memory cells can be arranged into arrays, with the arrays being used in memory devices.
[0004] The memory may be provided commands utilizing an interface protocol. The commands provided to the memory may be predefined and may be used to control the function of the memory. The interface may be utilized to provide commands to the memory device to cause the memory device to perform operations. 
Brief Description of the Drawings
[0005] Figure 1 is a block diagram of an apparatus in the form of a computing system including a memory device in accordance with a number of embodiments of the present disclosure.
[0006] Figure 2 is a block diagram of a plurality of processes in accordance with a number of embodiments of the present disclosure.
[0007] Figure 3 is a block diagram of an apparatus in the form of a memory device including a plurality of processing resources in accordance with a number of embodiments of the present disclosure.
[0008] Figure 4 is a block diagram of an apparatus in the form of a memory device including a plurality of banks in accordance with a number of embodiments of the present disclosure.
[0009] Figure 5 is a block diagram of an apparatus in the form of a memory device including a plurality of processing resources and comparators in accordance with a number of embodiments of the present disclosure.
[0010] Figure 6 is a block diagram of an apparatus in the form of a memory device including a processing resource in accordance with a number of embodiments of the present disclosure.
[0011] Figure 7 illustrates an example flow diagram of a method for performing operations in memory in accordance with a number of embodiments of the present disclosure.
[0012] Figure 8 illustrates an example machine of a computer system within which a set of instructions, for causing the machine to perform various methodologies discussed herein, can be executed.
Detailed Description
[0013] The present disclosure includes apparatuses and methods related to implementing an asynchronous process topology in a memory device. A memory device can receive clock signals and/or can generate clock signals. The clock signals can be used to synchronize various operations performed by the memory device. 
The various operations performed by the memory device can be performed synchronously within the memory device and/or can be performed synchronously with a device, such as a host device, external to the memory device. In various examples, a memory device can implement processes and/or a topology corresponding to the processes asynchronously.[0014] As used herein, “synchronous” refers to the use of a clock signal in performing operations and/or processes. A clock signal includes any timing signal or a signal that can be used to track a duration of time, a time reference, and/or a reference of operations. In various examples, the clock signal can be received from a host device. “Asynchronous” refers to the performance of operations and/or processes without the use of the clock signal.[0015] The memory device can be configured to implement the processes and/or the topology corresponding thereto asynchronously while in a non-compliant mode. The memory device may not be configurable to implement the processes and/or the topology corresponding thereto asynchronously while in a compliant mode.[0016] A memory device can be compliant with an interface protocol. An interface protocol defines the communication between a memory device and a device external to the memory device. Devices are compliant with an interface protocol if they communicate as defined by the interface protocol. The interface protocol can be defined such that a memory device can receive and process signals from a plurality of devices external to the memory device, where the plurality of devices are manufactured by a plurality of different providers. An example of an interface protocol is the double data rate 5 (DDR5) standard. 
In various instances, the interface protocol can be generated by an organization such as the Joint Electron Device Engineering Council (JEDEC), which enables any devices compliant with the interface protocol to communicate with each other without the added expense of defining a new interface protocol for multiple devices.[0017] In various examples, the result of a process implemented in a memory device can be used to select a different process for execution. The processes can be performed asynchronously and/or the selection of the different process can be performed asynchronously. Performing asynchronous processes and/or selecting processes asynchronously in a synchronous memory device provides the ability to implement processes that would otherwise not be implementable in a memory device. For example, performing asynchronous processes and/or selecting processes asynchronously in a synchronous memory device provides the ability to implement learning processes in the memory device. Learning processes can include neural networks, among other types of learning processes. Although the examples provided herein are presented in the context of neural networks, the examples can also be implemented utilizing different types of processes.[0018] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. 
In addition, the proportion and the relative scale of the elements provided in the figures are intended to illustrate various embodiments of the present disclosure and are not to be used in a limiting sense.[0019] Figure 1 is a block diagram of an apparatus in the form of a computing system 100 including a memory device 103 in accordance with a number of embodiments of the present disclosure. As used herein, a memory device 103, memory arrays 110-1, 110-2, 110-3, ..., and 110-N, and/or a host 102, for example, might also be separately considered an “apparatus.” The memory arrays 110-1, 110-2, 110-3, ..., and 110-N can be referred to as memory arrays 110.[0020] In this example, system 100 includes a host 102 coupled to memory device 103 via an interface 104. The computing system 100 can be a personal laptop computer, a desktop computer, a digital camera, a mobile telephone, a memory card reader, or an Internet-of-Things (IoT) enabled device, among various other types of systems. Host 102 can include a number of processing resources (e.g., one or more processors, microprocessors, or some other type of controlling circuitry) capable of accessing the memory device 103. The system 100 can include separate integrated circuits, or both the host 102 and the memory device 103 can be on the same integrated circuit. For example, the host 102 may be a system controller of a memory system comprising multiple memory devices 103, with the system controller 102 providing access to the respective memory devices 103 by another processing resource such as a central processing unit (CPU).[0021] In the example shown in Figure 1, the host 102 is responsible for executing an operating system (OS) and/or various applications that can be loaded thereto (e.g., from memory device 103 via controller 105). 
The OS and/or various applications can be loaded from the memory device 103 by providing access commands from the host 102 to the memory device 103 to access the data comprising the OS and/or the various applications. The host 102 can also access data utilized by the OS and/or various applications by providing access commands to the memory device 103 to retrieve said data utilized in the execution of the OS and/or the various applications.[0022] For clarity, the system 100 has been simplified to focus on features with particular relevance to the present disclosure. The memory arrays 110 can be a DRAM array, SRAM array, STT RAM array, PCRAM array, TRAM array, RRAM array, NAND flash array, and/or NOR flash array, for instance. The arrays 110 can comprise memory cells arranged in rows coupled by access lines (which may be referred to herein as word lines or select lines) and columns coupled by sense lines (which may be referred to herein as digit lines or data lines).[0023] The memory device 103 includes address circuitry 106 to latch address signals provided over an interface 104. The interface can include, for example, a physical interface employing a suitable protocol (e.g., a data bus, an address bus, and a command bus, or a combined data/address/command bus). Such a protocol may be custom or proprietary, or the interface 104 may employ a standardized protocol, such as Peripheral Component Interconnect Express (PCIe), Gen-Z interconnect, cache coherent interconnect for accelerators (CCIX), or the like. Address signals are received and decoded by a row decoder 108 and a column decoder 112 to access the memory arrays 110. Data can be read from the memory arrays 110 by sensing voltage and/or current changes on the sense lines using sensing circuitry 111-1 to 111-N. The sensing circuitry 111-1 to 111-N can be referred to as sensing circuitry 111. 
Each of the sensing circuitry 111-1 to 111-N can be coupled to a corresponding memory array from the memory arrays 110-1, 110-2, 110-3, ..., 110-N. Each memory array and corresponding sensing circuitry can constitute a bank of the memory device 103. The sensing circuitry 111 can comprise, for example, sense amplifiers that can read and latch a page (e.g., row) of data from the memory array 110. The I/O circuitry 107 can be used for bi-directional data communication with the host 102 over the interface 104. The read/write circuitry 113 is used to write data to the memory arrays 110 or read data from the memory arrays 110. As an example, the circuitry 113 can comprise various drivers, latch circuitry, etc. [0024] Control circuitry 105 decodes signals provided by the host 102. The signals can be commands provided by the host 102. These signals can include chip enable signals, write enable signals, and address latch signals that are used to control operations performed on the memory arrays 110, including data read operations, data write operations, and data erase operations. In various embodiments, the control circuitry 105 is responsible for executing instructions from the host 102. The control circuitry 105 can comprise a state machine, a sequencer, and/or some other type of control circuitry, which may be implemented in the form of hardware, firmware, or software, or any combination of the three. In some examples, the host 102 can be a controller external to the memory device 103. For example, the host 102 can be a memory controller which is coupled to a processing resource of a computing device. Data can be provided to the memory arrays 110 and/or from the memory arrays via the data lines 116.[0025] In various instances, the functionality of the memory device 103 can be controlled by the host 102. 
For example, the host 102 can provide commands to the memory device 103 through the interface 104 to read the memory arrays 110 and/or write to the memory arrays 110, among other functionalities of the memory device 103. However, an interface protocol implemented may not define commands to control the functionality of processing resources implemented in the memory device 103 to perform operations while in a compliant mode. The memory device can be configured to receive commands to control the functionality of processing resources while in a non-compliant mode.[0026] The processing resources implemented in the memory device 103 can be coupled to the data lines 116, can be implemented in the sensing circuitry 111, and/or can be implemented under the memory arrays 110. The processing resources can be controlled to perform a process. As used herein, a process can comprise one or more operations performed by a processing resource. The operations can include logical operations such as AND operations and OR operations, among other types of logical operations. The operations can include addition operations, subtraction operations, multiplication operations, and/or division operations. Operations can also include comparison operations and selection operations.[0027] In various examples, a result of a first process can be used to select a next process to perform and/or can be used to provide data to the host 102. The topology of the processes can be selected based on the results of the processes. In some examples, the coupling of the processing resources that implement the processes can correspond to the topology of the processes and can be based on the results of the processes.[0028] Figure 2 is a block diagram of a plurality of processes 222-1, 222-2, ..., and 222-M in accordance with a number of embodiments of the present disclosure. The processes 222-1, 222-2, ..., and 222-M can be referred to as processes 222. The processes can be performed by a memory device 203. 
Each of the processes 222 can be executed by one or more processing resources hosted by the memory device 203.[0029] The memory device 203 can receive a command 220 via an interface of the memory device 203. The command 220 can identify a process 222-1 to perform. A first number of processing resources can perform the process 222-1 responsive to receipt of the command. The memory device 203 can utilize the result of the process 222-1 to determine whether to perform the process 222-2 or to provide a result 224 (e.g., output/result 224). The determination can be performed by the first number of processing resources, a different number of processing resources, and/or by a comparator, among other types of circuitry that can initiate the process 222-2 or provide the data. The determination whether to perform the process 222-2 or provide the result 224 can be performed asynchronously. For example, the circuitry performing the determination can perform the determination without reference to a clock signal. The quantity of operations used to perform the determination can be performed without the use of a clock signal.[0030] In various examples, the process 222-1 and/or a portion of the process 222-1 can be performed without reference to a clock signal. For example, although a read operation corresponding to the process 222-1 may be implemented based on a clock signal, different operations corresponding to the process 222-1 may be performed without reference to a clock signal.[0031] The result of the process 222-1 may be used to select the process 222-2, as shown in the example of Figure 2. In different examples, however, the result of the process 222-1 could be used to select a process 222-3 (not shown) and/or a different process.[0032] The result of the process 222-2 can be used to select the process 222-3. The result of the process 222-2 can also be used to determine whether to provide the result to a host. 
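The threshold-gated chaining described above can be illustrated in software. The following is a minimal sketch only; the function names, the threshold value, and the operations performed by each process are hypothetical placeholders, not taken from the disclosure. It models how the result of a first process either triggers a second process or is itself provided as the output:

```python
THRESHOLD = 0.5  # assumed threshold; an actual device would use a configured value


def process_1(data):
    # Stand-in for the first process (here, an average of the input data).
    return sum(data) / len(data)


def process_2(data):
    # Stand-in for the second process, executed only when the first
    # process's result exceeds the threshold.
    return max(data)


def run_chain(data):
    result = process_1(data)
    # The determination below models the asynchronous comparison: the
    # result of process_1 selects whether process_2 runs or the result
    # is provided as the output.
    if result > THRESHOLD:
        return process_2(data)
    return result  # below threshold: provide the first result instead
```

In the device described above this comparison is performed by a comparator or processing resource without reference to a clock signal; the software model only captures the data flow, not the timing behavior.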
In some examples, a result of a process can be used to determine whether to provide the result without the selection of a next process. The result of the process 222-M can be used to determine that no additional processes should be selected and that the result is to be provided to the host.[0033] Although Figure 2 shows the result 224 as being provided by the memory device 203 to a device external to the memory device 203, the result 224 can be stored in a memory array of the memory device 203. A host can thereafter read the result from the memory array. For example, a command used to initiate the process 222-1 can also be associated with a location in the memory array such that the result corresponding to the process 222-1 is stored in memory cells, having an address associated with the command, of the memory array. [0034] Figure 3 is a block diagram of an apparatus in the form of a memory device 303 including a plurality of processing resources 334-1, 334-2, ..., and 334-M in accordance with a number of embodiments of the present disclosure. The processing resources 334-1, 334-2, ..., and 334-M can be referred to as processing resources 334.[0035] The processing resources 334 can be implemented under a memory array (e.g., memory array 110-1). In various examples, the sense amplifiers 332-1, 332-2, ..., and 332-M can also be implemented under the memory array. The sense amplifiers 332-1, 332-2, ..., and 332-M can be referred to as sense amplifiers 332. In different examples, the sense amplifiers 332 can be implemented in line with the memory array as opposed to being implemented under the memory array. Regardless of whether the sense amplifiers 332 are implemented under the memory array or not, the sense amplifiers 332 are coupled to the processing resources 334. For example, the sense amplifier 332-1 is coupled to the processing resource 334-1, the sense amplifier 332-2 is coupled to the processing resource 334-2, ... 
, and the sense amplifier 332-M is coupled to the processing resource 334-M.[0036] The sense amplifiers 332 can be coupled to sense lines of the memory array. The sense amplifiers 332 can amplify a signal provided from the memory cells of the memory array through the sense lines. The sense amplifiers 332 can provide a signal to the processing resources 334. The processing resources 334 can perform a plurality of operations on data provided from the sense amplifiers 332.[0037] The result of a first process implemented by the processing resource 334-1 can be provided to the processing resource 334-2. The processing resource 334-2 can utilize data provided by the sense amplifier 332-2 and/or the processing resource 334-1 to perform a second process that, when implemented, generates a second result. The second result can be used to determine whether or not to initiate a next process implemented by a processing resource 334-3 (not shown). The processing resource 334-M can utilize a result of a prior process and/or data provided by the sense amplifier 332-M to implement a last process. The result of the last process can be provided through a plurality of I/O lines.[0038] Although a sense amplifier is shown as being coupled to a processing resource, multiple processing resources can be coupled to a sense amplifier and/or multiple sense amplifiers can be coupled to a processing resource. The coupling of sense amplifiers 332 to processing resources 334 can be used to provide data to the processing resources 334.[0039] In various examples, the processing resources 334 may not be coupled to clock signals such that the processes implemented by the processing resources 334 are performed asynchronously. 
In various examples, the portion of the processing resources 334 that determines whether to provide data to a processing resource may not utilize a clock signal to perform the determination, while a remainder of the processing resources 334 may utilize a clock signal to perform different operations. The portion of the processing resources 334 that determines which processing resource to provide data to may not utilize a clock signal to perform the determination, while a remainder of the processing resources 334 may utilize a clock signal to perform different operations.[0040] In various examples, the memory device 303 can be a three-dimensional (3D) memory device which includes multiple layers stacked together. As an example, a first layer 336 (e.g., memory array 110-1 as illustrated in Figure 1) of the memory device 303 is coupled to a second layer 315 (e.g., CMOS under array as illustrated in Figure 3) of the memory device 303. Although the first layer 336 is described as being on the second layer 315, the first layer 336 and the second layer 315 can be designed to comprise a number of different orientations such that the first layer 336 is coupled to the second layer 315. The examples described herein are not limited to a specific orientation between the first layer 336 and the second layer 315. The first layer 336 of the memory device 303 can include an array of memory cells. Although embodiments are not so limited, memory cells of the array can include DRAM memory cells.[0041] The second layer 315 can include a number of logic blocks that are configured to perform various functions, for example, using data values stored in the array of memory cells. The number of logic blocks can include a plurality of processing resources 334, which can also be referred to as a processing resource 334. In various examples, the second layer can also include row drivers and/or column drivers. 
Although an M quantity of processing resources 334 is shown in Figure 3, the processing resources 334 can include more or fewer processing resources than those shown here.[0042] The second layer 315 may be one of a plurality of logic blocks included within the memory device 303. The processing resources 334 can be configured to perform artificial intelligence (AI) processing. For example, the processing resources 334 can be configured as a network (e.g., neural network). Each of the processing resources 334 can be a node in a neural network. Each of the processing resources 334 can be coupled to different memory cells of a memory array which can store weights of the network and/or inputs to the network. The processing resources 334 can be interconnected such that the outputs of some of the processing resources 334 can be received as input by another of the processing resources 334. A result of the AI processing performed by the processing resources 334 can be stored back to the memory array, can be latched by sense amplifiers, and/or can be provided via I/O lines. As used herein, references to networks or learning processes can refer to artificial networks and learning processes.[0043] Figure 4 is a block diagram of an apparatus in the form of a memory device 403 including a plurality of banks 440-1, 440-2, ..., 440-N in accordance with a number of embodiments of the present disclosure. The banks 440-1, 440-2, ..., 440-N can be referred to as banks 440.[0044] Each of the banks can include a plurality of sense amplifiers and processing resources. For example, the bank 440-1 includes sense amplifiers 432-1, 432-2, ..., and 432-R, and processing resources 434-1, 434-2, ..., and 434-R. The bank 440-2 is shown as including a sense amplifier 432-R+1 and a processing resource 434-R+1. The bank 440-N includes the sense amplifier 432-R+2 and processing resource 434-R+2. 
Although each of the banks 440-2 and 440-N is shown as including a single sense amplifier and a single processing resource, the banks 440-2 and 440-N can include more sense amplifiers and processing resources than those shown in Figure 4. The sense amplifiers 432-1, 432-2, ..., 432-R, 432-R+1, and 432-R+2 can be referred to as sense amplifiers 432. The processing resources 434-1, 434-2, ..., 434-R, 434-R+1, and 434-R+2 can be referred to as processing resources 434.[0045] The banks 440 can be configured to function as a single artificial neural network or as a plurality of artificial neural networks. For instance, the processing resources 434-1, 434-2, ..., and 434-R of the bank 440-1 can be configured as a first neural network, the processing resources, including the processing resource 434-R+1, of the bank 440-2 can be configured as a second neural network, ..., and the processing resources, including the processing resource 434-R+2, of the bank 440-N can be configured into an Nth neural network. In such examples, a process can be defined as the execution of a neural network. A first process can be performed by activating a first neural network. The result of the first neural network can be provided to a second neural network, and so on.[0046] In a number of examples, each of the banks 440 of the memory device 403 can represent a single layer of a neural network such that a single neural network is implemented comprising N layers. A first layer of the neural network can be represented by the configuring of the processing resources 434-1, 434-2, ..., and 434-R. A second layer of the neural network can be represented by the configuring of the processing resources, including the processing resource 434-R+1, in the bank 440-2. 
An Nth layer of the neural network can be represented by the configuring of the processing resources, including the processing resource 434-R+2, in the bank 440-N.[0047] Each of the processing resources 434 can be coupled to a different sense amplifier of the sense amplifiers 432. Figure 4 shows the processing resources of a layer being coupled to processing resources of a different layer of a neural network. For example, the processing resources 434-1, 434-2, ... , and 434-R are coupled to a processing resource 434-R+1 of the bank 440-2. Although not shown, each of the processing resources 434-1, 434-2, ... , and 434-R can be coupled to each of the processing resources of the bank 440-2, each of the processing resources of the bank 440-2 can be coupled to each of the processing resources of a different bank, etc.[0048] A process can include the propagation of signals through a layer of a neural network. The results of a first process, including the results of the first layer of the neural network, can be provided to a second process by providing the results to the second layer of the neural network.[0049] The topology of the neural network can be selected based on the results of the processes. A topology can describe how data is transferred between processing resources. For example, a first processing resource can be coupled to a second processing resource and a third processing resource. Data can be provided from the first processing resource to the second processing resource responsive to a first result of a first process executed by the first processing resource. Data can also be provided from the first processing resource to the third processing resource responsive to a second result of the first process. The passing of data between the first processing resource and the second processing resource can describe a first topology. The passing of data between the first processing resource and the third processing resource can describe a second topology.
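As an illustrative sketch only (not part of the disclosure), the bank-per-layer arrangement described above can be modeled in software, with each bank holding one layer's weights in its memory array and feeding its outputs to the next bank. The `Bank` class, the weighted-sum node model, and the weight values are assumptions for illustration:

```python
# Hypothetical model of banks 440-1 ... 440-N acting as layers of one
# N-layer neural network; names and values are illustrative assumptions.

class Bank:
    """Models a bank whose memory array stores one layer's weights."""
    def __init__(self, weights):
        self.weights = weights  # one row of weights per processing resource

    def process(self, inputs):
        # Each processing resource computes a weighted sum of its inputs,
        # analogous to sensing stored weights and combining them.
        return [sum(w * x for w, x in zip(row, inputs)) for row in self.weights]

def run_network(banks, inputs):
    # The output of one bank (layer) is received as input by the next bank,
    # as described for the banks forming an N-layer network.
    for bank in banks:
        inputs = bank.process(inputs)
    return inputs

banks = [Bank([[0.5, -1.0], [1.0, 1.0]]), Bank([[1.0, 2.0]])]
print(run_network(banks, [2.0, 3.0]))  # propagates through both layers
```

The same structure extends to the per-bank-per-network case by running each `Bank` independently rather than chaining them.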
A topology can also describe an order in which processes executed by the processing resources are executed. For example, the first result of the first process can cause a second process executed by the second processing resource to be executed after the execution of the first process. A second result of the first process can cause a third process executed by the third processing resource to be executed after the execution of the first process. The execution of the second process after the execution of the first process can describe a first topology of the processes while the execution of the third process after the execution of the first process can describe a second topology of the processes.[0050] The topology between the processes of the bank 440-1 and the bank 440-2 can be defined based on the results of the processing resources 434-1, 434-2, ... , 434-R. Each of the processing resources 434-1, 434-2, ... , and 434-R can be selectively coupled to the processing resource of the bank 440-2 based on the results provided by the processing resources 434-1, 434-2, ... , and 434-R. The processing resource 434-1 can be selectively coupled to the processing resource 434-R+1 if the result provided by the processing resource 434-1 is greater than a threshold. If the result provided by the processing resource 434-1 is not greater than the threshold, then the processing resource 434-1 may not be coupled to the processing resource 434-R+1. As used herein, selectively coupling describes selectively providing data to a processing resource based on the results of a process.[0051] The processing resources 434 may utilize inputs and weights stored in the memory array to perform a process which generates a result. Accordingly, the processing resources 434 can be configured to cause the use of different processing resources 434, operation of different sense amplifiers 432, and/or reading of memory cells coupled to the processing resources 434.
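The selective coupling described above can be sketched as a simple filter: a source processing resource provides data downstream only when its result exceeds the threshold. This is an illustrative sketch, and the function name and values are assumptions, not the disclosure's interface:

```python
# Illustrative sketch of selective coupling: results at or below the
# threshold leave their source uncoupled, so no data is provided downstream.

def selectively_couple(results, threshold):
    """Return the results actually delivered to the next processing resource."""
    return [r for r in results if r > threshold]

# Hypothetical results from processing resources 434-1 ... 434-R of a bank:
layer_results = [0.2, 1.7, 0.9, 3.1]
forwarded = selectively_couple(layer_results, threshold=1.0)
print(forwarded)  # only results greater than the threshold propagate
```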
For example, each of the processing resources 434 can cause the memory cells to be read by corresponding sensing circuitry of the sensing circuitry 432, the sensing circuitry 432 to provide signals to the different processing resources 434, and the different processing resources 434 to receive signals from corresponding processing resources 434. In various examples, a plurality of processing resources 434-1, 434-2, ... , 434-R can cause memory cells to be read by the sensing circuitry 432-R+1, the sensing circuitry 432-R+1 to provide signals to the processing resource 434-R+1, and the processing resource 434-R+1 to receive signals from the processing resources 434-1, 434-2, ... , 434-R.[0052] Figure 5 is a block diagram of an apparatus in the form of a memory device 503 including a plurality of processing resources 534-1, 534-2, ... , and 534-M and comparators 551-1, 551-2, ... , 551-M in accordance with a number of embodiments of the present disclosure. The processing resources 534-1, 534-2, ... , and 534-M can be referred to as processing resources 534 and the comparators 551-1, 551-2, ... , 551-M can be referred to as comparators 551.[0053] The processing resources 534 can perform a number of processes. The processing resources can be coupled to the comparators 551. For example, the processing resource 534-1 is coupled to the comparator 551-1, the processing resource 534-2 is coupled to the comparator 551-2, ... , and the processing resource 534-M is coupled to the comparator 551-M. The processing resources 534 can provide the results of the processes to the comparators 551. The comparators 551 can comprise circuitry configured to compare a value provided by the processing resources 534 to threshold values.[0054] For example, the comparator 551-1 can compare values provided by the processing resource 534-1 to a first threshold value. The comparator 551-2 can compare values provided by the processing resource 534-2 to a second threshold value.
The comparator 551-M can compare values provided by the processing resource 534-M to an Mth threshold value. Responsive to determining that the values provided by the processing resources 534 are greater than, equal to, or less than a threshold value, the comparators 551 can provide a signal to a corresponding processing resource. For example, the comparator 551-1 can provide a signal to the processing resource 534-2 responsive to the values provided by the processing resource 534-1 being greater than a threshold value.[0055] The comparators 551 can receive inputs, integrate the inputs, and provide an output (e.g., fire). For example, the comparators 551 can receive a plurality of inputs including a first charge and a second charge. The first charge and the second charge can be combined (e.g., integrated) to generate a third charge. The integrated charges can degrade (e.g., leak) over time. For example, the charges stored by a capacitor of the comparator 551 can degrade over time. The comparators 551 can include a resistor and a capacitor, among other components. The resistor and the capacitor can also be referred to as a resistor-capacitor (RC) circuit. The capacitor of the comparator 551 can combine the charges that are received at the capacitor. The capacitor can provide a combined charge to circuitry configured to provide a forward spike.[0056] The processing resources 534 can be activated a plurality of times such that the results of the process are retained by the comparator 551 until a threshold is reached. Determining whether a threshold is reached comprises comparing multiple values. The comparator 551 can provide a signal (e.g., forward spike) to a corresponding processing resource. Retaining results does not constitute storing because the retained values are constantly changing given the degradation of the retained results.[0057] In some examples, the processing resources 534 and/or the comparators 551 can function without reference to a clock signal.
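The integrate-leak-fire behavior described for the comparators 551 can be sketched as a minimal leaky integrator, with a decay factor standing in for the RC leakage. This is an illustrative model only; the class name, decay constant, and reset-on-fire behavior are assumptions:

```python
# Minimal leaky integrate-and-fire sketch of a comparator 551: inputs are
# integrated, the stored charge leaks each step (decay models the RC
# circuit), and a forward spike is emitted once the threshold is crossed.

class LeakyComparator:
    def __init__(self, threshold, decay=0.9):
        self.threshold = threshold
        self.decay = decay    # fraction of charge retained per step (leakage)
        self.charge = 0.0

    def receive(self, charge_in):
        """Integrate an input charge; return True when the comparator fires."""
        self.charge = self.charge * self.decay + charge_in
        if self.charge >= self.threshold:
            self.charge = 0.0  # firing resets the integrated charge (assumed)
            return True
        return False

comp = LeakyComparator(threshold=2.0)
spikes = [comp.receive(c) for c in [0.8, 0.8, 0.8, 0.1]]
print(spikes)  # the third input pushes the integrated charge over threshold
```

Note how repeated sub-threshold inputs accumulate until the retained (and decaying) charge reaches the threshold, matching the description that results are retained rather than stored.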
For instance, the processing resource 534-1 and the comparator 551-1 can function without reference to a clock signal. Although the processing resources 534 and the comparators 551 are shown as two separate components, the processing resources 534 and the corresponding comparators 551 can comprise a single component. For example, the processing resource 534-1 and the comparator 551-1 can comprise a single device.[0058] In various instances, the comparators 551 can provide signals to different processing resources based on the results of the processes performed by one or more of the processing resources. For example, a comparator can provide a first signal to a first processing resource if a value is less than a first threshold, a second signal to a second processing resource if the value is greater than the first threshold but less than a second threshold, or a third signal to a third processing resource if the value is greater than the second threshold, among other possible implementations of mappings between thresholds and processing resources.[0059] Figure 6 is a block diagram of an apparatus in the form of a memory device 603 including a processing resource 634 in accordance with a number of embodiments of the present disclosure. The processing resource 634 of Figure 6 is shown as being coupled to the data lines. Although the processing resource 634 is shown as a single processing resource, the processing resource 634 can represent a number of processing resources such as the processing resources 334 in Figure 3. The processing resource 634 represents the combination of a processing resource and a comparator.[0060] The processing resource 634 can be coupled to the command interface 604-1 and the data interface 604-3. Although not shown, the processing resource 634 can also be coupled to the address interface 604-2. The processing resource 634 can receive commands via the command interface 604-1.
The commands received by the processing resource 634 can be utilized to program the processing resource 634 to perform the various functions described herein.[0061] The processing resource 634 can activate a row control 608 and/or a column control 612 in addition to activating different processing resources. The row control 608 and the column control 612 can be activated to provide data values from the memory cells of memory array 610 to the sense amplifiers 611 and from the sense amplifiers 611 to a corresponding processing resource. The memory cells can store weights of an artificial neural network such that activating the row control 608 and/or the column control 612 can provide weights to the corresponding processing resource for the performance of corresponding processes.[0062] In various examples, the processing resource 634 can receive and/or output data through the data interface 604-3. Given that the processing resource 634 can output a result of the performance of a plurality of processes, which may in part be asynchronous, the processing resource 634 may not be able to provide the result synchronously with the expectation of the result by a host. To overcome the challenge of providing a result through the data interface 604-3 at a time when the host expects it, the processing resource 634 may hold the result until a synchronous delivery of data is scheduled. The processing resource 634 may provide the result after different data is synchronously provided through the data interface 604-3.[0063] For example, a command to perform the plurality of processes that are asynchronous can be received by the memory device 603 through the command interface 604-1 and/or the address interface 604-2. The plurality of processes can be performed and the result can be generated by the processing resource 634.
Independent of the command to perform the plurality of processes, the memory device 603 can receive a command to perform a plurality of operations and/or processes that are synchronous. For example, a read command can be received by the command interface 604-1 and an address corresponding to the read command can be received by the address interface 604-2. The memory device can output data read from memory cells having the address via the data interface 604-3 twenty to twenty-two clock cycles after the read command is received by the memory device 603. After the read data is provided, the processing resource 634 can provide the result via the data interface 604-3.[0064] However, providing a result of the processes after the data is read from the memory array 610 may not be compliant with a standard interface protocol but may be compliant with a particular interface protocol. Defining different interface protocols is different than repurposing a pin utilizing a single interface protocol. For example, an interface protocol can provide for pins that are “open.” An open pin describes a pin that can be used to provide signals that are not defined by the interface protocol. Providing a signal such as a command through an open pin does not make the interface protocol noncompliant with itself, even when an address is instead provided through the open pin. The interface protocol continues to be compliant with itself when different types of signals are provided through an open pin because the pin is open. However, redefining each of the pins can result in interface protocols that are noncompliant with each other.[0065] As used herein, compliance describes the ability to decode signals received through each of the pins utilizing a first interface protocol or a second interface protocol without losing functionality.
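The deferred-delivery scheme described above, in which a held asynchronous result rides behind the scheduled output of a synchronous read, can be sketched as follows. The latency value, queue structure, and class name are illustrative assumptions, not the disclosure's interface:

```python
# Hedged sketch of deferring an asynchronous result until after scheduled
# data: the read returns its data a fixed latency after the command, and any
# held results are appended to the output stream afterward (unscheduled).

from collections import deque

class DataInterface:
    READ_LATENCY = 20  # clock cycles from read command to data out (example)

    def __init__(self):
        self.held_results = deque()  # asynchronous results awaiting delivery

    def hold_result(self, result):
        self.held_results.append(result)

    def read(self, data, command_cycle):
        # Scheduled data first; held asynchronous results follow with no
        # scheduled cycle (None), so the host must "listen" for them.
        out = [(command_cycle + self.READ_LATENCY, data)]
        while self.held_results:
            out.append((None, self.held_results.popleft()))
        return out

iface = DataInterface()
iface.hold_result("async-result")
print(iface.read("stored-data", command_cycle=100))
```

In this model a host operating in the compliant mode would simply ignore everything after the scheduled entry, while a host in the noncompliant mode keeps listening for the unscheduled entries.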
Compliance can also include encoding signals utilizing a first interface protocol or a second interface protocol without degrading data encoded through the signals. For example, if a host provides a signal representing a read command through a particular pin to the memory device 603 using a first interface protocol, and the memory device decodes the signal as a write command utilizing a second interface protocol, then the first interface protocol is noncompliant with the second interface protocol. Decoding a signal as anything other than what it was encoded to be can result in noncompliance between the interface protocol used to encode the signal and the interface protocol used to decode the signal, with the exception of signals provided via an open pin.[0066] Implementing a noncompliant interface protocol can also provide the ability to output data at different times as compared to the outputting of data utilizing a compliant interface protocol. The command to perform asynchronous operations can be received while the memory device 603 is in a mode corresponding to the noncompliant interface protocol. The outputting of the results of the asynchronous processes can also be provided while the memory device 603 is in the mode corresponding to the noncompliant interface protocol.[0067] The host can be configured to receive data responsive to providing a read command to the memory device 603 while in a compliant mode. The host can also be configured to receive a result of a plurality of asynchronous processes while in a noncompliant mode. The host can “listen” after receipt of scheduled data to receive data that is not scheduled. As used herein, scheduled data describes data provided and/or received at an expected time. Nonscheduled data describes data provided and/or received at a time that is not expected.[0068] Figure 7 illustrates an example flow diagram of a method for performing operations in memory in accordance with a number of embodiments of the present disclosure.
At 760, a first process can be performed by a first processing resource implemented under a memory array of a memory device, responsive to a receipt of a command by the memory device, wherein the first process is performed utilizing a first portion of data stored in the memory array. At 762, a determination can be asynchronously performed. The determination can determine whether or not to provide a signal to a second processing resource based on a result of the first process, wherein the second processing resource is selected based on the result of the first process. At 764, a second process can be performed utilizing the second processing resource implemented under the memory array. The second process can be performed responsive to receipt of the signal. The second process can be performed utilizing a second portion of the data stored in the memory array.[0069] The method can also include asynchronously performing the determination utilizing logic (e.g., a comparator) configured to compare the result to a plurality of thresholds. Each of the plurality of thresholds can be associated with the selection of a different processing resource including the second processing resource. For example, if the result is less than a first threshold, then the comparator can provide a signal to a third processing resource; if the result is greater than the first threshold but less than a second threshold, then the comparator can provide a signal to a fourth processing resource.[0070] The logic that functions asynchronously to a clock signal of the host device can be coupled to the first processing resource. The first processing resource can provide the result to the logic. The logic can be implemented under the memory array.
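The threshold-band selection described above, in which each threshold interval maps the result to a different downstream processing resource, can be sketched as a small dispatch function. The band boundaries, the handling of results at or above the second threshold, and the resource names are illustrative assumptions:

```python
# Illustrative sketch of threshold-band routing: the comparator logic maps a
# result to a downstream processing resource depending on which interval the
# result falls in. The mapping shown is an assumption for illustration.

def select_processing_resource(result, first_threshold, second_threshold):
    """Pick the downstream resource based on the comparator result."""
    if result < first_threshold:
        return "third_processing_resource"
    elif result < second_threshold:
        return "fourth_processing_resource"
    # Results at or above the second threshold are routed to the second
    # processing resource in this sketch (an assumed default).
    return "second_processing_resource"

print(select_processing_resource(0.5, 1.0, 2.0))
print(select_processing_resource(1.5, 1.0, 2.0))
print(select_processing_resource(2.5, 1.0, 2.0))
```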
For example, the memory array can be implemented in a first hardware layer of the memory device and the processing resource and/or the logic can be implemented in a second hardware layer of the memory device.[0071] The logic can couple processing resources, which provides a topology between the processing resources. For example, the first processing resource can be selectively coupled to the plurality of processing resources including the second processing resource via the logic that functions asynchronously to the clock signal of the host device. In various instances, the processing resource can be selectively coupled to a different processing resource as opposed to a plurality of processing resources via the logic.[0072] The processing resource can be selectively coupled to a plurality of processing resources via one or more instances of the logic. For example, the processing resource can provide the result to a plurality of instances of the logic. Each instance of the logic can be configured to couple the processing resource to a different processing resource. For example, a first instance of the logic can be configured to couple the first processing resource to a second processing resource if the result is greater than a first threshold value. A second instance of the logic can be configured to couple the first processing resource to a third processing resource if the result is greater than a second threshold value, and so forth. The first threshold value and the second threshold value can be equal or can be different. Implementing different instances of the logic provides for the selection of the topology between processing resources based on the results of the processes.[0073] In various examples, a processing resource providing a result can be selectively coupled to a plurality of different processing resources.
A processing resource receiving signals can also be selectively coupled to a plurality of different processing resources providing the signals through a plurality of instances of the logic (e.g., comparators). The results of the process performed by the processing resource can be provided under the memory array to the logic. The logic can provide a signal to the processing resources, responsive to the result being larger or smaller than a threshold, under the memory array.[0074] The logic can operate asynchronously regardless of whether or not the first processing resource and the second processing resource operate asynchronously to the clock signal of the host device. For example, the processing resources may operate synchronously while the logic operates asynchronously to the clock signal of the host device. As used herein, the processing resources and/or logic can function asynchronously from a control signal and/or a clock signal of a host device. References to synchronicity are in the context of control signals.[0075] The second processing resource can be configured to perform the second process without receipt of additional signals by the apparatus. For example, the host can provide a first command which can be used to initiate performance of the first process by the first processing resource. The result of the first process can be used to initiate a second process which is performed by a second processing resource. The second process can be initiated without the host providing additional commands and/or signals.[0076] The memory device can export the result of the first process and a result of the second process on an interface coupling the apparatus to a host during operation of another command. The operation of another command can include reading data from the memory array and providing the data to the host.
During the providing of the data from the memory device to the host responsive to receipt of a read command, the memory device can provide the result of the first process and the second process. For example, the results can be provided after the data is read and provided to the host.[0077] The results of the first process and the second process can be provided (e.g., exported) during the operation of the apparatus in a non-compliant mode. In a compliant mode, the results of the first process and/or the second process can be stored to one or more memory arrays to which the processing resources performing the first process and the second process are coupled.[0078] Figure 8 illustrates an example machine of a computer system 890 within which a set of instructions, for causing the machine to perform various methodologies discussed herein, can be executed. In various embodiments, the computer system 890 can correspond to a system (e.g., the system 100 of Figure 1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory device 103 of Figure 1) or can be used to perform the operations of a controller (e.g., the controller circuitry 105 of Figure 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.[0079] The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.[0080] The example computer system 890 includes a processing resource 892, a main memory 894 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 898 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 899, which communicate with each other via a bus 897.[0081] Processing resource 892 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing resource 892 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing resource 892 is configured to execute instructions 893 for performing the operations and steps discussed herein. The computer system 890 can further include a network interface device 895 to communicate over the network 820.[0082] The data storage system 899 can include a machine-readable storage medium 891 (also known as a computer-readable medium) on which is stored one or more sets of instructions 893 or software embodying any one or more of the methodologies or functions described herein.
The instructions 893 can also reside, completely or at least partially, within the main memory 894 and/or within the processing resource 892 during execution thereof by the computer system 890, the main memory 894 and the processing resource 892 also constituting machine-readable storage media.[0083] In one embodiment, the instructions 893 include instructions to implement functionality corresponding to the host 102 and/or the memory device 103 of Figure 1. While the machine-readable storage medium 891 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.[0084] As used herein, “a number of” something can refer to one or more of such things. For example, a number of memory devices can refer to one or more memory devices. A “plurality” of something intends two or more. Additionally, designators such as “N,” as used herein, particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included with a number of embodiments of the present disclosure.[0085] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of various embodiments of the present disclosure.
It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the various embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.[0086] In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
A method of forming an array of capacitors and access transistors there-above comprises forming access transistor trenches partially into insulative material. The trenches individually comprise longitudinally-spaced masked portions and longitudinally-spaced openings in the trenches longitudinally between the masked portions. The trench openings have walls therein extending longitudinally in and along the individual trench openings against laterally-opposing sides of the trenches. At least some of the insulative material that is under the trench openings is removed through bases of the trench openings between the walls and the masked portions to form individual capacitor openings in the insulative material that is lower than the walls. Individual capacitors are formed in the individual capacitor openings. A line of access transistors is formed in the individual trenches. The line of access transistors electrically couples to the individual capacitors that are along that line. Other aspects, including structure independent of method, are disclosed.
CLAIMS:1. A method of forming an array of capacitors and access transistors there-above, comprising:forming access transistor trenches partially into insulative material, the trenches individually comprising longitudinally-spaced masked portions and longitudinally-spaced openings in the trenches longitudinally between the masked portions, the trench openings having walls therein extending longitudinally in and along the individual trench openings against laterally-opposing sides of the trenches;removing at least some of the insulative material that is under the trench openings through bases of the trench openings between the walls and the masked portions to form individual capacitor openings in the insulative material that is lower than the walls;forming individual capacitors in the individual capacitor openings; and forming a line of access transistors in the individual trenches, the line of access transistors electrically coupling to the individual capacitors that are along that line.2. The method of claim 1 comprising forming the walls to be of different composition from that of the insulative material that is laterally-adjacent the trenches.3. The method of claim 2 comprising using the insulative material that is laterally-adjacent the trenches, the walls, and the masked portions as a mask during said removing.4. The method of claim 1 wherein the masked portions are masked with conductive masking material.5. The method of claim 1 wherein the masked portions are masked with insulative masking material.6. The method of claim 1 wherein the walls are conductive.7. The method of claim 1 wherein the walls are insulative.8. The method of claim 1 wherein the walls are semiconductive.9. The method of claim 1 wherein the walls do not extend into space that is longitudinally between the trench openings.10. The method of claim 1 wherein the walls also extend into space that is longitudinally between the trench openings.11.
The method of claim 10 wherein the walls extend longitudinally into the space from trench opening-to-trench opening between immediately-longitudinally-adjacent of the trench openings.12. The method of claim 1 comprising forming the access transistors to comprise hollow channels.13. The method of claim 1 comprising forming the individual capacitors to comprise a laterally-outer container shape capacitor electrode and which is directly electrically coupled to a capacitor electrode that is shared by multiple capacitors within the array.14. A method of forming a tier of an array of memory cells within an array area, the memory cells individually comprising a capacitor and an elevationally-extending transistor there-above, the method comprising using two, and only two, sacrificial masking steps within the array area of the tier in forming the transistors and the capacitors of the memory cells, each of the two masking steps within the array area of the tier with respect to material that is elevationally inward of masking material removing only dielectric material.15. A method of forming a tier of an array of memory cells within an array area, the memory cells individually comprising a capacitor and an elevationally-extending transistor there-above, the method comprising using two, and only two, sacrificial masking steps within the array area of the tier in forming the transistors and the capacitors of the memory cells, one of the two masking steps within the array area of the tier with respect to material that is elevationally inward of masking material removing only dielectric material, the other of the two masking steps within the array area of the tier with respect to material that is elevationally inward of masking material removing dielectric material and conductive material.16. The method of claim 15 wherein the other is after the one.17.
A method of forming an array of capacitors and access transistors there-above, comprising:
forming access transistor trenches partially into insulative material, the trenches individually comprising longitudinally-spaced masked portions and longitudinally-spaced openings in the trenches longitudinally between the masked portions;
after forming the trench openings, forming encircling walls against peripheral sides of the individual trench openings;
removing at least some of the insulative material that is under the trench openings through bases of the trench openings radially inward of the encircling walls to form individual capacitor openings in the insulative material that is lower than the walls;
forming individual capacitors in the individual capacitor openings; and
forming a line of access transistors in the individual trenches, the line of access transistors electrically coupling to the individual capacitors that are along that line.
18. The method of claim 17 comprising forming the encircling walls to be conductive.
19. The method of claim 17 comprising forming the encircling walls to be insulative.
20. The method of claim 17 comprising forming the encircling walls to be semiconductive.
21.
A method of forming an array of capacitors and access transistors there-above, comprising:
forming access transistor trenches partially into insulative material, the trenches individually comprising longitudinally-spaced masking material and longitudinally-spaced openings in the trenches longitudinally between the masking material;
after forming the trench openings, forming sacrificial encircling walls against peripheral sides of the individual trench openings to form individual mask openings within the individual trench openings;
removing at least some of the insulative material that is under the mask openings through bases of the mask openings radially inward of the encircling walls to form individual capacitor openings in the insulative material that is lower than the walls;
forming individual capacitors in the individual capacitor openings;
after forming the capacitors, plugging the mask openings with sacrificial material;
removing the sacrificial encircling walls to form longitudinally-spaced sacrificial pillars comprising the sacrificial material within the trenches;
forming conductive material in and along the trenches about the sacrificial material pillars to form an access line in the individual trenches;
removing the sacrificial pillars to form channel openings in the individual access lines in the trenches; and
forming gate insulator and channel material in the channel openings; the access line, the gate insulator, and the channel material being formed to comprise a line of access transistors in the individual trenches, the line of access transistors electrically coupling to the individual capacitors that are along that line of access transistors.
22. The method of claim 21 comprising removing the masking material from the trenches such that the longitudinally-spaced sacrificial pillars have no solid material between them longitudinally along the individual trenches.
23. The method of claim 21 wherein the masking material is conductive.
24.
The method of claim 21 wherein the masking material is insulative.
25. The method of claim 21 wherein the masking material is semiconductive.
26. The method of claim 21 comprising forming the access transistors to comprise hollow channels.
27. The method of claim 21 comprising forming the individual capacitors to comprise an upwardly-open container shape capacitor electrode and a laterally-inner electrode that is laterally inward of the upwardly-open container shape capacitor electrode, the line of access transistors directly electrically coupling to the individual of the laterally-inner electrodes along that line of access transistors.
28. The method of claim 27 comprising forming the individual laterally-inner electrodes to be, from side-to-side, entirely solid from top-to-bottom in horizontal cross-section.
29. A method of forming an array of capacitors and access transistors there-above, comprising:
forming access transistor trenches partially into insulative material, the trenches individually comprising longitudinally-spaced masking material and longitudinally-spaced openings in the trenches longitudinally between the masking material;
after forming the trench openings, forming conductive encircling walls against peripheral sides of the individual trench openings to form individual channel openings within the individual trench openings;
removing at least some of the insulative material that is under the channel openings through bases of the channel openings radially inward of the encircling walls to form individual capacitor openings in the insulative material that is lower than the walls;
forming individual capacitors in the individual capacitor openings; and
forming gate insulator and channel material in the individual channel openings, the conductive encircling walls comprising an access line in the individual trenches; the access line, the gate insulator, and the channel material being formed to comprise a line of access transistors in the individual trenches, the
line of access transistors electrically coupling to the individual capacitors that are along that line of access transistors.
30. The method of claim 29 wherein the masking material is conductive and at least some of which remains to comprise the access line.
31. The method of claim 29 wherein no portion of the masking material remains to comprise the access line.
32. The method of claim 31 comprising removing all of the masking material from the individual trenches.
33. A method of forming an array of capacitors and access transistors there-above, comprising:
forming access transistor trenches partially into insulative material, the trenches individually comprising longitudinally-spaced conductive masking material and longitudinally-spaced openings in the trenches longitudinally between the conductive masking material;
after forming the trench openings, forming encircling walls against peripheral sides of the individual trench openings to form individual mask openings within the individual trench openings;
removing at least some of the insulative material that is under the mask openings through bases of the mask openings radially inward of the encircling walls to form individual capacitor openings in the insulative material that is lower than the walls;
forming individual capacitors in the individual capacitor openings; and
forming a line of access transistors in the individual trenches, the line of access transistors electrically coupling to the individual capacitors that are along that line, the conductive masking material comprising an access line of the line of the access transistors in the individual trenches.
34. The method of claim 33 comprising forming the encircling walls to be conductive and comprise the access line.
35. The method of claim 33 comprising forming the encircling walls to be insulative.
36.
The method of claim 35 comprising removing at least some of the encircling walls and replacing the at least some of the encircling walls with conductive material that comprises the access line.
37. The method of claim 36 comprising removing all of the encircling walls and replacing the encircling walls with the conductive material.
38. The method of claim 33 comprising forming the encircling walls to be semiconductive.
39. A method of forming an array of capacitors and access transistors there-above, comprising:
forming access transistor trenches partially into insulative material;
forming a pair of access line walls in individual of the trenches, the access line walls extending longitudinally in and along the individual trenches against laterally-opposing sides of the trenches;
forming longitudinally-spaced masked portions and longitudinally-spaced channel openings in the trenches longitudinally between the masked portions;
removing at least some of the insulative material that is under the channel openings through bases of the channel openings between the walls and the masked portions to form individual capacitor openings in the insulative material that is lower than the walls;
forming individual capacitors in the individual capacitor openings; and
forming gate insulator and channel material in the channel openings; the pair of access line walls, the gate insulator, and the channel material being formed to comprise a line of access transistors in the individual trenches, the line of access transistors electrically coupling to the individual capacitors that are along that line of access transistors.
40. The method of claim 39 wherein the masked portions are masked with conductive masking material.
41. The method of claim 40 wherein the conductive masking material is directly against the pair of access line walls and remains in a finished circuitry construction.
42. The method of claim 39 wherein the masked portions are masked with insulative masking material.
43.
The method of claim 42 wherein the insulative masking material is directly against the pair of access line walls and remains in a finished circuitry construction.
44. The method of claim 39 comprising forming peripheral sides of the channel openings to be of the same composition circumferentially from top-to-bottom.
45. The method of claim 39 comprising forming the peripheral sides of the channel openings to be of different composition along different circumferentially-extending segments, and of the same composition from top-to-bottom within each of the circumferentially-extending segments.
46. The method of claim 45 comprising forming the peripheral sides of the channel openings to comprise only two different compositions.
47. The method of claim 46 comprising at least two pairs of laterally-opposing circumferentially-extending segments, individual of the laterally-opposing circumferentially-extending segments in each pair being of the same composition.
48. The method of claim 46 comprising forming the circumferentially-extending segments to alternate in the two different compositions circumferentially about the individual channel openings.
49. A memory cell comprising:
a capacitor comprising an upwardly-open container shape electrode; and
a hollow channel transistor above and directly electrically coupled to the capacitor.
50. The memory cell of claim 49 wherein the capacitor comprises a laterally-inner electrode that is laterally inward of the upwardly-open container shape electrode, the laterally-inner electrode being, from side-to-side, entirely solid from top-to-bottom in horizontal cross-section.
51. The memory cell of claim 49 wherein the capacitor comprises a laterally-inner electrode that is laterally inward of the upwardly-open container shape electrode, the hollow transistor being directly electrically coupled with the laterally-inner electrode.
52.
The memory cell of claim 51 wherein the capacitor comprises a laterally-inner electrode that is laterally inward of the upwardly-open container shape electrode, the laterally-inner electrode being, from side-to-side, entirely solid from top-to-bottom in horizontal cross-section.
53. An array of the memory cells of claim 49.
54. The memory cell of claim 49 wherein the memory cell is 1T-1C.
55. The memory cell of claim 49 wherein the memory cell is not 1T-1C.
56. The memory cell of claim 55 wherein the memory cell is 2T-1C.
57. An array of memory cells individually comprising a capacitor and a transistor, the array comprising rows of access lines and columns of digit lines, comprising:
individual of the rows comprising an access line extending operatively adjacent channels of individual transistors of individual memory cells within the array and interconnecting the transistors in that row;
individual of the columns comprising a digit line above the access lines, the digit line being electrically coupled to one source/drain region of the individual transistors and interconnecting the transistors in that column;
capacitors of the individual memory cells within the array individually comprising:
a laterally-outer electrode having an upwardly-open container shape;
a laterally-inner electrode;
a capacitor insulator between the laterally-outer electrode and the laterally-inner electrode;
the laterally-inner electrode being electrically coupled to the other source/drain region of the individual transistors; and
the laterally-outer electrode having the upwardly-open container shape being directly against a lower conductor that comprises a shared capacitor electrode of multiple of the capacitors within the array.
58. The array of claim 57 wherein individual of the channels are hollow channels.
59. The array of claim 57 wherein the laterally-outer electrode having the upwardly-open container shape has a bottom that is directly against the lower conductor.
60.
The array of claim 59 wherein the lower conductor has an uppermost surface within the array, the bottom of the laterally-outer electrode being directly against the uppermost surface of the lower conductor.
61. The array of claim 57 wherein the lower conductor comprises a conductive plate under all of the array.
62. The array of claim 57 wherein the lower conductor comprises a series of laterally-spaced conductive lines that are directly electrically coupled together.
63. The array of claim 62 wherein the conductive lines are angled relative the access lines.
64. The array of claim 62 wherein the conductive lines are parallel to the access lines.
65. The array of claim 62 wherein the conductive lines are angled relative the digit lines.
66. The array of claim 62 wherein the conductive lines are parallel to the digit lines.
67. The array of claim 57 wherein,
the digit line is directly electrically coupled to the one source/drain region of the individual transistors; and
the laterally-inner electrode is directly electrically coupled to the other source/drain region of the individual transistors.
68. The array of claim 57 wherein the memory cells are individually 1T-1C.
69. The array of claim 57 wherein the memory cells are individually 2T-1C.
70. A 2T-1C memory cell comprising:
a capacitor comprising a laterally-outer electrode having an upwardly-open container shape;
a laterally-inner electrode;
a capacitor insulator between the laterally-outer electrode and the laterally-inner electrode;
a lower elevationally-extending transistor having an upper source/drain region thereof electrically coupled to the laterally-outer electrode having the upwardly-open container shape; and
an upper elevationally-extending transistor having a lower source/drain region thereof electrically coupled to the laterally-inner electrode.
71. The 2T-1C memory cell of claim 70 wherein the lower transistor is a hollow channel transistor.
72.
The 2T-1C memory cell of claim 70 wherein the upper transistor is a hollow channel transistor.
73. The 2T-1C memory cell of claim 70 wherein the lower transistor is a hollow channel transistor and the upper transistor is a hollow channel transistor.
74. The 2T-1C memory cell of claim 70 wherein the upper source/drain region of the lower elevationally-extending transistor is directly electrically coupled to the laterally-outer electrode having the upwardly-open container shape.
75. The 2T-1C memory cell of claim 70 wherein the lower source/drain region of the upper elevationally-extending transistor is directly electrically coupled to the laterally-inner electrode.
76. The 2T-1C memory cell of claim 70 wherein,
the upper source/drain region of the lower elevationally-extending transistor is directly electrically coupled to the laterally-outer electrode having the upwardly-open container shape; and
the lower source/drain region of the upper elevationally-extending transistor is directly electrically coupled to the laterally-inner electrode.
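The coupling topology recited in claim 70 can be sketched as a small structural model. This is a hypothetical illustration, not part of the specification: the `Transistor` fields and the `digit_line_1`/`digit_line_2` node names for the transistors' remaining source/drain terminals are assumptions added for the sketch.

```python
# Hypothetical sketch (assumed model, not from the specification) of the
# 2T-1C topology of claim 70: the lower transistor's upper source/drain
# couples to the laterally-outer (container-shape) electrode, and the
# upper transistor's lower source/drain couples to the laterally-inner
# electrode of the one shared capacitor.
from dataclasses import dataclass


@dataclass
class Transistor:
    upper_source_drain: str  # node the upper S/D region couples to
    lower_source_drain: str  # node the lower S/D region couples to


@dataclass
class TwoTOneCCell:
    outer_electrode: str = "outer"  # upwardly-open container shape
    inner_electrode: str = "inner"  # laterally-inner electrode

    def __post_init__(self):
        # lower transistor: upper S/D to the outer electrode;
        # its other terminal node name is an assumption for illustration
        self.lower = Transistor(upper_source_drain=self.outer_electrode,
                                lower_source_drain="digit_line_1")
        # upper transistor: lower S/D to the inner electrode
        self.upper = Transistor(upper_source_drain="digit_line_2",
                                lower_source_drain=self.inner_electrode)


cell = TwoTOneCCell()
assert cell.lower.upper_source_drain == cell.outer_electrode
assert cell.upper.lower_source_drain == cell.inner_electrode
```

The two assertions simply restate the two "electrically coupled" limitations of claim 70.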
DESCRIPTION
A MEMORY CELL, AN ARRAY OF MEMORY CELLS INDIVIDUALLY COMPRISING A CAPACITOR AND A TRANSISTOR WITH THE ARRAY COMPRISING ROWS OF ACCESS LINES AND COLUMNS OF DIGIT LINES, A 2T-1C MEMORY CELL, AND METHODS OF FORMING AN ARRAY OF CAPACITORS AND ACCESS TRANSISTORS THERE-ABOVE

TECHNICAL FIELD
Embodiments disclosed herein pertain to memory cells, to an array of memory cells individually comprising a capacitor and a transistor with the array comprising rows of access lines and columns of digit lines, to 2T-1C memory cells, and to methods of forming an array of capacitors and access transistors there-above.

BACKGROUND
Memory is one type of integrated circuitry, and is used in computer systems for storing data. Memory may be fabricated in one or more arrays of individual memory cells. Memory cells may be written to, or read from, using digit lines (which may also be referred to as bit lines, data lines, sense lines, or data/sense lines) and access lines (which may also be referred to as word lines). The digit lines may conductively interconnect memory cells along columns of the array, and the access lines may conductively interconnect memory cells along rows of the array. Each memory cell may be uniquely addressed through the combination of a digit line and an access line.

Memory cells may be volatile or non-volatile. Non-volatile memory cells can store data for extended periods of time including when the computer is turned off. Volatile memory dissipates and therefore requires being refreshed/rewritten, in many instances multiple times per second. Regardless, memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a "0" or a "1". In other systems, at least some individual memory cells may be configured to store more than two levels or states of information. A capacitor is one type of electronic component that may be used in a memory cell.
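The row/column addressing described in the Background can be sketched minimally as follows. The array dimensions and the dictionary-based cell store are illustrative assumptions, not part of the specification; the sketch only shows that each cell is uniquely selected by one access line (row) together with one digit line (column).

```python
# Illustrative sketch (assumed model): each memory cell is uniquely
# addressed by the combination of one access line (row) and one digit
# line (column), and holds a binary "0" or "1" state.
n_rows, n_cols = 4, 4  # assumed array dimensions for illustration
cells = {(r, c): 0 for r in range(n_rows) for c in range(n_cols)}


def write(row, col, state):
    # binary system: states are "0" or "1", as described above
    assert state in (0, 1)
    cells[(row, col)] = state


def read(row, col):
    return cells[(row, col)]


write(2, 3, 1)
assert read(2, 3) == 1   # only the addressed cell changed
assert read(0, 0) == 0
```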
A capacitor has two electrical conductors separated by electrically insulating material. Energy as an electric field may be electrostatically stored within such material. Depending on composition of the insulator material, that stored field will be volatile or non-volatile. For example, a capacitor insulator material including only SiO2 will be volatile. One type of non-volatile capacitor is a ferroelectric capacitor which has ferroelectric material as at least part of the insulating material. Ferroelectric materials are characterized by having two stable polarized states and thereby can comprise programmable material of a capacitor and/or memory cell. The polarization state of the ferroelectric material can be changed by application of suitable programming voltages, and remains after removal of the programming voltage (at least for a time). Each polarization state has a different charge-stored capacitance from the other, and which ideally can be used to write (i.e., store) and read a memory state without reversing the polarization state until such is desired to be reversed. Less desirable, in some memory having ferroelectric capacitors the act of reading the memory state can reverse the polarization. Accordingly, upon determining the polarization state, a re-write of the memory cell is conducted to put the memory cell into the pre-read state immediately after its determination. Regardless, a memory cell incorporating a ferroelectric capacitor ideally is non-volatile due to the bi-stable characteristics of the ferroelectric material that forms a part of the capacitor. Other programmable materials may be used as a capacitor insulator to render capacitors non-volatile.

BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a perspective view of a substrate construction in process in accordance with an embodiment of the invention.
Fig. 2 is a view of the Fig. 1 construction at a processing step subsequent to that shown by Fig. 1.
Fig. 3 is a cross-sectional view taken through line 3-3 in Fig. 2.
Fig. 4 is a view of the Fig. 2 construction at a processing step subsequent to that shown by Fig. 2.
Fig. 5 is a cross-sectional view taken through line 5-5 in Fig. 4.
Fig. 6 is a view of the Fig. 4 construction at a processing step subsequent to that shown by Fig. 4.
Fig. 7 is a view of the Fig. 6 construction at a processing step subsequent to that shown by Fig. 6.
Fig. 8 is a cross-sectional view taken through line 8-8 in Fig. 7.
Fig. 9 is a view of the Fig. 7 construction at a processing step subsequent to that shown by Fig. 7.
Fig. 10 is a view of the Fig. 9 construction at a processing step subsequent to that shown by Fig. 9.
Fig. 11 is a view of the Fig. 10 construction at a processing step subsequent to that shown by Fig. 10.
Fig. 12 is a view of the Fig. 11 construction at a processing step subsequent to that shown by Fig. 11.
Fig. 13 is a view of the Fig. 12 construction at a processing step subsequent to that shown by Fig. 12.
Fig. 14 is a view of the Fig. 13 construction at a processing step subsequent to that shown by Fig. 13.
Fig. 15 is a view of the Fig. 14 construction at a processing step subsequent to that shown by Fig. 14.
Fig. 16 is a view of the Fig. 15 construction at a processing step subsequent to that shown by Fig. 15.
Fig. 17 is a view of the Fig. 16 construction at a processing step subsequent to that shown by Fig. 16.
Fig. 18 is a view of the Fig. 17 construction at a processing step subsequent to that shown by Fig. 17.
Fig. 19 is a view of the Fig. 18 construction at a processing step subsequent to that shown by Fig. 18.
Fig. 20 is a view of the Fig. 19 construction at a processing step subsequent to that shown by Fig. 19.
Fig. 21 is a view of the Fig. 20 construction at a processing step subsequent to that shown by Fig. 20.
Fig. 22 is a perspective view of a substrate construction in process in accordance with an embodiment of the invention.
Fig. 23 is a view of the Fig. 22 construction at a processing step subsequent to that shown by Fig. 22.
Fig. 24 is a view of the Fig. 23 construction at a processing step subsequent to that shown by Fig. 23.
Fig. 25 is a perspective view of a substrate construction in process in accordance with an embodiment of the invention.
Fig. 26 is a view of the Fig. 25 construction at a processing step subsequent to that shown by Fig. 25.
Fig. 27 is a view of the Fig. 26 construction at a processing step subsequent to that shown by Fig. 26.
Fig. 28 is a perspective view of a substrate construction in process in accordance with an embodiment of the invention.
Fig. 29 is a view of the Fig. 28 construction at a processing step subsequent to that shown by Fig. 28.
Fig. 30 is a perspective view of a substrate construction in process in accordance with an embodiment of the invention.
Fig. 31 is a view of the Fig. 30 construction at a processing step subsequent to that shown by Fig. 30.
Fig. 32 is a view of the Fig. 31 construction at a processing step subsequent to that shown by Fig. 31.
Fig. 33 is a view of the Fig. 32 construction at a processing step subsequent to that shown by Fig. 32.
Fig. 34 is a cross-sectional view taken through line 34-34 in Fig. 33.
Fig. 35 is a view of the Fig. 33 construction at a processing step subsequent to that shown by Fig. 33.
Fig. 36 is a view of the Fig. 35 construction at a processing step subsequent to that shown by Fig. 35.
Fig. 37 is a perspective view of a substrate construction in accordance with an embodiment of the invention.
Fig. 38 is a schematic of a two transistor/one capacitor (2T/1C) memory cell in accordance with an embodiment of the invention.
Fig. 39 is a diagrammatic perspective view of a 2T/1C construction in accordance with an embodiment of the invention.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
Embodiments of the invention encompass methods of forming an array of capacitors and access transistors there-above and such arrays independent of method of manufacture.
Embodiments of the invention also encompass methods of forming a tier of an array of memory cells within an array area, with the memory cells individually comprising a capacitor and an elevationally-extending transistor there-above. Embodiments of the invention also encompass memory cells independent of method of manufacture. Further, embodiments of the invention also encompass an array of memory cells individually comprising a capacitor and a transistor independent of method of manufacture. Example embodiments of methods of forming an array of capacitors and access transistors there-above are initially described with reference to Figs. 1-21.

Referring to Fig. 1, such depicts a portion of a substrate fragment or construction 10 comprising a base substrate 12 having an array or array area 14 within which an array of memory cells individually comprising a transistor and a capacitor will be fabricated. An area (not shown) is peripheral to array 14 and may be fabricated to include circuit components (i.e., circuitry). Individual memory cells may be fabricated within array 14 and array 14 may comprise rows of access lines and columns of digit lines. Use of "rows" and "columns" herein is with respect to a series of access lines and a series of digit lines, respectively, and longitudinally along which individual memory cells have been or will be formed within array 14. The rows may be straight and/or curved and/or parallel and/or non-parallel relative one another, as may be the columns. Further, the rows and columns may intersect relative one another at 90° or at one or more other angles.
The peripheral area may be considered as starting and array 14 may be considered as stopping where a repeating pattern of memory cells stops (e.g., at peripheral edges of such a repeating pattern) although the rows of access lines and/or the columns of digit lines may and likely will extend into the peripheral area.

Base substrate 12 may include any one or more of conductive/conductor/conducting (i.e., electrically herein), semiconductive, or insulative/insulator/insulating (i.e., electrically herein) materials. Various materials are shown above base substrate 12. Materials may be aside, elevationally inward, or elevationally outward of the depicted Fig. 1 materials. For example, other partially or wholly fabricated components of integrated circuitry may be provided somewhere above, about, or within substrate 12. Control and/or other peripheral circuitry for operating components within a memory array may also be fabricated, and may or may not be wholly or partially within an array or sub-array. Further, multiple sub-arrays may also be fabricated and operated independently, in tandem, or otherwise relative one another. As used in this document, a "sub-array" may also be considered as an array. Regardless, any of the materials, regions, and structures described herein may be homogenous or non-homogenous, and regardless may be continuous or discontinuous over any material which such overlie. Further, unless otherwise stated, each material may be formed using any suitable existing or yet-to-be-developed technique, with atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implanting being examples.

A series of laterally-spaced conductive lines 16 has been formed over base substrate 12, along with dielectric material 18 there-between. In some embodiments, conductive lines 16 may be referred to or individually considered as a lower conductor.
In this document, unless otherwise indicated, "elevational(ly)", "higher", "upper", "lower", "top", "atop", "bottom", "above", "below", "under", "beneath", "up", and "down" are generally with reference to the vertical direction. Further, "vertical" and "horizontal" as used herein are directions that are perpendicular or within 10 degrees of perpendicular relative one another independent of orientation of the substrate in three-dimensional space. "Horizontal" refers to a general direction (i.e., within 10 degrees) along a primary substrate surface and may be relative to which the substrate is processed during fabrication. Also, "extend(ing) elevationally" and "elevationally-extending" in this document encompasses a range from vertical to no more than 45° from vertical. Further, "extend(ing) elevationally", "elevationally-extending", and "vertical(ly)" with respect to a field effect transistor are with reference to orientation of the transistor's channel length along which current flows in operation between two source/drain regions of the transistor that are at two different elevations.

For simplicity and ease of depiction, only two conductive lines 16 are shown although thousands, tens of thousands, etc. would likely be formed within array 14. Further, such lines are shown as being straight-linear although again curved, non-parallel, combination of curved and straight segmented, etc. configurations may be used. A purpose and a circuit configuration of conductive lines 16 are described below.

Example materials for conductive lines 16, and for any conductive material herein, include one or more of elemental metal, a mixture or alloy of two or more elemental metals, conductive metal compounds, and conductively-doped semiconductive materials, with TiN being one specific example for lines 16. Example dielectric material 18 includes silicon nitride and/or doped or undoped silicon dioxide.
An example elevational thickness for lines 16 and dielectric 18 is 200 to 1,000 Angstroms.

In this document, "thickness" by itself (no preceding directional adjective) is defined as the mean straight-line distance through a given material or region perpendicularly from a closest surface of an immediately adjacent material of different composition or of an immediately adjacent region. Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thicknesses. If of variable thickness, thickness refers to average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness due to the thickness being variable. As used herein, "different composition" only requires those portions of two stated materials or regions that may be directly against one another to be chemically and/or physically different, for example if such materials or regions are not homogenous. If the two stated materials or regions are not directly against one another, "different composition" only requires that those portions of the two stated materials or regions that are closest to one another be chemically and/or physically different if such materials or regions are not homogenous. In this document, a material, region, or structure is "directly against" another when there is at least some physical touching contact of the stated materials, regions, or structures relative one another. In contrast, "over", "on", "adjacent", "along", and "against" not preceded by "directly" encompass "directly against" as well as construction where intervening material(s), region(s), or structure(s) result(s) in no physical touching contact of the stated materials, regions, or structures relative one another.

Insulative material 20 has been formed over substrate 12/16/18. In one embodiment, such is shown as comprising three insulative materials 21, 22, and 23.
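The "thickness" convention stated above (for a variable-thickness region, "thickness" means the average, bounded by the region's minimum and maximum) can be illustrated with a trivial numeric sketch; the sampled perpendicular distances are assumed values, not measurements from the specification.

```python
# Minimal sketch of the "thickness" convention: for a region of variable
# thickness, "thickness" refers to the average, which necessarily lies
# between the minimum and maximum thickness of that region.
# The sampled perpendicular straight-line distances (Angstroms) below
# are assumed values for illustration only.
samples = [200, 400, 600, 800]

average_thickness = sum(samples) / len(samples)

assert average_thickness == 500.0
# the average is bounded by the region's minimum and maximum thickness
assert min(samples) <= average_thickness <= max(samples)
```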
In one embodiment, materials 21 and 23 are of the same composition and material 22 is of different composition from that of materials 21 and 23. An example composition for materials 21 and 23 is doped or undoped silicon dioxide, while that for material 22 is silicon nitride. Example thicknesses for insulative materials 21, 22, and 23 are 1,000 Angstroms to 1.5 microns, 100 to 500 Angstroms, and 200 to 1,500 Angstroms, respectively.

Referring to Figs. 2 and 3, trenches 24 have been formed partially into insulative material 20. In one embodiment and as shown, trenches 24 extend through insulative materials 23 and 22 to insulative material 21. Trenches 24 may be formed by any suitable existing or yet-to-be-developed technique, such as photolithography with or without pitch-multiplication. Access transistors will be formed at least partially within trenches 24 and accordingly such trenches may be considered as access transistor trenches 24. For purposes of the continuing discussion, individual access transistor trenches 24 may be considered as comprising laterally-opposing sides 25.

Referring to Figs. 4 and 5, masking material 26 has been deposited and patterned as shown to form trenches 24 to individually comprise longitudinally-spaced masked portions 28 and longitudinally-spaced openings 30 longitudinally between masked portions 28. In one embodiment, masking material 26 is insulative, in one embodiment is conductive, and in one embodiment is semiconductive. Regardless, at least some or none of masking material 26 may remain in the finished circuitry construction. Masking material 26 may be patterned using any technique, for example using photolithography with or without pitch multiplication. Individual trench openings 30 may be considered as comprising peripheral sides 25, 27 and a base 32.

Referring to Fig. 6, material 34 has been formed over masking material 26 and to line and less-than-fill trench openings 30.
Material 34 will be used to form walls within trench openings 30. In one embodiment, material 34 is conductive, in one embodiment is insulative, and in one embodiment is semiconductive. Regardless, ideally material 34 is of different composition from that of material 21. Any suitable conductive, insulative, or semiconductive materials may be used.

Referring to Figs. 7 and 8, material 34 has been removed substantially from being over horizontal surfaces (e.g., by suitable anisotropic etching), thus re-exposing trench opening bases 32. In one embodiment and as shown, such has resulted in formation of walls 35, 36, 37, and 38 within trench openings 30. Walls 35 and 37 extend longitudinally in and along individual trench openings 30 against laterally-opposing sides 25 of trenches 24. In one embodiment, walls 35 and 37 are formed to be of different composition from that of insulative material that is laterally-adjacent (e.g., 22, 23) trenches 24. In one embodiment, walls 35, 37 are conductive, in one embodiment are semiconductive, and in one embodiment are insulative. In one embodiment, walls 35, 37 do not extend into space (e.g., masked portions 28) that is longitudinally between trench openings 30. In one embodiment, such walls may also extend into space (not shown in Figs. 7 and 8) that is longitudinally between trench openings 30, as will be described with respect to additional embodiments below. In one embodiment and as shown, walls 35, 36, 37, and 38 encircle about trench openings 30 and are against (in one embodiment, directly against) peripheral sides 25, 27 of individual trench openings 30. Regardless, in one embodiment, walls 35, 36, 37, and 38 form individual mask openings 40 within individual trench openings 30, which in some embodiments will comprise channel openings, as will be apparent from the continuing discussion. In one embodiment, the depicted Figs. 7 and 8 removing is conducted without any mask being atop the substrate within array 14.

Referring to Fig. 9, at least some of insulative material 20 that is under trench openings 30 has been removed through bases 32 (not shown) of trench openings 30 between walls 35, 37 and masked portions 28 (not designated in Fig. 9) to form individual capacitor openings 42 in insulative material 20 that is lower than walls 35, 37. In one embodiment, such has been conducted radially inward of encircling walls 35, 36, 37, and 38 to form individual capacitor openings 42 in insulative material 20. In one embodiment, insulative material (e.g., 23) that is laterally-adjacent trenches 24, walls 35/36/37/38, and masked portions 28 have been used as a mask during the depicted removal. An example technique for forming capacitor openings 42 includes photolithographic patterning and etch with or without pitch multiplication. An example anisotropic plasma chemistry for etching through silicon dioxide is a combination of C4F6, C4F8, and Ar, while that for etching through silicon nitride is a combination of CH2F2, CF4, and O2. In one embodiment and as shown, capacitor openings 42 have been formed through insulative material 20 to upwardly expose lower conductors 16. For simplicity and clarity, array 14 of construction 10 is only shown as comprising four capacitor openings (only the front two openings being viewable and designated with numerals 42), although again likely hundreds, thousands, millions, etc. would be formed within array 14. Capacitor openings 42 may individually be of any one or more shapes in horizontal cross-section, for example circular, ellipsoidal, 4-sided (e.g., square or rectangular), 6-sided, a combination of straight and curved sides, etc. Capacitor openings 42 are shown as having straight vertical sidewalls, although such may be non-vertical and/or not straight.
An example maximum open dimension for individual capacitor openings 42 is 300 to 600 Angstroms. Individual capacitors are formed in individual capacitor openings 42. An example method of doing so is described with reference to Figs. 10-12.

Referring to Fig. 10, a capacitor electrode 44 has been formed in individual capacitor openings 42. In one embodiment and as shown, such is of an upwardly-open container shape, and in one embodiment is a laterally-outer (e.g., radially outer) electrode of the individual capacitors being formed. In one embodiment and as shown, laterally-outer electrode 44 having the upwardly-open container shape has been formed to have a bottom 45 extending laterally to and between sidewalls of electrode 44. Alternately and by way of example only, electrode 44 may individually comprise an upwardly and downwardly-open (not shown) conductive material cylinder (e.g., little or no bottom 45 extending between sidewalls of electrode 44). An example technique of forming capacitor electrode 44 is deposition of any suitable conductive material (e.g., TiN), followed by filling at least lower portions of the depicted container shapes with a fill material (e.g., photoresist), followed by etching the conductive material of electrode 44 back to be recessed relative to an upper surface of insulative material 21, for example as shown. An example thickness for the material of electrode 44 is 30 to 50 Angstroms. In one embodiment, capacitor electrode 44 is electrically coupled (in one embodiment, directly electrically coupled) to one of individual lines 16.

In this document, regions/materials/components are "electrically coupled" relative one another if in normal operation electric current is capable of continuously flowing from one to the other, and does so predominately by movement of subatomic positive and/or negative charges when such are sufficiently generated.
Another electronic component may be between and electrically coupled to the regions/materials/components. In contrast, when regions/materials/components are referred to as being "directly electrically coupled", no intervening electronic component (e.g., no diode, transistor, resistor, transducer, switch, fuse, etc.) is between the directly electrically coupled regions/materials/components.

Referring to Fig. 11, capacitor insulator 58 has been formed as shown. In one example embodiment, capacitor insulator 58 comprises programmable material such that the capacitors that will be formed are non-volatile and programmable into at least two different magnitude capacitive states (e.g., whereby the programmable material is both sufficiently thick and remains insulative in the different states such that a current sufficient to erase a stored state does not flow there-through at operating voltages). Example such programmable materials include ferroelectric materials, conductive bridging RAM (CBRAM) materials, phase change materials, and resistive RAM (RRAM) materials, with ferroelectrics believed to be ideal. Example ferroelectric materials include ferroelectrics that have one or more of transition metal oxide, zirconium, zirconium oxide, niobium, niobium oxide, hafnium, hafnium oxide, lead zirconium titanate, and barium strontium titanate, and may have dopant therein which comprises one or more of silicon, aluminum, lanthanum, yttrium, erbium, calcium, magnesium, strontium, and a rare-earth element. In one embodiment, capacitor insulator 58 comprises dielectric material such that the capacitors are volatile. For example, such can comprise one or more of non-programmable dielectric materials such as silicon dioxide, silicon nitride, aluminum oxide, high-k dielectrics, etc., whereby no charge is retained in material 58 upon removal or sufficient reduction of voltage/potential from one or both of two capacitor electrodes of the capacitor.
Non-volatile programmable capacitors may have a capacitor insulator that has a suitable combination of programmable material(s) and non-programmable material(s). Regardless, an example thickness for capacitor insulator 58 is 30 to 100 Angstroms.

Referring to Fig. 12, another capacitor electrode 60 has been formed, thus forming individual capacitors 62 in individual capacitor openings 42. In one embodiment and as shown, capacitor 62 comprises a laterally-inner electrode 60 that is laterally-inward of upwardly-open container shape electrode 44, and in one embodiment with laterally-inner electrode 60 being, from side-to-side, entirely solid from top to bottom in horizontal cross-section. Capacitor electrode 60 may be of any suitable conductive composition, and may be formed by deposition to fill remaining volume of capacitor openings 42, followed by etch-back to produce a construction such as shown.

A line of access transistors is ultimately formed in individual trenches 24, with the line of access transistors electrically coupling to individual capacitors that are along that line. Such may be conducted by any existing or yet-to-be-developed manner(s). One such example is next described with reference to Figs. 13-18.

Referring to Fig. 13, sacrificial material 64 has been deposited within mask openings 40 to plug such openings, followed by planarizing construction 10 back at least to an uppermost surface of insulative material 23. Sacrificial material 64 may be any of conductive, insulative, and/or semiconductive.

Referring to Fig. 14 and in one embodiment, sacrificial encircling walls 35, 36, 37, and 38 (not shown) have been removed, and in one embodiment as shown masking material 26 (not shown) has also been removed, thereby forming longitudinally-spaced sacrificial pillars 65 within trenches 24. In one such embodiment and as shown, such comprises a method of removing all of masking material 26 (not shown) after forming trench openings 30 (not designated in Fig. 14). Regardless and in one embodiment as shown, masking material 26 (not shown) has been removed from trenches 24 such that longitudinally-spaced sacrificial pillars 65 have no solid material between them longitudinally along individual trenches 24.

Referring to Fig. 15, conductive material 66 has been formed in and along trenches 24 about sacrificial material pillars 65, thus forming an access line 68 in individual trenches 24.

Referring to Fig. 16, sacrificial material pillars 65 (not shown) have been removed to transform former mask openings 40 to be channel openings 40 in individual access lines 68 in trenches 24.

Referring to Fig. 17, gate insulator 71 (e.g., silicon dioxide, silicon nitride, high-k dielectric, ferroelectric material, etc.) and channel material 72 (e.g., polysilicon) have been formed in channel openings 40. Gate insulator 71 may be deposited to line channel openings 40, followed for example by being subjected to an anisotropic etch (e.g., a reactive ion spacer etch) to remove it from being centrally over bases of channel openings 40. Channel material 72, by way of example, may be variously suitably doped during deposition of semiconductive-capable material whereby in the example depicted embodiment a lowermost region 73 and an uppermost region 74 are suitably conductively doped to function as conductive source/drain regions, having semiconductive channel material 72 there-between. Access line 68, gate insulator 71, channel material 72, and source/drain regions 73, 74 are formed to comprise a line 76 of access transistors 75 in individual trenches 24, with such access transistors of the respective lines electrically coupling (in one embodiment directly electrically coupling) to individual capacitors 62 that are along that line of access transistors 75.
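As a rough, order-of-magnitude aside on capacitors 62 formed above, the inner electrode, insulator, and outer electrode can be approximated as a coaxial (cylindrical) capacitor. The sketch below is not part of the disclosure; every number is an assumption loosely drawn from the example ranges given (opening radius on the order of 100 Angstroms, insulator around 50 Angstroms thick, opening depth on the order of a micron, and a hypothetical relative permittivity of 20):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def coaxial_capacitance(a, b, length, eps_r):
    """Ideal coaxial-capacitor estimate: C = 2*pi*eps0*eps_r*L / ln(b/a),
    with inner radius a and outer radius b in meters."""
    return 2.0 * math.pi * EPS0 * eps_r * length / math.log(b / a)

# Hypothetical geometry, loosely based on the example ranges above:
# inner-electrode radius ~100 Angstroms, ~50 Angstrom insulator,
# ~1 micron deep opening, eps_r ~20 (assumed high-k value).
c = coaxial_capacitance(100e-10, 150e-10, 1.0e-6, 20.0)
print(c)  # on the order of a few femtofarads
```

The result lands in the femtofarad range typical of a DRAM-class storage capacitor, which is only meant as a plausibility check on the stated dimensions, not a claim of the disclosure.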
Those portions of individual access lines 68 that are laterally adjacent gate insulator 71 and channel material 72 of individual transistors 75 effectively form individual access gates of such individual transistors. In one embodiment and as shown, access transistors 75 are formed to comprise hollow channels 72, and thereby are hollow channel transistors. Hollow channels 72 may be plugged with solid insulative material 77 (e.g., silicon dioxide or silicon nitride) as shown.

Embodiments of the invention also encompass methods of forming an array of memory cells individually comprising a capacitor and a transistor, with the array comprising rows of access lines and columns of digit lines, as well as such arrays independent of method of manufacture. By way of example only, such a method and constructions are next described with reference to Figs. 18-21.

Referring to Fig. 18, material 66 of access lines 68 has been recessed back (e.g., by a timed etch) selectively relative to gate insulator 71, material of source/drain regions 74, material 23, and material 77. In this document, a selective etch or removal is an etch or removal where one material is removed relative to another stated material at a rate of at least 2.0:1.

Referring to Fig. 19, an isolation dielectric 78 has been deposited to fill the elevational recesses formed in Fig. 18.

Referring to Fig. 20, dielectric material 78 has been patterned as shown to form trenches there-between over source/drain regions 74 of individual transistors 75.

Referring to Fig. 21, conductive material has been deposited and planarized back as shown to form digit lines 79 that are electrically coupled (in one embodiment directly electrically coupled) to source/drain regions 74 of individual transistors 75, thus forming individual memory cells MC.

Any other attribute(s) or aspect(s) as described herein and/or shown may be used in the embodiments described above with reference to Figs. 1-21.

An example alternate method of forming an array of capacitors and access transistors there-above is next described with reference to Figs. 22-24 with respect to a construction 10b. Like numerals from the above-described embodiments have been used where appropriate, with some differences being indicated with the suffix "b" or with different numerals. The processing shown with respect to Figs. 6-14 shows ultimate removal of all of masking material 34 and resultant walls 35, 36, 37, and 38 from construction 10. Such may not be desirable, particularly where masking material 34 comprises a conductive material. For example, Fig. 22 is intended to show such with respect to a conductive material 34b that is hatched.

Referring to Fig. 23, such shows analogous processing of the Fig. 22 substrate through and to the processing depicted by Fig. 13 of the above-described embodiments with respect to construction 10. Thereby, and as an example, dielectric masking material 26 remains in Fig. 23 within trenches 24.

Referring to Fig. 24, material 26 (not shown) from Fig. 23 has been removed and conductive material 39 has been formed in place thereof and has been planarized back at least to the uppermost surface of material 23. Conductive material 39 may be of the same or different composition as that of material 34b, with same composition being shown by dashed interface lines between materials 34b and 39. Such effectively forms trench openings 40 within which sacrificial material pillars 65 are received as comprising channel openings 40, as channels will be formed therein. Sacrificial pillars 65 would be removed, followed by analogous processing to that described above with respect to at least Figs. 16 and 17 to form transistors 75.
Any other attribute(s) or aspect(s) as described herein and/or shown may be used.

The above-described embodiment with respect to forming construction 10b is but one example embodiment wherein encircling walls 35, 36, 37, and 38 are formed to be conductive, and with such encircling walls comprising individual access line 68 in trenches 24. In one such embodiment and as shown, no portion of masking material 26 remains to comprise an access line 68, and in one embodiment shows removing of all of masking material 26 from individual trenches 24.

An alternate method embodiment is next described with reference to Figs. 25-27 with respect to a construction 10c. Like numerals from the above-described embodiments have been used where appropriate, with some differences being indicated with the suffix "c". Construction 10c in Fig. 25 shows masking material 26c as comprising conductive material by the depicted hatching thereof.

Fig. 26 shows analogous processing of the Fig. 25 substrate through and to the processing depicted by Fig. 13 of the above-described embodiments with respect to construction 10.

Referring to Fig. 27, material 34 (not shown) from Fig. 26 has been removed and conductive material 39 has been formed in place thereof and has been planarized back at least to the uppermost surface of material 23. Conductive material 39 may be of the same or different composition as that of material 26c, with same composition being shown by dashed interface lines between materials 26c and 39. Such effectively forms trench openings 40 within which sacrificial material pillars 65 are received as comprising channel openings 40. Sacrificial pillars 65 would be removed, followed by analogous processing to that described above with respect to at least Figs. 16 and 17 to form transistors 75.
Accordingly, and in one embodiment, conductive masking material 26c may remain as part of the finished circuitry construction and comprise an access line 68 of a line 76 of access transistors 75 in individual trenches 24. Any other attribute(s) or aspect(s) as described herein and/or shown may be used.

The above-described processing relative to constructions 10b and 10c may be combined, for example as described with reference to Figs. 28 and 29 with respect to a construction 10d. Like numerals from the above-described embodiments have been used where appropriate, with some differences being indicated with the suffix "d". Fig. 28 shows each of materials 26c and 34b as being conductive by hatching per the embodiments of 10c and 10b, respectively. Materials 26c and 34b in construction 10d may be of the same or different composition, with different composition being shown by different hatching and solid interface lines between materials 26c and 34b.

Fig. 29 shows subsequent processing analogous to that described above with respect to Figs. 7-13 and whereby access lines 68d have been formed. Openings 40 therein comprise channel openings 40 within which transistor materials can be formed analogously to that described above with respect to Fig. 17. Any other attribute(s) or aspect(s) as described herein and/or shown may be used.

In one embodiment, the masking material is conductive and at least some of it remains to comprise the access line. In one embodiment, no portion of the masking material remains to comprise the access line. In one embodiment, all of the masking material is removed from the individual trenches. In one embodiment, the encircling walls are formed to be conductive and comprise the access line. In one embodiment, at least some of the encircling walls are removed and the at least some of the encircling walls are replaced with conductive material that comprises the access line.
In one such embodiment, all of the encircling walls are removed and replaced with the conductive material.

Another example embodiment of forming an array of capacitors and access transistors there-above is next described with reference to Figs. 30-36 with respect to a construction 10e. Like numerals from the above-described embodiments have been used where appropriate, with some differences being indicated with the suffix "e" or with different numerals.

Referring to Fig. 30, trenches 24e have been formed within material 23 to material 22.

Referring to Fig. 31, a pair of access line walls 35e, 37e has been formed in individual trenches 24e, with such walls extending longitudinally in and along the individual trenches against laterally-opposing sides 25 of trenches 24e. Such may be formed, by way of example, by deposition of conductive material followed by anisotropic etching thereof to produce a construction as shown. Such may be conducted without any masking material being within array 14.

Referring to Fig. 32, anisotropic etching has been conducted through material 22 using material 23 and material of walls 35e, 37e as a mask.

Referring to Figs. 33 and 34, masking material 26 has been deposited and patterned as shown analogously to that described above with respect to Figs. 4 and 5. Such is but one example method of forming longitudinally-spaced masked portions 28 and longitudinally-spaced channel openings 40/mask openings 30 in trenches 24e longitudinally between masked portions 28. Again, masking material 26 may be any of insulative, semiconductive, and conductive.

Referring to Fig. 35, lines 35e, 37e, material 23, and material 26 have been used as a mask while etching into underlying insulative material 21 to form capacitor openings 42, followed by formation of capacitors 62.

Referring to Fig. 36, subsequent processing has been conducted analogously to that described above whereby individual pairs of access line walls 35e and 37e comprise an access line 68e of resultant transistors 75.

In one embodiment, the pairs of walls extend into space (e.g., 28) that is longitudinally between the trench openings, and in one embodiment extend from trench opening to trench opening between immediately-longitudinally-adjacent of the trench openings. In one embodiment, masking material 26 is conductive and is directly against the pair of access line walls and remains in a finished circuitry construction. In one embodiment, masking material 26 is insulative or semiconductive and is directly against the pair of access line walls and remains in a finished circuitry construction.

In one embodiment, peripheral sides of the channel openings are formed to be of the same composition circumferentially from top-to-bottom (e.g., Fig. 33 walls 35e, 37e are of the same composition, and masking material 26 is conductive and of the same composition as walls 35e, 37e). In one embodiment, peripheral sides of the channel openings are formed to be of different composition along different circumferentially-extending segments, and of the same composition from top-to-bottom within each of the circumferentially-extending segments (e.g., masking material 26 and walls 35e, 37e in trenches 24e each being a different circumferentially-extending segment of peripheral sides of the channel openings, and at least one of such being of different composition from another). In one embodiment, peripheral sides of the channel openings are formed to comprise only two different compositions (e.g., walls 35e and 37e being of the same composition, and masking material 26 being of different composition to that of walls 35e, 37e).
In one embodiment, peripheral sides of the channel openings are formed to comprise at least two pairs of laterally-opposing circumferentially-extending segments, with individual of the laterally-opposing circumferentially-extending segments in each pair being of the same composition (e.g., walls 35e, 37e being of the same composition and one pair, masking material 26 on opposing sides being another pair). In one embodiment, the circumferentially-extending segments are formed to alternate in the two different compositions circumferentially about the individual channel openings (e.g., walls 35e, 37e being of the same composition and on opposing sides, circumferentially between sides formed by masking material 26).

Any other attribute(s) or aspect(s) as described herein and/or shown may be used with respect to the embodiment of Figs. 30-36.

An embodiment of the invention comprises a memory cell independent of method of manufacture. Such a memory cell comprises a capacitor (e.g., 62) comprising an upwardly-open container shape electrode (e.g., 44). The memory cell also comprises a hollow channel transistor (e.g., 75) above and directly electrically coupled to the capacitor. In one embodiment, the capacitor comprises a laterally-inner electrode (e.g., 60) that is laterally inward of the upwardly-open container shape electrode, with the hollow channel transistor being directly electrically coupled to the laterally-inner electrode. An embodiment of the invention also encompasses an array of such memory cells. Any other attribute(s) or aspect(s) as described herein and/or shown may be used.

In one embodiment, memory cells of an array individually comprise a capacitor and a transistor, with the array comprising rows of access lines and columns of digit lines. One such embodiment is described with reference to Fig. 21.
Such shows individual rows 80 that comprise an access line 68 extending operatively adjacent channels 72 of individual transistors 75 of individual memory cells MC within array 14 and which interconnects transistors 75 in that row. Such also shows columns 81 that individually comprise a digit line 79 above access lines 68, with digit line 79 being electrically coupled to one source/drain region (e.g., 74, and in one embodiment, directly electrically coupled thereto) of individual transistors 75 and which interconnects transistors 75 in that column 81.

Capacitors 62 of individual memory cells MC within array 14 individually comprise a laterally-outer electrode (e.g., 44) having an upwardly-open container shape. Capacitors 62 also comprise a laterally-inner electrode (e.g., 60). A capacitor insulator 58 is between laterally-outer electrode 44 and laterally-inner electrode 60. Laterally-inner electrode 60 is electrically coupled (in one embodiment, directly electrically coupled) to the other source/drain region (e.g., 73) of individual transistors 75. Laterally-outer electrode 44 having the upwardly-open container shape is directly against a lower conductor (e.g., 16) that comprises a shared capacitor electrode of multiple capacitors 62 within array 14. In one embodiment and as shown, laterally-outer electrode 44 has a bottom 45 that is directly against lower conductor 16. Any other attribute(s) or aspect(s) as described herein and/or shown may be used.

In one embodiment, lower conductor 16 comprises a series of laterally-spaced conductive lines that are directly electrically coupled together, for example as is shown schematically by a schematic interconnect line 82. Such interconnection may physically occur within and/or outwardly of array area 14. In one embodiment, the conductive lines are angled relative to the access lines. In one embodiment, the conductive lines are parallel to the access lines.
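The row/column organization just described, with an access (word) line selecting a row of transistors and a digit (bit) line reading or writing a column, can be sketched abstractly. The class below is purely illustrative and not part of the disclosure; it only models the addressing logic, not any electrical behavior:

```python
# Illustrative sketch (not from the patent): a memory array addressed
# by rows of access (word) lines and columns of digit (bit) lines.
# One cell is selected by activating exactly one row and driving or
# sensing one column.

class MemoryArray:
    def __init__(self, rows, cols):
        self.state = [[0] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # Activating access line `row` turns on that row's transistors;
        # digit line `col` then sets the addressed capacitor's state.
        self.state[row][col] = bit

    def read(self, row, col):
        # With access line `row` active, digit line `col` senses the
        # charge stored on the addressed capacitor.
        return self.state[row][col]

arr = MemoryArray(4, 4)
arr.write(2, 3, 1)
print(arr.read(2, 3))  # 1
```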
In one embodiment, the conductive lines are angled relative to the digit lines. In one embodiment, the conductive lines are parallel to the digit lines.

An alternate example construction 10a is shown in Fig. 37. Like numerals from the above-described embodiments have been used where appropriate, with some differences being indicated with the suffix "a" or with different numerals. Construction 10a comprises the lower conductor in the form of a conductive plate 84 which in one embodiment is under all of array 14, forming and thereby directly electrically coupling all capacitor electrodes 44 together within array 14. Any other attribute(s) or aspect(s) as described herein and/or shown may be used.

In one embodiment, individual of the channels are hollow channels. In one embodiment, the laterally-outer electrode having the upwardly-open container shape has a bottom that is directly against a lower conductor. In one such embodiment, the lower conductor has an uppermost surface within the array, with the bottom of the laterally-outer electrode being directly against the uppermost surface of the lower conductor. In one embodiment, the digit line is directly electrically coupled to the one source/drain region of the individual transistors and the laterally-inner electrode is directly electrically coupled to the other source/drain region of the individual transistors.

In one embodiment, memory cells MC are 1T-1C memory cells, although any other architecture may be employed. 1T-1C memory cells are individually characterized by having only one transistor and only one capacitor and no other/additional operable electronic component (e.g., no other select device, etc.), yet may also include conductive material interconnecting the transistor and capacitor together and the individual memory cell to other components outside of the individual memory cells.

An embodiment of the invention comprises a 2T-1C memory cell, and in one embodiment an array of such memory cells. Referring to Fig. 38, an example 2T-1C memory cell configuration 2 includes two transistors and one capacitor. The two transistors are labeled as T1 and T2, and the capacitor is labeled as CAP. A source/drain region of T1 connects with a first node of the capacitor (CAP), and the other source/drain region of T1 connects with a first comparative bit line (BL-1). A gate of T1 connects with a word line (WL). A source/drain region of T2 connects with a second node of the capacitor (CAP), and the other source/drain region of T2 connects with a second comparative bit line (BL-2). A gate of T2 connects with the word line (WL). The comparative bit lines BL-1 and BL-2 extend to circuitry 4 which compares electrical properties (e.g., voltage) of the two to ascertain a memory state of memory cell 2. An advantage of the 2T-1C memory cell is that a memory state may be ascertained by comparing the electrical properties of the two comparative bit lines BL-1 and BL-2 to one another, and accordingly a reference bit line associated with prior art memory (for instance, 1T-1C memory) may be omitted. The 2T-1C configuration of Fig. 38 may be used in DRAM (dynamic random access memory) and/or other types of memory.

An alternate embodiment construction to that of Fig. 21 that may comprise 2T-1C architecture like that shown in Fig. 38 is shown in Fig. 39.
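The differential read described for the 2T-1C cell, where circuitry 4 compares BL-1 against BL-2 rather than against a separate reference bit line, can be sketched as follows. This is an illustrative abstraction only; the voltages are hypothetical and the function is not from the disclosure:

```python
# Illustrative sketch (not from the patent): differential sensing of a
# 2T-1C cell. The stored state is read by comparing the two comparative
# bit lines BL-1 and BL-2 to each other, so no separate reference bit
# line is needed. Voltage values are hypothetical.

def sense_2t1c(v_bl1, v_bl2):
    """Return the stored bit inferred from the sign of the
    differential voltage between the two comparative bit lines."""
    return 1 if v_bl1 > v_bl2 else 0

# Hypothetical read-out: the capacitor perturbs the two bit lines in
# opposite directions depending on its stored charge/polarization.
print(sense_2t1c(0.55, 0.45))  # 1
print(sense_2t1c(0.45, 0.55))  # 0
```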
Like numerals from the above-described embodiments have been used where appropriate, with some differences being indicated with the suffix "f". Construction 10f comprises individual memory cells MCf of 2T-1C architecture and which may be volatile or non-volatile depending on composition of the capacitor insulator. Memory cells MCf individually comprise a capacitor 62 comprising a laterally-outer electrode 44 having an upwardly-open container shape. Capacitor 62 comprises a laterally-inner electrode 60 and a capacitor insulator 58 between laterally-outer electrode 44 and laterally-inner electrode 60. Memory cell MCf comprises an upper elevationally-extending transistor 75 that has a lower source/drain region 73 thereof electrically coupled (in one embodiment directly electrically coupled) to laterally-inner electrode 60. In one embodiment, the upper transistor is a hollow channel transistor. Memory cell MCf comprises a lower elevationally-extending transistor 75L that has an upper source/drain region 74L thereof electrically coupled (in one embodiment directly electrically coupled) to laterally-outer electrode 44 having the upwardly-open container shape. In one embodiment, the lower transistor is a hollow channel transistor. Lower transistor 75L may be fabricated using any existing or yet-to-be-developed method, including that disclosed herein with respect to fabrication of transistor 75. Materials of transistor 75L, including dielectric material there-about, are designated with the suffix "L" and may be the same as those described above for transistors 75 without the suffix "L". Access lines 68 and 68L may be electrically coupled together in accordance with the Fig. 38 schematic. A line 79 and a line 16 may comprise comparative bit lines BL-1 and BL-2 and extend to circuitry 4.
Insulative material 20f is shown comprising an insulator 19 separating access lines 68L from lines 16.Any other attribute(s) or aspect(s) as described herein and/or shown may be used with respect to the Fig. 39 embodiment.The above-described processing and figures show fabrication of, for example, one tier (which is generic to "deck" and "level") of an array of memory cells. Additional such tiers may be provided or fabricated above or below the one tier depicted in the figures. Alternately, only a single such tier may be fabricated.Regardless, a method embodiment of the invention comprises forming a tier of an array of memory cells within an array area. The memory cells will individually comprise a capacitor and an elevationally-extending transistor there-above. The method comprises using two, and only two, sacrificial masking steps within the array area of the tier in forming the transistors and the capacitors of the memory cells. In the context of this document, a "sacrificial masking step" is a patterning technique using masking material that is patterned over substrate material combined with subsequent removal (e.g. , by etching) of substrate material that is uncovered by the masking material, and with at least an uppermost portion of the masking material being sacrificial and thereby ultimately being removed from being over the substrate. The masking material may include a lowest portion that remains as part of the finished circuitry construction. Alternately, all of the sacrificial masking material may be completely removed. In accordance with on embodiment, each of the two masking steps within the array area of the tier with respect to material that is elevationally inward of masking material removes only dielectric material. For example, and by way of example only, an above such processing described with respect to Figs. 1 -21 is such a method where materials 21 , 22, 23 , and 26 are dielectric. Specifically, Figs. 1 -3 is one masking step, and Figs. 
4-9 is another masking step. In the above-described example embodiments and in accordance with the one embodiment of this paragraph, there are no other sacrificial masking steps within array area 14 of the depicted tier in forming the individual memory cells. Such may be facilitated by forming circuit components in a self-aligned manner. In this document, "self-aligned" means a technique whereby at least a lateral surface of a structure is defined by deposition of material against a sidewall of a previously patterned structure. Any other attribute(s) or aspect(s) as described herein and/or shown may be used.

An embodiment of the invention comprises a method of forming a tier of an array of memory cells within an array area, where the memory cells will individually comprise a capacitor and an elevationally-extending transistor there-above. The method comprises using two, and only two, sacrificial masking steps within the array area of the tier in forming the transistors and the capacitors of the memory cells. One of the two masking steps within the array area of the tier, with respect to material that is elevationally inward of masking material, removes only dielectric material. The other of the two masking steps within the array area of the tier, with respect to material that is elevationally inward of masking material, removes dielectric material and conductive material. For example, and by way of example only, the above processing described with respect to Figs. 1-21 is such a method where materials 21, 22, and 23 are dielectric and at least one of materials 26 and 36 is conductive. Specifically, Figs. 1-3 is the one masking step (dielectric material only is etched), and Figs. 4-9 is the other masking step (dielectric material and conductive material are etched).
In one embodiment, the other is conducted after the one.

CONCLUSION

In some embodiments, a method of forming an array of capacitors and access transistors there-above comprises forming access transistor trenches partially into insulative material. The trenches individually comprise longitudinally-spaced masked portions and longitudinally-spaced openings in the trenches longitudinally between the masked portions. The trench openings have walls therein extending longitudinally in and along the individual trench openings against laterally-opposing sides of the trenches. At least some of the insulative material that is under the trench openings is removed through bases of the trench openings between the walls and the masked portions to form individual capacitor openings in the insulative material that is lower than the walls. Individual capacitors are formed in the individual capacitor openings. A line of access transistors is formed in the individual trenches. The line of access transistors electrically couples to the individual capacitors that are along that line.

In some embodiments, a method of forming a tier of an array of memory cells within an array area, with the memory cells individually comprising a capacitor and an elevationally-extending transistor there-above, comprises using two, and only two, sacrificial masking steps within the array area of the tier in forming the transistors and the capacitors of the memory cells. In each of the two masking steps within the array area of the tier with respect to material that is elevationally inward of masking material, only dielectric material is removed.

In some embodiments, a method of forming a tier of an array of memory cells within an array area, with the memory cells individually comprising a capacitor and an elevationally-extending transistor there-above, comprises using two, and only two, sacrificial masking steps within the array area of the tier in forming the transistors and the capacitors of the memory cells.
In one of the two masking steps within the array area of the tier with respect to material that is elevationally inward of masking material, only dielectric material is removed. In the other of the two masking steps within the array area of the tier with respect to material that is elevationally inward of masking material, dielectric material and conductive material are removed.

In some embodiments, a method of forming an array of capacitors and access transistors there-above comprises forming access transistor trenches partially into insulative material. The trenches individually comprise longitudinally-spaced masked portions and longitudinally-spaced openings in the trenches longitudinally between the masked portions. After forming the trench openings, encircling walls are formed against peripheral sides of the individual trench openings. At least some of the insulative material that is under the trench openings is removed through bases of the trench openings radially inward of the encircling walls to form individual capacitor openings in the insulative material that is lower than the walls. Individual capacitors are formed in the individual capacitor openings. A line of access transistors is formed in the individual trenches. The line of access transistors electrically couples to the individual capacitors that are along that line.

In some embodiments, a method of forming an array of capacitors and access transistors there-above comprises forming access transistor trenches partially into insulative material. The trenches individually comprise longitudinally-spaced masking material and longitudinally-spaced openings in the trenches longitudinally between the masking material. After forming the trench openings, sacrificial encircling walls are formed against peripheral sides of the individual trench openings to form individual mask openings within the individual trench openings.
At least some of the insulative material that is under the mask openings is removed through bases of the mask openings radially inward of the encircling walls to form individual capacitor openings in the insulative material that is lower than the walls. Individual capacitors are formed in the individual capacitor openings. After forming the capacitors, the mask openings are plugged with sacrificial material. The sacrificial encircling walls are removed to form longitudinally-spaced sacrificial pillars comprising the sacrificial material within the trenches. A conductive material is formed in and along the trenches about the sacrificial material pillars to form an access line in the individual trenches. The sacrificial pillars are removed to form channel openings in the individual access lines in the trenches. Gate insulator and channel material is formed in the channel openings. The access line, the gate insulator, and the channel material are formed to comprise a line of access transistors in the individual trenches. The line of access transistors electrically couples to the individual capacitors that are along that line of access transistors.

In some embodiments, a method of forming an array of capacitors and access transistors there-above comprises forming access transistor trenches partially into insulative material. The trenches individually comprise longitudinally-spaced masking material and longitudinally-spaced openings in the trenches longitudinally between the masking material. After forming the trench openings, conductive encircling walls are formed against peripheral sides of the individual trench openings to form individual channel openings within the individual trench openings. At least some of the insulative material that is under the channel openings is removed through bases of the channel openings radially inward of the encircling walls to form individual capacitor openings in the insulative material that is lower than the walls.
Individual capacitors are formed in the individual capacitor openings. Gate insulator and channel material are formed in the individual channel openings. The conductive encircling walls comprise an access line in the individual trenches. The access line, the gate insulator, and the channel material are formed to comprise a line of access transistors in the individual trenches. The line of access transistors electrically couples to the individual capacitors that are along that line of access transistors.

In some embodiments, a method of forming an array of capacitors and access transistors there-above comprises forming access transistor trenches partially into insulative material. The trenches individually comprise longitudinally-spaced conductive masking material and longitudinally-spaced openings in the trenches longitudinally between the conductive masking material. After forming the trench openings, encircling walls are formed against peripheral sides of the individual trench openings to form individual mask openings within the individual trench openings. At least some of the insulative material that is under the mask openings is removed through bases of the mask openings radially inward of the encircling walls to form individual capacitor openings in the insulative material that is lower than the walls. Individual capacitors are formed in the individual capacitor openings. A line of access transistors is formed in the individual trenches. The line of access transistors electrically couples to the individual capacitors that are along that line. The conductive masking material comprises an access line of the line of access transistors in the individual trenches.

In some embodiments, a method of forming an array of capacitors and access transistors there-above comprises forming access transistor trenches partially into insulative material. A pair of access line walls is formed in individual of the trenches.
The access line walls extend longitudinally in and along the individual trenches against laterally-opposing sides of the trenches. Longitudinally-spaced masked portions are formed in the trenches and longitudinally-spaced channel openings are formed in the trenches longitudinally between the masked portions. At least some of the insulative material that is under the channel openings is removed through bases of the channel openings between the walls and the masked portions to form individual capacitor openings in the insulative material that is lower than the walls. Individual capacitors are formed in the individual capacitor openings. Gate insulator and channel material are formed in the channel openings. The pair of access line walls, the gate insulator, and the channel material are formed to comprise a line of access transistors in the individual trenches. The line of access transistors electrically couples to the individual capacitors that are along that line of access transistors.

In some embodiments, a memory cell comprises a capacitor comprising an upwardly-open container shape electrode. A hollow channel transistor is above and directly electrically coupled to the capacitor.

In some embodiments, an array of memory cells individually comprising a capacitor and a transistor, with the array comprising rows of access lines and columns of digit lines, comprises individual of the rows comprising an access line extending operatively adjacent channels of individual transistors of individual memory cells within the array and interconnecting the transistors in that row. Individual of the columns comprise a digit line above the access lines. The digit line is electrically coupled to one source/drain region of the individual transistors and interconnects the transistors in that column. Capacitors of the individual memory cells within the array individually comprise a laterally-outer electrode having an upwardly-open container shape and a laterally-inner electrode.
A capacitor insulator is between the laterally-outer electrode and the laterally-inner electrode. The laterally-inner electrode is electrically coupled to the other source/drain region of the individual transistors. The laterally-outer electrode having the upwardly-open container shape is directly against a lower conductor that comprises a shared capacitor electrode of multiple of the capacitors within the array.

In some embodiments, a 2T-1C memory cell comprises a capacitor comprising a laterally-outer electrode having an upwardly-open container shape and a laterally-inner electrode. A capacitor insulator is between the laterally-outer electrode and the laterally-inner electrode. A lower elevationally-extending transistor has an upper source/drain region thereof electrically coupled to the laterally-outer electrode having the upwardly-open container shape. An upper elevationally-extending transistor has a lower source/drain region thereof electrically coupled to the laterally-inner electrode.
PROBLEM TO BE SOLVED: To recode a floating-point multiply instruction to a floating-point multiply-add instruction.SOLUTION: In a denormal support mode, a normalization circuit of a floating-point adder is used to normalize or denormalize output of a floating-point multiplier. Each floating-point multiply instruction is speculatively converted to a multiply-add instruction, with the addend forced to zero. This preserves the value of the product, while normalizing or denormalizing the product using the floating-point adder's normalization circuit. When the operands to the multiply operation are available, they are inspected. If the operands will not generate an unnormal intermediate product or a denormal final product, the add operation is suppressed, such as by operand-forwarding. Additionally, each non-fused floating-point multiply-add instruction is replaced with a multiply-add instruction having a zero addend, and a floating-point add instruction having the addend of the original multiply-add instruction is inserted into the instruction stream.
1. A method of executing a floating point multiply instruction that handles denormalized inputs and/or denormalized products, comprising: converting the floating point multiply instruction to a floating point multiply-add instruction that operates to perform a floating point multiply process and a floating point add process; and forcing one addend of the floating point add process to zero.

2. The method of claim 1, wherein the method steps are performed only in a denormalized support mode.

3. The method of claim 1, wherein converting the floating point multiply instruction to a floating point multiply-add instruction occurs prior to an execution pipeline stage.

4. The method of claim 3, wherein converting the floating point multiply instruction to a floating point multiply-add instruction occurs in a decode pipeline stage.

5. The method of claim 1, wherein the floating point multiply-add instruction is not fused, the method further comprising forwarding the output of the floating point multiplier to a normalization circuit of the floating point adder.

6. The method of claim 5, wherein forwarding the output of the floating point multiplier to a normalization circuit of a floating point adder comprises sending the output of the floating point multiplier directly to the normalization circuit.

7. The method of claim 5, wherein forwarding the output of the floating point multiplier to a normalization circuit of a floating point adder comprises sending the output of the floating point multiplier to the normalization circuit through one or more pipeline storage elements.

8. The method of claim 1, further comprising: inspecting the multiplier and multiplicand of the floating point multiply instruction; determining, based on the inspection, that the product of the multiplication process will not be a denormalized number and that the operands of the multiplication are not denormalized; and suppressing the floating point addition process in response to such a determination.

9. The method of claim 8, wherein the multiplier and multiplicand inspection occurs in an execution pipeline stage.
10. The method of claim 8, wherein suppressing the floating point addition process includes operand forwarding of the output of the floating point multiplier so that it can be consumed by subsequent instructions.

11. The method of claim 1, wherein the floating point multiply instruction is an unfused floating point multiply-add instruction, and wherein converting the floating point multiply instruction comprises substituting a zero value for the addend of the floating point multiply-add instruction and inserting, after the floating point multiply-add instruction, a floating point add instruction having the addend of the original floating point multiply-add instruction.

12. The method of claim 11, further comprising: inspecting the multiplier and multiplicand of the floating point multiply-add instruction; determining, based on the inspection, that the product of the multiplication process will not be a denormalized number and that the inputs to the multiplication process are not denormalized numbers; and, in response to such a determination, replacing the zero addend with the addend of the original multiply-add instruction and converting the floating point add instruction to a NOP.

13. A processor, comprising: one or more instruction execution pipelines; a floating point multiplier; a floating point adder having a normalization circuit; and a pipeline controller that operates to normalize or denormalize a floating point product output by the floating point multiplier, utilizing the normalization circuit of the floating point adder.

14. The processor of claim 13, wherein the pipeline controller performs normalization or denormalization of the floating point product only in a denormalization support mode.

15. The processor of claim 13, wherein the pipeline controller normalizes or denormalizes the floating point product by directing the pipeline to convert each floating point multiply instruction into a floating point multiply-add instruction that operates to perform floating point multiply and add processes, and by forcing one addend of the floating point addition process to zero.

16. The processor of claim 15, wherein the pipeline controller directs the pipeline to convert each floating point multiply instruction to a floating point multiply-add instruction prior to an execute pipe stage.

17. The processor of claim 16, wherein the pipeline controller directs the pipeline to convert each floating point multiply instruction to a floating point multiply-add instruction in a decode pipe stage.

18. The processor of claim 13, wherein the pipeline controller further operates to predict, by inspecting a multiplier and a multiplicand before the floating point multiplier performs the floating point multiplication process, whether an unnormalized intermediate product or a denormalized final product may be generated, and otherwise operates to suppress normalization or denormalization of the floating point multiplier output.

19. The processor of claim 18, wherein the pipeline controller suppresses normalization or denormalization of the output of the floating point multiplier by operand forwarding of the product so that it can be consumed by subsequent instructions.

20. The processor of claim 13, wherein the pipeline controller normalizes or denormalizes a denormalized floating point product by directing the pipeline to convert each unfused floating point multiply-add instruction to a floating point multiply-add instruction having an addend of zero, and to insert, after the floating point multiply-add instruction, a floating point add instruction having the addend of the original floating point multiply-add instruction.

21. The processor of claim 20, wherein the pipeline controller further operates to predict, by inspecting the multiplier and multiplicand before the floating point multiplication process is performed, whether the floating point multiplier may generate an unnormalized intermediate product or a denormalized final product.

22. The processor of claim 21, wherein the pipeline controller suppresses normalization or denormalization of the floating point multiplier output by replacing the zero addend with the addend of the original multiply-add instruction and converting the floating point add instruction to a NOP.
Mode-based multiply-add processor for denormalized operands

The present disclosure relates generally to the processor field, and more particularly to a mode-based method for recoding floating-point multiply instructions to floating-point multiply-add instructions to handle denormalized operands.

Microprocessors perform numerical computations in a wide variety of applications. High execution speed, low power consumption, and small size are important goals for processor designers, especially in embedded applications such as portable electronic devices. Modern processors use pipelining, so that consecutive instructions, each having multiple execution steps, overlap during execution. In a pipelined processor, each instruction is executed in a series of execution stages such as fetch, decode, execute, and write-back, each of which may include a plurality of pipe stages. A pipe stage consists of storage elements and logic that perform all or part of an instruction execution stage. The execute stage performs the arithmetic, logical, or memory access operation specified by an instruction, and in particular can perform a variety of arithmetic operations on numeric values.

Digital processors represent numbers in either fixed-point or floating-point format. A floating point number comprises a fixed point significand (also known as a mantissa) multiplied by base 2 raised to an integer exponent. In some formats, such as the IEEE 754 standard (incorporated herein by reference), the floating point representation additionally includes a sign bit. Multiplying the mantissa by 2 raised to an integer exponent is the binary analog of radix-10 scientific notation: the value of the exponent determines the number of bit positions, and the direction, by which the binary point of the mantissa is shifted to realize the actual numerical value.
Hence, the term floating point is used.

A floating point value is considered a "normalized" number if the mantissa is in the range 1 <= mantissa < 2 and the exponent is in its defined range. The mantissa of a normalized floating point number thus has the form 1.fraction, where "fraction" is a binary value representing the fractional portion of the mantissa. The exponent value shifts the binary point to the left (for negative exponents) or right (for positive exponents). In the IEEE 754 standard, exponent values for single precision floating point numbers range from -126 to 127. When encoding numbers in IEEE 754 single precision format, a bias of 127 is added to the raw exponent so that all of the encoded exponents are non-negative.

A floating point value whose mantissa is smaller than 1, that is, 0 < mantissa < 1, with any exponent, is referred to herein as an "unnormal" number. A subset of unnormal floating point numbers of particular interest is the denormalized numbers (also known as subnormal numbers). A denormal floating point number represents a value smaller than 1.0 × 2^-126 by using a mantissa in the range 0 < mantissa < 1 together with an exponent of -126. A denormalized floating point number has from zero up to fraction-width-minus-one leading zeros in its fraction, and has a mantissa of the form 0.fraction. At the expense of precision (fewer bits accurately represent the number), a denormalized number effectively uses fractional bit positions of the mantissa to achieve a "left shift" of the binary point beyond what the minimum exponent of -126 allows.
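As an illustrative aside (not part of the disclosure), the single precision encoding just described (1 sign bit, 8-bit biased exponent, 23-bit fraction) and the normal/denormal distinction can be decoded in a few lines of Python; the helper names are invented for this sketch:

```python
import struct

def fields(x: float):
    """Return (sign, biased_exponent, fraction) of the IEEE 754
    single-precision encoding of x."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def classify(x: float) -> str:
    """Name the encoding class of a single-precision value."""
    sign, exp, frac = fields(x)
    if exp == 0xFF:
        return 'inf' if frac == 0 else 'nan'
    if exp == 0:
        # exponent field 0: value is 0.fraction * 2**-126
        return 'zero' if frac == 0 else 'denormal'
    # otherwise: value is 1.fraction * 2**(exp - 127)
    return 'normal'
```

For example, `fields(1.0)` yields a biased exponent of 127 (the raw exponent 0 plus the bias), while 2^-126 is the smallest normal and 2^-149 the smallest denormal single-precision value.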
A denormalized number represents a value very close to zero, and can be used to implement gradual underflow, which allows a computation to slowly lose precision when the result is very small.

In a floating point multiplier circuit, a denormal product can arise in several ways. Either the multiplier or the multiplicand may be a denormalized number. In this case, the mantissa of the intermediate product is generally unnormal (that is, less than 1), but depending on the values of the operands, the final rounded product may be either normalized or denormalized. When both the multiplier and the multiplicand are denormalized numbers, the final rounded product is zero or the smallest representable denormalized number. Furthermore, the product of two normalized numbers can be a denormalized number if the exponents are small, yielding a result that would require an exponent smaller than -126 (for single precision). It should be noted that even in the normal situation the intermediate value of the multiplication may assume a "non-normal" form. A normalized mantissa may assume values in the range [1, 2), that is, from exactly 1 to almost 2 (1.0000 to 1.1111 for a 5-bit mantissa). The product of two normalized mantissas may therefore assume values in the range [1, 4), that is, from exactly 1 to almost 4. The mantissa of this intermediate product may have the form 1.fraction, or the form 1x.fraction for values from 2 to almost 4 (10.0000 to 11.1111). In the latter case, as is usual for floating point multiplication, the floating point multiplier adjusts the intermediate result by shifting the binary point to the left and incrementing the exponent by one.
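The ways a denormal product can arise, described above, can be reproduced by emulating single-precision arithmetic with Python's `struct` module. This is an illustrative sketch with invented helper names, not part of the disclosed hardware:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python double to IEEE 754 single precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def is_denormal32(x: float) -> bool:
    """True if x encodes as a single-precision denormal (subnormal)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return (bits >> 23) & 0xFF == 0 and bits & 0x7FFFFF != 0

# (1) A denormal operand can still yield a normalized final product:
#     2**-130 (denormal) * 2**10 = 2**-120 (normal).
p1 = to_f32(to_f32(2.0**-130) * 1024.0)

# (2) Two normalized operands can yield a denormal final product:
#     2**-70 * 2**-70 = 2**-140, below the 2**-126 normal threshold.
p2 = to_f32(to_f32(2.0**-70) * to_f32(2.0**-70))

# (3) Two denormal operands round to zero.
p3 = to_f32(to_f32(2.0**-140) * to_f32(2.0**-140))
```

The products are computed in double precision, which holds any single-precision product exactly, and then rounded once to single precision.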
Such "non-normal" intermediate results are not considered to be denormalized numbers herein and are not explicitly addressed by this disclosure.

In general processor applications, such as some embedded processors, denormalized numbers need not necessarily be supported. For example, a denormalized value may simply be represented as zero without a significant loss of accuracy. However, the Java programming language requires support for denormalized numbers. Thus, a processor supporting the direct execution of Java code must accommodate denormalized floating point numbers, at least during a Java execution mode.

Denormalized floating point numbers can be supported in software by generating an exception upon detection of a denormalized number and processing the denormalized number in a software routine. This process is slow and incurs a significant amount of overhead; it reduces system performance and increases power consumption. Denormalized numbers can instead be supported in hardware by adding denormal detection and normalization circuitry to each floating point computation element. For example, a denormalized number can be "normalized" by shifting the mantissa to a normalized position (i.e., 1.fraction) and allowing (non-standard) exponent values less than -126 (for the single precision case). Similarly, a result can be "denormalized" by shifting the mantissa to a denormalized position (i.e., 0.fraction) so that the exponent is -126 (for the single precision case). However, such additional circuitry increases silicon area, increases latency, introduces delay in throughput, and potentially increases the minimum cycle time, thereby reducing the maximum operating frequency. In addition, denormalized numbers are rarely encountered, and optimizing performance for the rare case at the expense of the normal case reduces overall processor performance.

The floating point adder includes circuitry for aligning the addends, normalizing the sum, and rounding.
According to one or more embodiments, in the denormal support mode, the normalization circuit of the floating point adder is used to normalize or denormalize the result from the floating point multiplier. Each multiply instruction is speculatively replaced with a multiply-add (also known as multiply-accumulate) instruction with an addend forced to zero. This preserves the value of the multiplier output while directing it through the adder's normalization circuit, which normalizes or denormalizes the product. If it is determined that the intermediate product will not be unnormal and the final product will not be a denormalized number, the addition portion of the processing is suppressed, for example, by operand forwarding. In many cases, this determination can be made early in the execution of the multiplication by processing the exponents of the multiply instruction operands.

One embodiment relates to a method of executing a floating point multiply instruction that handles an unnormal intermediate mantissa or a denormalized final product. The floating point multiply instruction is converted into a floating point multiply-add instruction that operates to perform a floating point multiply process and a floating point add process. One addend of the floating point addition process is forced to zero.

Another embodiment relates to a processor. The processor includes one or more instruction execution pipelines, including a floating point multiplier and a floating point adder having a normalization circuit. The processor additionally includes a pipeline controller that operates to normalize or denormalize, using the normalization circuit of the floating point adder, an unnormal intermediate mantissa or denormalized floating point product output by the floating point multiplier.

Detailed description

FIG. 1 shows a functional block diagram of the processor 10.
The processor 10 executes instructions in the instruction execution pipeline 12 according to the control logic 14. The control logic 14 includes one or more registers, such as a status register 15 that defines various processing modes. Pipeline 12 may be a superscalar design with multiple parallel pipelines, such as 12a and 12b. Each of the pipelines 12a and 12b includes various registers and latches 16 and one or more Arithmetic Logic Units (ALUs) 18 configured in pipe stages. The pipe stage registers or latches 16 and the ALUs 18 can read operands from registers in the general purpose register (GPR) file 28 and/or write results to those registers.

The pipelines 12a and 12b fetch instructions from an instruction cache (I-cache or I$) 20, with memory addressing and permissions managed by an Instruction-side Translation Lookaside Buffer (ITLB) 22. Data is accessed from a data cache (D-cache or D$) 24, with memory addressing and permissions managed by a main Translation Lookaside Buffer (TLB) 26. In various embodiments, the ITLB 22 may comprise a copy of a portion of the TLB 26. Alternatively, the ITLB 22 and TLB 26 can be integrated. Similarly, in various embodiments of the processor 10, the I-cache 20 and D-cache 24 may be integrated. Misses in the I-cache 20 and/or the D-cache 24 cause access to main (off-chip) memory 36 under the control of the memory interface 34.

The processor 10 may include an input/output (I/O) interface 38 that controls access to various peripheral devices 40, 42. Those skilled in the art will recognize that many variations of the processor 10 are possible. For example, the processor 10 may include a second level (L2) cache for either the I-cache, the D-cache, or both.
Further, one or more of the functional blocks depicted in the processor 10 may be omitted from certain embodiments.

In one or more embodiments, the processor 10 operates in a denormalized support mode indicated by, for example, a denormalized support bit in the status register 15. In particular, the denormalized support mode may be entered whenever the processor 10 directly executes Java code, and may also be entered in other cases in which the programmer chooses to support denormalized floating point numbers.

In the denormalized support mode, the processor 10 speculatively converts each floating point multiply instruction to a multiply-add (or multiply-accumulate) instruction with the addend forced to zero. The multiply-add process may be of the fused or unfused type. In a fused multiply-add process, the full width of the intermediate product (double the input width) is sent to the adder without intermediate rounding. In an unfused multiply-add process, the intermediate product of the multiplication is rounded (often to the input precision) before the addition is performed.

In some embodiments, each floating point multiply instruction is speculatively replaced with a fused multiply-add instruction with an addend forced to zero. This instruction stream change is generally performed early in the associated pipeline 12a, 12b, such as in the decode stage, or in any case before the execute stage. In the execution stage of the fused multiply-add, the output of the floating point multiplier is directed to the input of the floating point adder, as shown in FIG. 2. Floating point adders that support fused multiply-add instructions have sufficient input width to receive the intermediate product from the floating point multiplier.

FIG.
2 is a functional block diagram illustrating how the output of the floating point multiplier 50 is routed to the input of the floating point adder 52. The floating point adder 52 includes an alignment circuit 54 for aligning floating point addends, an adder circuit 56 for computing a floating point sum, a normalization circuit 58 for normalizing (or denormalizing) the sum, and a rounding circuit 60 for rounding the shifted sum. The multiplier (MR) and multiplicand (MD) inputs to the multiplier 50 and the addend input to the floating point adder 52 can be register values as stored in the GPR file 28. The addend 37 is multiplexed into the floating point adder 52 in order to use the floating point adder 52 in the denormalized support mode. To preserve the output value of the floating point multiplier 50 while the number is normalized or denormalized, the addend of the floating point addition is forced to zero. This can be realized in many ways. For example, as shown in FIG. 2, a zero value may be multiplexed into the alignment circuit 54. Alternatively, the zero value may be stored in the GPR register 29 for retrieval by the floating point multiply-add instruction as part of normalization. As a further example, the output of the GPR register 29 may be gated with control logic that includes the denormalized support mode bit, gating off the register value and presenting zero in the denormalized support mode. In any case, the zero value is applied as one addend to the floating point adder 52, and the double width output of the floating point multiplier 50 is applied as the other addend. The addition of zero in the adder circuit 56 does not change the numerical value. The value is then normalized/denormalized by the normalization circuit 58 and rounded by the rounding circuit 60.
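The speculative decode-stage conversion of a floating point multiply into a zero-addend multiply-add, described above, can be sketched in software. This is a minimal illustration, not the patented hardware; the `Instr` record, opcode names, and the `ZERO_REG` placeholder are all assumptions for the sake of the example.

```python
from collections import namedtuple

# Hypothetical instruction record; field names are illustrative, not from the patent.
Instr = namedtuple("Instr", ["op", "srcs", "dst"])

ZERO_REG = "zero"  # stands in for the forced-zero addend (e.g. GPR register 29)

def rewrite_multiply(instr, denorm_support_mode):
    """Decode-stage rewrite: in denormalized support mode, speculatively
    convert a floating point multiply into a multiply-add with a zero addend,
    so the adder's normalization circuit can normalize or denormalize the
    product without changing its value."""
    if denorm_support_mode and instr.op == "FMUL":
        return Instr("FMADD", (instr.srcs[0], instr.srcs[1], ZERO_REG), instr.dst)
    return instr
```

For example, `rewrite_multiply(Instr("FMUL", ("f1", "f2"), "f3"), True)` yields an `FMADD` whose third source is the forced-zero addend; outside the denormalized support mode the instruction passes through unchanged.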
In this scheme, the processor 10 utilizes existing hardware in the floating point adder 52 to convert the unnormalized output of the floating point multiplier 50 into the denormalized final result. In other embodiments where the instruction set architecture only supports unfused multiply-add operations, each floating point multiply instruction is speculatively replaced with an unfused multiply-add instruction. In this case, the full width intermediate product should be sent to the adder normalization logic 58 without rounding. This can be realized in various ways. For example, as shown by multiplexer 57, the product may bypass the adder circuit 56 and be sent directly to the normalization logic 58. Although not shown in the figure, the floating point adder 52 can be realized as a pipelined unit having intermediate registers. In such cases, the data sent to the normalization logic can be pipelined for consistency. Alternatively, the floating point adder input logic may be modified to receive the full width intermediate product. In either case, the adder circuit 56 and the normalizer 58 are already sufficiently wide for the data. Similarly, in the case of an unfused multiply-add operation, a leading zero count should be performed on the upper half of the intermediate product. This count should be sent to the normalization logic for control, and to exponent logic for exponent generation (not shown). Generally, the multiplier (MR) and multiplicand (MD) values are known only deep in the pipeline, such as in the execute stage. As soon as the values of MR and MD are known, they may be examined to determine whether both are normalized values, guaranteeing a normalized mantissa from the floating point multiplier. In parallel, processing may be performed on the exponent values to determine whether the final result will be a normalized number.
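The operand and exponent checks just described can be sketched in software for IEEE-754 doubles. This is a conservative illustration, not the patent's exponent logic: the threshold in `product_may_be_denormal` is an assumption chosen so that the screen never reports "guaranteed normal" for a product that could round to a denormal.

```python
import struct

def fields(x):
    """Raw biased exponent and fraction fields of an IEEE-754 double."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    return (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

def is_normal(x):
    """Normal value (not zero, subnormal, inf, or NaN): biased exponent in 1..2046."""
    e, _ = fields(x)
    return 0 < e < 0x7FF

def product_may_be_denormal(a, b):
    """Conservative early screen on the exponents: the product of two normal
    doubles can only round to a denormal (or underflow) when the sum of the
    unbiased exponents falls near or below the minimum normal exponent -1022."""
    ea = fields(a)[0] - 1023
    eb = fields(b)[0] - 1023
    return ea + eb < -1021
```

When both operands pass `is_normal` and `product_may_be_denormal` returns False, the multiplier's output is guaranteed normalized and the "add to zero" pass through the adder could be skipped, mirroring the operand-forwarding shortcut discussed below.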
If the output of the floating point multiplier 50 is in a normalized format and the final result is a normalized floating point number, the addition may be suppressed. In this case, the output of the floating point multiplier 50 may bypass the floating point adder 52 by operand forwarding, as shown in FIG. 2. This allows subsequent instructions that depend on the result to consume this data without waiting for it to pass through the adder. Since results are rarely denormalized, and it is difficult to determine early whether a result will be denormalized, in these cases the addition is performed to denormalize the final product only if it is denormal. The floating point multiplication of a multiply-add instruction may similarly generate a denormalized number or an unnormalized number as an intermediate product. In the denormalized support mode, an unfused multiply-add instruction is modified to add a zero value to the product of the multiplication. An add instruction with the addend of the original multiply-add instruction is then inserted into the instruction stream after the multiply-add instruction. That is, the full width product of the floating point multiplication is added to zero before the addition with the original addend is performed. As discussed above for multiply instructions that are converted to multiply-add instructions, the floating point adder should either be modified to receive a wider intermediate product, or this product should be routed directly to the normalization circuit. Similarly, a leading zero count should be maintained for the significant bits of the product and used to control the normalization circuit. In this way, the addition of the multiply-add instruction is used to perform any normalization or denormalization of the product without changing its value, before the addition with the original addend is executed through the separately inserted floating point add instruction.
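The unfused multiply-add rewrite described above — substituting a zero addend and inserting a separate add with the original addend, with the option of reverting when the product is guaranteed normal — can be sketched as an instruction-stream transformation. The `Instr` record and opcode names are hypothetical illustrations, not the patent's encoding.

```python
from collections import namedtuple

# Illustrative instruction record; names are assumptions, not from the patent.
Instr = namedtuple("Instr", ["op", "srcs", "dst"])
ZERO_REG = "zero"
NOP = Instr("NOP", (), None)

def expand_madd(instr):
    """Split an unfused FMADD dst = a*b + c into a zero-addend FMADD (which
    lets the adder normalize/denormalize the product) followed by an inserted
    FADD with the original addend."""
    a, b, c = instr.srcs
    return [Instr("FMADD", (a, b, ZERO_REG), instr.dst),
            Instr("FADD", (instr.dst, c), instr.dst)]

def undo_expand(expanded, original_addend):
    """If the operands turn out to guarantee normalized results, restore the
    original addend and convert the inserted add into a NOP, which later
    pipeline optimization can remove."""
    madd, _ = expanded
    a, b, _zero = madd.srcs
    return [Instr("FMADD", (a, b, original_addend), madd.dst), NOP]
```

The key invariant is that adding zero never changes the product's value, so the two-instruction sequence computes the same result as the original multiply-add whenever normalization is actually needed.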
Logic that implements fused multiply-add instructions does not need to insert a subsequent add instruction, and can handle an unnormalized or denormalized intermediate product directly. As shown in FIG. 2, the output of the floating point multiplier 50 is routed to one input of the floating point adder 52, and zero is forced at the other input. As described above, the adder circuit 56 does not change the value of the intermediate product, which is normalized/denormalized by the normalization circuit 58 and rounded by the rounding circuit 60. The normalized (or denormalized) number is then sent to one input of the floating point adder 52, as shown by path 64. The addend of the original multiply-add instruction, stored in the GPR register 29, is routed to the other input of the floating point adder 52, and a floating point add instruction is executed. In this way, the output of the floating point multiplier 50 is normalized/denormalized using the circuitry of the floating point adder 52 before the addition of the original unfused multiply-add instruction is performed. Here, the insertion of the add instruction into the instruction stream, and the modification of the multiply-add instruction to substitute the zero addend, are generally performed early in the pipeline, such as in the decode stage. Once the multiplier (MR) and multiplicand (MD) values are known, as in the execute stage, they can be examined. The exponents may then be processed to determine whether the multiplication can possibly produce an unnormalized intermediate output, or whether the final result may be denormalized. If not, the modification of the multiply-add instruction can be reversed or "undone" by substituting the original addend for the zero addend. Furthermore, the inserted floating point add instruction may be converted to a NOP (no operation), which can be removed by conventional pipeline optimization. FIG.
3 shows the operation of the processor 10 when processing a floating point multiply instruction in the denormalized support mode. The instruction is fetched (such as from the instruction cache 20) and decoded (block 70). If the processor is not operating in the denormalized support mode (block 72), the instruction is processed by conventional pipeline processing (block 86). If the processor is in the denormalized support mode (block 72), the decoded instruction is examined to determine whether it is a floating point multiply instruction (block 74). If not, the instruction is executed as usual (block 86). If the instruction is a floating point multiply instruction, the processor 10 substitutes a floating point multiply-add instruction with an addend of zero for the floating point multiply instruction (block 76). When the operands of the floating point multiplication become available, they are checked to determine whether the floating point multiplication is guaranteed to produce a normalized output. If the floating point multiplication may produce an unnormalized intermediate output, or if the final result may be denormal (block 80), the substituted multiply-add instruction is processed by conventional pipeline processing (block 86), and the intermediate product is normalized or denormalized using the normalization circuit 58 of the floating point adder 52, as described above. In the case of a fused multiply-add instruction, no further control is required. In the case of an unfused multiply-add instruction, the adder circuit 56 is wide enough to handle the intermediate product, but the floating point adder 52 needs to be modified to properly route the bits to the adder.
Alternatively, the intermediate product is sent directly to the normalization circuit 58 of the floating point adder 52, potentially through intervening state elements, as shown in FIG. 2. If it is determined that the floating point multiplication produces normalized intermediate and final results (block 80), the "add to zero" operation may be suppressed, for example, by operand forwarding (block 82). This avoids the performance penalty of performing the "add to zero" operation; in that case, normalization/denormalization of the output of the floating point multiplier 50 is not necessary. FIG. 4 shows the operation of the processor 10 when processing a floating point multiply-add instruction in the denormalized support mode. The instruction is fetched (such as from the instruction cache 20) and decoded (block 90). If the processor is not operating in the denormalized support mode (block 92), the instruction is processed by conventional pipeline processing (block 106). If the processor is in the denormalized support mode (block 92), the decoded instruction is examined to determine whether it is an unfused floating point multiply-add instruction (block 94). If the instruction is not a floating point multiply-add instruction, or is a fused multiply-add instruction, the instruction is executed in the conventional manner (block 106). If the instruction is an unfused floating point multiply-add instruction (block 94), the processor normalizes/denormalizes the intermediate product of the floating point multiplication before performing the floating point addition. Initially, the value zero is substituted for the addend of the floating point multiply-add instruction (block 96).
The floating point add instruction with the original addend is then inserted into the instruction stream following the modified floating point multiply-add instruction (block 98). When the floating point multiplication operands become available, as in the execute stage, they are checked to determine whether the floating point multiplication is guaranteed to produce normalized intermediate and final results. If the floating point multiplication may produce an unnormalized intermediate or a denormalized final result (block 100), the modified multiply-add instruction and the inserted add instruction are processed by conventional pipeline processing (block 106), and the product is normalized/denormalized using the normalization circuit 58 of the floating point adder 52 before the floating point addition is performed, as described above. If the floating point multiplication is determined to produce normalized intermediate and final products (block 100), normalization of the product is suppressed. The original addend is restored in place of zero in the floating point multiply-add instruction (block 102). The additional floating point addition is then suppressed by converting the inserted floating point add instruction to a NOP (block 104). Instruction processing then continues with conventional pipeline processing (block 106). According to one or more embodiments, the unnormalized intermediate output and the denormalized final result of a floating point multiplication are normalized/denormalized using the normalization circuit 58 of the floating point adder 52.
This eliminates the need to add a normalization circuit to the output of the floating point multiplier 50, which would add latency and delay, enlarge the silicon area, and increase power consumption. Although the invention has been described herein with reference to particular features, aspects, and embodiments, many variations, modifications, and other embodiments are possible within the broad scope of the invention, and all such variations, modifications, and embodiments are to be regarded as within the scope of the disclosure. Accordingly, the present embodiments are to be construed in all aspects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a functional block diagram of a processor.
FIG. 2 is a functional block diagram of a floating point adder fed by a floating point multiplier.
FIG. 3 is a flowchart of a method for executing a floating point multiply instruction.
FIG. 4 is a flowchart of a method for executing a floating point multiply-add instruction.
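The decision flows of FIG. 3 and FIG. 4 described above can be condensed into a small dispatch sketch. The opcode names, the `Instr` record, and the returned action tags are all illustrative assumptions, not the patent's encodings.

```python
from collections import namedtuple

# Illustrative instruction record; names are assumptions, not from the patent.
Instr = namedtuple("Instr", ["op", "srcs", "dst"])

def process_fp_instr(instr, denorm_mode, guaranteed_normal):
    """Sketch of the FIG. 3 / FIG. 4 flows. `guaranteed_normal(a, b)` stands
    in for the late operand/exponent check that the multiplication produces
    normalized intermediate and final results."""
    if not denorm_mode:
        return ("conventional", instr)              # blocks 86 / 106
    if instr.op == "FMUL":                          # FIG. 3 flow
        a, b = instr.srcs
        if guaranteed_normal(a, b):
            return ("forward-product", instr)       # suppress add-to-zero (block 82)
        return ("madd-with-zero-addend", instr)     # normalize in the adder
    if instr.op == "FMADD_UNFUSED":                 # FIG. 4 flow
        a, b, _ = instr.srcs
        if guaranteed_normal(a, b):
            return ("conventional", instr)          # restore addend, add -> NOP
        return ("split-madd-plus-add", instr)       # zero addend + inserted FADD
    return ("conventional", instr)
```

The sketch makes explicit that both flows fall back to conventional processing whenever the operand check proves the product cannot be denormal, which is the common case.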
A method and system for enhanced security and manageability using secure storage. The system may include a crypto-processor and a memory coupled to receive memory transactions through the crypto-processor. The memory transactions are passed to the memory by the crypto-processor. The system may include a first processor, a second processor coupled to the first processor, and a storage device operably coupled to the first processor through the second processor. The second processor is configured to control access to the storage device. The method includes transmitting a request for a memory transaction for a storage location in the storage device and receiving the request for the memory transaction at the crypto-processor. The method also includes determining if the memory transaction is authorized for the storage location, and passing the request for the memory transaction to the storage device if the memory transaction is authorized for the storage location.
What is claimed is:

1. A system, comprising:
a crypto-processor;
a memory coupled to receive memory transactions through the crypto-processor, wherein the memory transactions are passed to the memory by the crypto-processor;
a device different from the crypto-processor, wherein the device is configured to request the memory transactions passed to the memory by the crypto-processor;
wherein the crypto-processor includes a secret; and wherein the crypto-processor is configured to demand an authorization before passing memory access to the memory, wherein the authorization comprises an indication from the device that knows the secret, wherein the indication of the secret comprises a correct response to a challenge-response authentication.

2. The system of claim 1, wherein the crypto-processor includes a memory permission table that maps at least a portion of the memory; and wherein the crypto-processor is configured to pass the memory transactions to the memory if the memory access is indicated as allowed by the memory permission table.

3. The system of claim 2, wherein the crypto-processor is configured to pass the memory transactions to the memory only if the memory access is indicated as allowed by the memory permission table.

4. The system of claim 1, further comprising:
a bridge;
a first bus coupled between the device and the bridge; and
a second bus coupled between the bridge and the crypto-processor.

5. The system of claim 1, wherein the memory comprises a ROM.

6. The system of claim 5, wherein the ROM comprises a BIOS ROM.

7. The system of claim 1, wherein the memory comprises a flash memory.

8. The system of claim 1, wherein the crypto-processor and the memory are integrated into a protected storage device, the protected storage device comprising:
one or more storage areas;
logic for controlling access to the one or more storage areas;
a random number generator; and
a secret.

9.
The system of claim 8, wherein the one or more storage areas comprise a data storage and a code storage.

10. The system of claim 9, wherein the secret is comprised within the code storage.

11. The system of claim 1, wherein the memory comprises a protected storage, the protected storage comprising:
one or more storage areas;
logic for controlling access to the one or more storage areas; and
a secret.

12. The system of claim 11, wherein the one or more storage areas comprise a data storage and a code storage.

13. The system of claim 12, wherein the secret is comprised within the code storage.

14. The system of claim 1, wherein the memory further includes a secret.

15. A method of operating a computer system, the computer system including a crypto-processor, a device different from the crypto-processor, and a storage device, the method comprising:
transmitting a request for a memory transaction for a storage location in the storage device, wherein transmitting the request for the memory transaction for the storage location in the storage device comprises the device initiating the request for the memory transaction for the storage location in the storage device;
receiving the request for the memory transaction at the crypto-processor;
determining if the memory transaction is authorized for the storage location;
passing the request for the memory transaction to the storage device if the memory transaction is authorized for the storage location, wherein passing the request for the memory transaction to the storage device if the memory transaction is authorized for the storage location comprises passing the request for the memory transaction to the storage device only if the memory transaction is authorized for the storage location;
wherein the crypto-processor includes a secret; and wherein determining if the memory transaction is authorized for the storage location comprises demanding an authorization from the device initiating the request, wherein the authorization comprises an
indication from the device that knows the secret;wherein the indication of the secret comprises a correct response to a challenge-response authentication; and wherein demanding an authorization from the device initiating the request comprises providing a challenge to the device, and the device providing the correct response to the challenge.16. The method of claim 15, wherein the crypto-processor includes a memory permission table that maps at least a portion of the storage locations in the storage device; and wherein determining if the memory transaction is authorized for the storage location comprises determining if the memory permission table includes an indication that the memory transaction at the storage location is allowed.17. The method of claim 15, wherein the computer system further comprises a bridge, a first bus coupled between the device and the bridge, and a second bus coupled between the bridge and the crypto-processor, wherein transmitting the request for the memory transaction for the storage location in the storage device further comprises:transmitting the request for the memory transaction for the storage location in the storage device over the first bus;receiving the request for the memory transaction for the storage location in the storage device from the first bus; andtransmitting the request for the memory transaction for the storage location in the storage device over the second bus.18. 
The method of claim 15, wherein the storage device comprises a memory; wherein transmitting a request for a memory transaction for a storage location in the storage device comprises transmitting the request for the memory transaction for a memory location in the memory; wherein determining if the memory transaction is authorized for the storage location comprises determining if the memory transaction is authorized for the memory location; and wherein passing the request for the memory transaction to the storage device only if the memory transaction is authorized for the storage location comprises passing the request for the memory transaction to the memory only if the memory transaction is authorized for the memory location.19. The method of claim 18, wherein the memory comprises a ROM; wherein transmitting the request for the memory transaction for a memory location in the memory comprises transmitting the request for the memory transaction for a memory location in the ROM; and wherein passing the request for the memory transaction to the memory only if the memory transaction is authorized for the memory location comprises passing the request for the memory transaction to the ROM only if the memory transaction is authorized for the memory location.20. The method of claim 19, wherein the memory comprises a flash memory; wherein transmitting the request for the memory transaction for a memory location in the memory comprises transmitting the request for the memory transaction for a memory location in the flash memory; and wherein passing the request for the memory transaction to the memory only if the memory transaction is authorized for the memory location comprises passing the request for the memory transaction to the flash memory only if the memory transaction is authorized for the memory location.21. 
The method of claim 15, wherein the storage device comprises a protected storage, comprising one or more storage areas, logic for controlling access to the one or more storage areas, and a secret, wherein the one or more storage areas includes the storage location; wherein transmitting the request for the memory transaction for the storage location in the storage device comprises transmitting the request for the memory transaction for the storage location in the protected storage; and wherein passing the request for the memory transaction to the storage device only if the memory transaction is authorized for the storage location comprises passing the request for the memory transaction to the protected storage only if the memory transaction is authorized for the storage location; the method further comprising:
receiving the request for the memory transaction at the logic;
verifying the authorization using the logic and the secret; and
passing the request for the memory transaction to an appropriate one of the one or more storage areas.

22.
A computer readable program storage device encoded with instructions that, when executed by a computer system including a crypto-processor, a device different from the crypto-processor, and a storage device, performs a method of operating the computer system, the method comprising:
transmitting a request for a memory transaction for a storage location in the storage device, wherein transmitting the request for the memory transaction for the storage location in the storage device comprises the device initiating the request for the memory transaction for the storage location in the storage device;
receiving the request for the memory transaction at the crypto-processor;
determining if the memory transaction is authorized for the storage location;
passing the request for the memory transaction to the storage device if the memory transaction is authorized for the storage location, wherein passing the request for the memory transaction to the storage device if the memory transaction is authorized for the storage location comprises passing the request for the memory transaction to the storage device only if the memory transaction is authorized for the storage location;
wherein the crypto-processor includes a secret; and wherein determining if the memory transaction is authorized for the storage location comprises demanding an authorization from the device initiating the request, wherein the authorization comprises an indication from the device that knows the secret;
wherein the indication of the secret comprises a correct response to a challenge-response authentication; and wherein demanding an authorization from the device initiating the request comprises providing a challenge to the device, and the device providing the correct response to the challenge.

23.
The computer readable program storage device of claim 22, wherein the crypto-processor includes a memory permission table that maps at least a portion of the storage locations in the storage device; and wherein determining if the memory transaction is authorized for the storage location comprises determining if the memory permission table includes an indication that the memory transaction at the storage location is allowed.24. The computer readable program storage device of claim 22, wherein the computer system further comprises a bridge, a first bus coupled between the device and the bridge, and a second bus coupled between the bridge and the crypto-processor, wherein transmitting the request for the memory transaction for the storage location in the storage device further comprises:transmitting the request for the memory transaction for the storage location in the storage device over the first bus;receiving the request for the memory transaction for the storage location in the storage device from the first bus; andtransmitting the request for the memory transaction for the storage location in the storage device over the second bus.25. The computer readable program storage device of claim 22, wherein the storage device comprises a memory; wherein transmitting a request for a memory transaction for a storage location in the storage device comprises transmitting the request for the memory transaction for a memory location in the memory; wherein determining if the memory transaction is authorized for the storage location comprises determining if the memory transaction is authorized for the memory location; and wherein passing the request for the memory transaction to the storage device only if the memory transaction is authorized for the storage location comprises passing the request for the memory transaction to the memory only if the memory transaction is authorized for the memory location.26. 
The computer readable program storage device of claim 25, wherein the memory comprises a ROM; wherein transmitting the request for the memory transaction for a memory location in the memory comprises transmitting the request for the memory transaction for a memory location in the ROM; and wherein passing the request for the memory transaction to the memory only if the memory transaction is authorized for the memory location comprises passing the request for the memory transaction to the ROM only if the memory transaction is authorized for the memory location.27. The computer readable program storage device of claim 25, wherein the memory comprises a flash memory; wherein transmitting the request for the memory transaction for a memory location in the memory comprises transmitting the request for the memory transaction for a memory location in the flash memory; and wherein passing the request for the memory transaction to the memory only if the memory transaction is authorized for the memory location comprises passing the request for the memory transaction to the flash memory only if the memory transaction is authorized for the memory location.28. 
The computer readable program storage device of claim 22, wherein the storage device comprises a protected storage, comprising one or more storage areas, logic for controlling access to the one or more storage areas, and a secret, wherein the one or more storage areas includes the storage location; wherein transmitting the request for the memory transaction for the storage location in the storage device comprises transmitting the request for the memory transaction for the storage location in the protected storage; and wherein passing the request for the memory transaction to the storage device only if the memory transaction is authorized for the storage location comprises passing the request for the memory transaction to the protected storage only if the memory transaction is authorized for the storage location; the method further comprising:
receiving the request for the memory transaction at the logic;
verifying the authorization using the logic and the secret; and
passing the request for the memory transaction to an appropriate one of the one or more storage areas.
This application is a continuation-in-part of co-pending U.S. patent application Ser. No. 09/852,372, entitled, "Secure Execution Box and Method," filed on May 10, 2001, whose inventors are Dale E. Gulick and Geoffrey S. Strongin. This application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/852,942 entitled, "Computer System Architecture for Enhanced Security and Manageability," filed on May 10, 2001, whose inventors are Geoffrey S. Strongin and Dale E. Gulick.

BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates generally to computing systems, and, more particularly, to a method and system for enhanced security and manageability for PC BIOS ROM and other secure storage.

2. Description of the Related Art

FIG. 1A illustrates an exemplary computer system 100. The computer system 100 includes a processor 102, a north bridge 104, memory 106, Advanced Graphics Port (AGP) memory 108, a Peripheral Component Interconnect (PCI) bus 110, a south bridge 112, a battery 113, an AT Attachment (ATA) interface 114 (more commonly known as an Integrated Drive Electronics (IDE) interface), a universal serial bus (USB) interface 116, a Low Pin Count (LPC) bus 118, an input/output controller chip (SuperI/O(TM)) 120, and BIOS memory 122. It is noted that the north bridge 104 and the south bridge 112 may include only a single chip or a plurality of chips, leading to the collective term "chipset." It is also noted that other buses, devices, and/or subsystems may be included in the computer system 100 as desired, e.g. caches, modems, parallel or serial interfaces, SCSI interfaces, network interface cards, etc. ["SuperI/O" is a trademark of National Semiconductor Corporation of Santa Clara, Calif.]

The processor 102 is coupled to the north bridge 104. The north bridge 104 provides an interface between the processor 102, the memory 106, the AGP memory 108, and the PCI bus 110.
The south bridge 112 provides an interface between the PCI bus 110 and the peripherals, devices, and subsystems coupled to the IDE interface 114, the USB interface 116, and the LPC bus 118. The battery 113 is shown coupled to the south bridge 112. The SuperI/O(TM) chip 120 is coupled to the LPC bus 118.

The north bridge 104 provides communications access between and/or among the processor 102, the memory 106, the AGP memory 108, devices coupled to the PCI bus 110, and devices and subsystems coupled to the south bridge 112. Typically, removable peripheral devices are inserted into PCI "slots" (not shown) that connect to the PCI bus 110 to couple to the computer system 100. Alternatively, devices located on the motherboard may be directly connected to the PCI bus 110.

The south bridge 112 provides an interface between the PCI bus 110 and various devices and subsystems, such as a modem, a printer, a keyboard, a mouse, etc., which are generally coupled to the computer system 100 through the LPC bus 118 (or its predecessors, such as an X-bus or an ISA bus). The south bridge 112 includes the logic used to interface these devices to the rest of the computer system 100 through the IDE interface 114, the USB interface 116, and the LPC bus 118.

FIG. 1B illustrates certain aspects of the prior art south bridge 112, including those components that are provided reserve power by the battery 113, i.e., those "inside the RTC battery well" 125. The south bridge 112 includes south bridge (SB) RAM 126 and a clock circuit 128, both inside the RTC battery well 125. The SB RAM 126 includes CMOS RAM 126A and RTC RAM 126B. The RTC RAM 126B includes clock data 129 and checksum data 127.
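As one hedged illustration, checksum data such as the checksum data 127 can be modeled as a simple additive checksum over the CMOS RAM contents; the actual algorithm, width, and byte range used by any particular BIOS are implementation-specific and are not specified here.

```python
def cmos_checksum(cmos_bytes):
    """Illustrative 16-bit additive checksum over CMOS RAM contents; real
    BIOSes use implementation-specific ranges and algorithms."""
    return sum(cmos_bytes) & 0xFFFF

def cmos_intact(cmos_bytes, stored_checksum):
    """Boot-time integrity check: recompute the checksum and compare it
    against the value previously stored by BIOS."""
    return cmos_checksum(cmos_bytes) == stored_checksum
```

A mismatch between the recomputed and stored values indicates that the battery-backed CMOS contents were lost or altered since BIOS last wrote them.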
The south bridge 112 also includes, outside the RTC battery well 125, a CPU interface 132, power and system management units 133, PCI bus interface logic 134A, USB interface logic 134C, IDE interface logic 134B, and LPC bus interface logic 134D.Time and date data from the clock circuit 128 are stored as the clock data 129 in the RTC RAM 126B. The checksum data 127 in the RTC RAM 126B may be calculated based on the CMOS RAM 126A data and stored by BIOS during the boot process, such as is described below, e.g. block 148, with respect to FIG. 2A. The CPU interface 132 may include interrupt signal controllers and processor signal controllers. The power and system management units 133 may include an ACPI (Advanced Configuration and Power Interface) controller.From a hardware point of view, an x86 operating environment provides little for protecting user privacy, providing security for corporate secrets and assets, or protecting the ownership rights of content providers. All of these goals, privacy, security, and ownership (collectively, PSO) are becoming critical in an age of Internet-connected computers. The original personal computers were not designed in anticipation of PSO needs.From a software point of view, the x86 operating environment is equally poor for PSO. The ease of direct access to the hardware through software or simply by opening the cover of the personal computer allows an intruder or thief to compromise most security software and devices. The personal computer's exemplary ease of use only adds to the problems for PSO.SUMMARY OF THE INVENTIONIn one aspect of the present invention, a system is disclosed. The system includes a crypto-processor and a memory coupled to receive memory transactions through the crypto-processor. The memory transactions are passed to the memory by the crypto-processor. The crypto-processor may include a memory permission table that maps at least a portion of the memory. 
The crypto-processor may thus be configured to pass the memory transactions to the memory if the memory access is indicated as allowed by the memory permission table.In another aspect of the present invention, another system is disclosed. This system includes a first processor, a second processor coupled to the first processor, and a storage device operably coupled to the first processor through the second processor. The second processor is configured to control access to the storage device.In yet another aspect of the present invention, a method of operating a computer system including a crypto-processor and a storage device is disclosed. The method includes transmitting a request for a memory transaction for a storage location in the storage device and receiving the request for the memory transaction at the crypto-processor. The method also includes determining if the memory transaction is authorized for the storage location, and passing the request for the memory transaction to the storage device if the memory transaction is authorized for the storage location.The crypto-processor may include a memory permission table that maps at least a portion of the storage locations in the storage device. The method may then also include determining if the memory permission table includes an indication that the memory transaction at the storage location is allowed. The memory may include memory locations with a non-standard mapping. The method may then also include translating the request for the memory transaction from a standard mapping to the non-standard mapping used by the memory.In still another aspect of the present invention, another method for operating a computer system is disclosed. The computer system includes a requesting device, a storage device, and a security device, with the requesting device operably coupled to the storage device through the security device. 
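The permission-table check described above (pass a memory transaction through to the storage device only if the memory permission table indicates the access is allowed) can be sketched as a simulation. The structure layout, field names, and default-deny policy below are illustrative assumptions, not the disclosed register format.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative entry in the crypto-processor's memory permission table:
 * each entry maps an address range to read/write permissions. */
typedef struct {
    uint32_t base;
    uint32_t limit;   /* inclusive upper bound of the mapped range */
    int      read_ok;
    int      write_ok;
} perm_entry;

/* Return 1 if the memory transaction may be passed through to the
 * storage device, 0 if it must be blocked. */
int transaction_allowed(const perm_entry *table, size_t n,
                        uint32_t addr, int is_write)
{
    for (size_t i = 0; i < n; i++) {
        if (addr >= table[i].base && addr <= table[i].limit)
            return is_write ? table[i].write_ok : table[i].read_ok;
    }
    return 0; /* assumed default-deny: unmapped addresses are not passed */
}
```

A table with a single read-only BIOS range would then permit reads within that range while rejecting writes and accesses to unmapped addresses.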
The method includes receiving a transaction request for a storage location associated with the storage device from the requesting device, determining if the requesting device is authorized to access the storage device, and mapping the storage location in the transaction request according to the address mapping of the storage device if the requesting device is authorized to access the storage device. Determining if the requesting device is authorized to access the storage device may include providing a challenge in response to receiving the transaction request, receiving a response to the challenge, and determining if the response to the challenge is equal to an expected response.BRIEF DESCRIPTION OF THE DRAWINGSThe invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify similar elements, and in which:FIG. 1A illustrates a block diagram of a prior art computer system, while FIG. 1B illustrates a block diagram of a prior art south bridge;FIGS. 2A and 2B illustrate flowcharts of prior art methods for operating a computer system using code stored in ROM;FIG. 3 illustrates a flowchart of an embodiment of data and command flow in a computer system having a secure execution box, according to one aspect of the present invention;FIG. 4 illustrates a block diagram of an embodiment of a computer system including security hardware in the south bridge as well as a crypto-processor, according to one aspect of the present invention;FIGS. 5A and 5B illustrate block diagrams of embodiments of a south bridge including security hardware for controlling SMM, according to various aspects of the present invention;FIG. 6 illustrates a block diagram of an embodiment of a south bridge including security hardware for secure SMM operations, according to one aspect of the present invention;FIGS.
7A, 7B, 7C, and 7D illustrate embodiments of secure storage, according to various aspects of the present invention;FIGS. 8A and 8B illustrate block diagrams of embodiments of a BIOS ROM and an SMM ROM for secure SMM operations, respectively, according to various aspects of the present invention;FIGS. 9A and 9B illustrate block diagrams of embodiments of a computer system operable to control the timing and duration of SMM operations, according to one aspect of the present invention;FIG. 10A illustrates a flowchart of an embodiment of a method for forcing a processor out of SMM, according to one aspect of the present invention, while FIG. 10B illustrates a flowchart of an embodiment of a method for reinitiating SMM upon the early termination of SMM, according to one aspect of the present invention;FIGS. 11A and 11B illustrate flowcharts of embodiments of methods for updating a monotonic counter stored in the SMM ROM, according to various aspects of the present invention;FIGS. 12A and 12B illustrate flowcharts of embodiments of methods for updating a monotonic counter in the south bridge, according to various aspects of the present invention;FIGS. 13A and 13B illustrate flowcharts of embodiments of a method for providing a monotonic value in a computer system, according to one aspect of the present invention;FIGS. 14A and 14B illustrate block diagrams of embodiments of processors including random number generators using entropy registers, according to one aspect of the present invention;FIG. 15 illustrates a block diagram of another embodiment of a random number generator, according to one aspect of the present invention;FIGS. 16A, 16B, 16C, 16D, 16E, 16F, and 16G illustrate flowcharts of embodiments of methods for accessing the security hardware, which may be locked, according to various aspects of the present invention;FIGS. 17A, 17B, and 17C illustrate block diagrams of embodiments of the access locks 460 shown in FIG. 6, while FIG. 
17D illustrates a block diagram of an embodiment of the override register, all according to various aspects of the present invention;FIG. 18A illustrates a prior art flowchart of an SMM program, while FIG. 18B illustrates a flowchart of an embodiment of operation of an interruptible and re-enterable SMM program, and FIG. 18C illustrates a flowchart of an embodiment of operation of a computer system running the interruptible and re-enterable SMM program, according to various aspects of the present invention;FIGS. 19A, 19B, and 19C illustrate block diagrams of embodiments of computer systems with the BIOS ROM accessible to the processor at boot time and to the south bridge at other times, according to various aspects of the present invention;FIGS. 20A-20D illustrate block diagrams of embodiments of processors including lock registers and logic, according to various aspects of the present invention;FIG. 21 illustrates a flowchart of an embodiment of a method for initiating HDT mode, according to one aspect of the present invention;FIG. 22 illustrates a flowchart of an embodiment of a method for changing the HDT enable status, according to one aspect of the present invention;FIG. 23 illustrates a flowchart of an embodiment of a method for initiating the microcode loader, according to one aspect of the present invention;FIG. 24 illustrates a flowchart of an embodiment of a method for changing the microcode loader enable status, according to one aspect of the present invention;FIGS. 25A, 25B, 26, and 27 illustrate flowcharts of embodiments of methods for secure access to storage, according to various aspects of the present invention;FIG. 28 illustrates a prior art challenge-response method for authentication;FIGS. 29A, 29B, 29C, 29D, and 29E illustrate embodiments of computer devices or subsystems including GUIDs and/or a stored secret and/or a system GUID, according to various aspects of the present invention;FIGS.
30A and 30B illustrate flowcharts of embodiments of methods for operating a computer system including a biometric device, such as the biometric device shown in FIG. 29A, according to various aspects of the present invention;FIGS. 31A, 31B, 32A, 32B, 32C, and 33 illustrate flowcharts of embodiments of methods for authenticating a device in a computer system, such as computer systems including the computer subsystems of FIGS. 29A, 29D, and 29E, according to various aspects of the present invention;FIGS. 34 and 35 illustrate flowcharts of embodiments of methods for removing a device from a computer system once the device has been united with the computer system using an introduced bit, according to various aspects of the present invention;FIG. 36 illustrates a block diagram of an embodiment of a computer subsystem including bus interface logics with master mode capabilities, according to one aspect of the present invention;FIG. 37 illustrates a flowchart of an embodiment of a method for operating in a master mode outside the operating system, according to one aspect of the present invention;FIG. 38A illustrates a flowchart of an embodiment of a method for booting a computer system including authentication via the crypto-processor using master mode logic, while FIG. 38B illustrates a flowchart of an embodiment of a method for booting a computer system including authentication via the security hardware using the master mode logic, according to various aspects of the present invention;FIGS. 39A, 39B, and 39C illustrate block diagrams of embodiments of computer systems 5000 for securing a device, a computer subsystem, or a computer system using timers to enforce periodic authentication, according to various aspects of the present invention;FIGS.
40A and 40B illustrate flowcharts of embodiments of a method for securing a device, a computer subsystem, or a computer system, such as a portable computer, by limiting use to finite periods of time between successive authorizations, according to various aspects of the present invention;FIG. 41 illustrates a flowchart of an embodiment of a method for booting a computer system including initializing a timer to enforce periodic authentication and authorization, according to one aspect of the present invention; andFIGS. 42A and 42B illustrate block diagrams of embodiments of the system management registers, according to various aspects of the present invention.While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTSIllustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will, of course, be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. 
The use of a letter in association with a reference number is intended to show alternative embodiments or examples of the item to which the reference number is connected.System Management Mode (SMM) is a mode of operation in the computer system that was implemented to conserve power. The SMM was created for the fourth generation x86 processors. As newer x86 generation processors have appeared, the SMM has become relatively transparent to the operating system. That is, computer systems enter and leave the SMM with little or no impact on the operating system.Referring now to the drawings, and in particular to FIG. 2A, a flowchart of a prior art method of initializing a computer system using code stored in the BIOS 122 is shown. During initialization of the power supply, the power supply generates a power good signal to the north bridge, in block 136. Upon receiving the power good signal from the power supply, the south bridge (or north bridge) stops asserting the reset signal for the processor, in block 138.During initialization, the processor reads the default jump location, in block 140. The default jump location in memory is usually at a location such as FFFF0h. The processor performs a jump to the appropriate BIOS code location (e.g. FFFF0h) in the ROM BIOS, copies the BIOS code to the RAM memory, and begins processing the BIOS code instructions from the RAM memory, in block 142. The BIOS code, processed by the processor, performs a power-on self test (POST), in block 144.The BIOS code next looks for additional BIOS code, such as from a video controller, IDE controller, SCSI controller, etc. and displays a start-up information screen, in block 146. As examples, the video controller BIOS is often found at C000h, while the IDE controller BIOS code is often found at C800h. The BIOS code may perform additional system tests, such as a RAM memory count-up test, and a system inventory, including identifying COM (serial) and LPT (parallel) ports, in block 148.
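The cold-boot steps in blocks 136-148 above can be sketched as an ordered sequence. The step labels below are illustrative shorthand for the flowchart blocks, not names used in the disclosure.

```c
#include <string.h>

/* Record the prior-art cold-boot sequence (blocks 136-148 of FIG. 2A)
 * as an ordered list of illustrative step names. */
#define MAX_STEPS 16

typedef struct {
    const char *steps[MAX_STEPS];
    int n;
} boot_log;

static void step(boot_log *log, const char *name)
{
    log->steps[log->n++] = name;
}

void cold_boot(boot_log *log)
{
    log->n = 0;
    step(log, "power_good");          /* block 136: power supply signals power good */
    step(log, "release_reset");       /* block 138: bridge deasserts processor reset */
    step(log, "read_jump_location");  /* block 140: fetch from default location, e.g. FFFF0h */
    step(log, "shadow_and_run_bios"); /* block 142: copy BIOS code to RAM and process it */
    step(log, "post");                /* block 144: power-on self test */
    step(log, "scan_option_roms");    /* block 146: video/IDE/SCSI BIOS, start-up screen */
    step(log, "system_tests");        /* block 148: memory count-up, COM/LPT inventory */
}
```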
The BIOS code also identifies plug-and-play devices and other similar devices and then displays a summary screen of devices identified, in block 150.The BIOS code identifies the boot location, and the corresponding boot sector, in block 152. The boot location may be on a floppy drive, a hard drive, a CDROM, a remote location, etc. The BIOS code next calls the boot sector code at the boot location to boot the computer system, such as with an operating system, in block 154.It is noted that for a cold boot or a hard (re)boot, all or most of the descriptions given in blocks 136-154 may occur. During a warm boot or a soft (re)boot the BIOS code usually jumps from block 142 into block 148, skipping the POST, memory tests, etc.In FIG. 2B, a flowchart of a prior art method of operating a computer system in SMM using code stored in the BIOS 122 is shown. An interrupt controller receives a request for SMM, in block 172. The interrupt controller signals the request for SMM to the processor by asserting a system management interrupt (SMI#) signal, in block 174.The processor recognizes the request for SMM and asserts an SMI ACTive (SMIACT#) signal, in block 176. The system recognizes the SMIACT# signal, disables access to the system RAM, and enables access to system management RAM (SMRAM) space, in block 178.The current processor state is saved to SMRAM, in block 180. The processor resets to the SMM default state and enters SMM, in block 182. The processor next reads the default pointer and jumps to the appropriate place in SMRAM space, in block 184. In block 186, the source and/or nature of the SMI request is identified.An SMI handler services the SMI request, in block 188. After servicing the SMI request, the SMI handler issues a return from SMM (RSM) instruction to the processor, in block 190. Upon executing the RSM instruction, the processor restores the saved state information and continues normal operation, in block 192.FIG.
3 illustrates a flowchart of an embodiment of data and command flow in a computer system having a secure execution box 260, according to one aspect of the present invention. User input and output (I/O) data and/or commands 205 are provided to and received from one or more applications 210. The applications 210 exchange data and commands with cryptography service providers 215 within the computer system, such as the computer system 100 or any other computer system. The cryptography service providers 215 may use API (Application Programming Interface) calls 220 to interact with drivers 225 that provide access to hardware 230.According to one aspect of the present invention, the drivers 225 and the hardware 230 are part of a secure execution box configured to operate in a secure execution mode (SEM) 260. Trusted privacy, security and ownership (PSO) operations, also referred to simply as security operations, may take place while the computer system is in SEM 260. Software calls propagated from the user I/O 205 and/or the applications 210 may be placed into the secure execution box in SEM 260 via an SMM initiation register 425B (or SMM initiator 425A) discussed below with respect to FIG. 5B (or FIG. 5A). Parameters may be passed into and out of the secure execution box in SEM 260 via an access-protected mailbox RAM 415, also discussed below with respect to FIGS. 5A and 5B. The software calls have access, through the secure execution box in SEM 260, to various security hardware resources, such as described in detail below.In various embodiments of the present invention, power management functions may be performed inside SEM 260. One current standard for power management and configuration is the Advanced Configuration and Power Interface (ACPI) Specification. The most recent version is Revision 2.0, dated Jul. 27, 2000, and available from the ACPI website currently run by Teleport Internet Services, hereby incorporated herein by reference in its entirety.
According to the ACPI specification, control methods, a type of instruction, direct the system to perform an operation. The ACPI specification does not define how to carry out any of the instructions. The ACPI specification only defines the calls, and the software must be written to carry out the calls in a prescribed manner. The prescribed manner of the ACPI specification is very restrictive. Some hardware registers cannot be accessed at all through the prescribed interface. To access those registers, various aspects of the present invention generate an SMI# to enter SMM and read these registers. As power management has the potential to be abused, e.g. the processor voltage and frequency may be raised above operating limits to destroy the processor, or lowered below operating limits leading to a denial of service, ACPI calls should be carried out in a secure manner, such as inside SEM 260.Inside SEM 260, each ACPI request can be checked against some internal rules for safe behavior. Using terminology more completely described below, the ACPI request would be placed in the inbox of the mailbox, parameter values read from the inbox, the ACPI request evaluated using the inbox parameters for acceptability, and the request then either carried out or not, based on the evaluation results. For additional details of various embodiments, see FIGS. 6, 42A, and 42B below.FIG. 4 illustrates a block diagram of an embodiment of a portion of an improved version of computer system 100 including security hardware 370 in a south bridge 330, as well as a crypto-processor 305, according to one aspect of the present invention. The south bridge 330 includes the security hardware 370, an interrupt controller (IC) 365, USB interface logic 134C, and the LPC bus interface logic (LPC BIL) 134D. The IC 365 is coupled to the processor 102. The USB interface logic 134C is coupled through an optional USB hub 315 to a biometric device 320 and a smart card reader 325.
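The in-SEM screening of an ACPI request described above (parameters read from the inbox, evaluated against internal rules for safe behavior, and the request carried out only if acceptable) can be sketched as follows. The parameter fields and the numeric limits are made-up placeholder values, not limits stated in the disclosure.

```c
#include <stdint.h>

/* Illustrative ACPI power-management request as it might be read from
 * the mailbox inbox: a target processor voltage and frequency. */
typedef struct {
    uint32_t voltage_mv;
    uint32_t freq_mhz;
} acpi_request;

/* Placeholder safe operating limits (assumptions for illustration). */
enum { VOLT_MIN = 900, VOLT_MAX = 1500, FREQ_MIN = 300, FREQ_MAX = 2000 };

/* Return 1 if the request is within safe operating limits (carry it
 * out), 0 if it must be rejected (e.g. over-voltage that could destroy
 * the processor, or under-clocking that denies service). */
int acpi_request_safe(const acpi_request *req)
{
    if (req->voltage_mv < VOLT_MIN || req->voltage_mv > VOLT_MAX)
        return 0;
    if (req->freq_mhz < FREQ_MIN || req->freq_mhz > FREQ_MAX)
        return 0;
    return 1;
}
```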
The LPC bus 118 is coupled to the south bridge 330 through the LPC BIL 134D. The crypto-processor 305 is also coupled to the LPC bus 118. A memory permission table 310 within the crypto-processor 305 provides address mappings and/or memory range permission information. The memory permission table 310 may be stored in a non-volatile memory. A BIOS 355, i.e. some memory, preferably read-only memory or flash memory, is coupled to the crypto-processor 305. The security hardware 370 may include both security hardware and secure assets protected by the security hardware.The security hardware 370 in the south bridge 330 may be operable to provide an SMI interrupt request to the IC 365 for the processor 102. The security hardware 370 may also interact with the crypto-processor 305. Access to the BIOS 355 is routed through the crypto-processor 305. The crypto-processor 305 is configured to accept and transfer access requests to the BIOS 355. The crypto-processor 305 therefore may understand the address mappings of the BIOS 355. According to one aspect of the present invention, the security hardware 370 allows the computer system 100 to become an embodiment of the secure execution box 260 shown in FIG. 3.In one embodiment, the crypto-processor 305 is configured to accept an input from the biometric device 320 and/or the smart card reader 325 over the USB interface, i.e. through the optional USB hub 315 and the USB interface logic 134C, and over the LPC bus 118. Other interfaces, such as IDE or PCI, may be substituted. The crypto-processor 305 may request one or more inputs from the biometric device 320 and/or the smart card reader 325 to authenticate accesses to the BIOS 355, other storage devices, and/or another device or subsystem in the computer system 100.It is noted that the IC 365 may be included in the processor instead of the south bridge 330. The IC 365 is also contemplated as a separate unit or associated with another component of the computer system 100.
It is also noted that the operations of the LPC bus 118 may correspond to the prior art Low Pin Count Interface Specification Revision 1.0 of Sep. 29, 1997. The operations of the LPC bus 118 may also correspond to the extended LPC bus disclosed in co-pending U.S. patent application Ser. No. 09/544,858, filed Apr. 7, 2000, entitled "Method and Apparatus For Extending Legacy Computer Systems", whose inventor is Dale E. Gulick, which is hereby incorporated by reference in its entirety. It is further noted that the USB interface logic 134C may couple to the LPC BIL 134D in any of a variety of ways, as is well known in the art for coupling different bus interface logics in a bridge.FIGS. 5A and 5B illustrate block diagrams of embodiments of the south bridge 330, including the security hardware 370, according to various aspects of the present invention. In FIG. 5A, the south bridge 330A includes the security hardware 370A and IC 365. The security hardware 370A includes sub-devices such as an SMM timing controller 401A, an SMM access controller 402A, and control logic 420A. The sub-devices may be referred to as security hardware or secure assets of the computer system 100. The SMM timing controller 401A includes an SMM indicator 405, a duration timer 406A, a kick-out timer 407A, and a restart timer 408. The SMM access controller 402A includes SMM access filters 410, mailbox RAM 415, and an SMM initiator 425A.As shown in FIG. 5A, the control logic 420A is coupled to control operation of the SMM timing controller 401A, the SMM access controller 402A, and the SMM initiator 425A. Input and output (I/O) to the security hardware 370A pass through the SMM access filters 410 and are routed through the control logic 420A.The SMM timing controller 401A includes the duration timer 406A, which measures how long the computer system 100 is in SMM.
The kick-out timer 407A, also included in the SMM timing controller 401A, counts down from a predetermined value while the computer system 100 is in SMM. The control logic 420A is configured to assert a control signal (EXIT SMM 404) for the processor to exit SMM, such as in response to the expiration of the kick-out timer 407A. The restart timer 408, included in the SMM timing controller 401A, starts counting down from a predetermined value after the kick-out timer 407A reaches zero. The SMM indicator 405, also included in the SMM timing controller 401A, is operable to monitor the status of one or more signals in the computer system, such as the SMI# (System Management Interrupt) signal and/or the SMIACT# (SMI ACTive) signal, to determine if the computer system is in SMM.The SMM access controller 402A includes the SMM access filters 410, which are configured to accept input requests for the sub-devices within the security hardware 370A. When the computer system 100 is in SMM, the SMM access filters are configured to pass access requests (e.g. reads and writes) to the control logic 420A and/or the target sub-device. When the computer system 100 is not in SMM, the SMM access filters are configured to respond to all access requests with a predetermined value, such as all '1's. The SMM access controller 402A also includes the mailbox RAM 415. In one embodiment, the mailbox RAM 415 includes two banks of RAM, such as 512 bytes each, for passing parameters into and out of the secure execution box 260. Parameters passed to or from the sub-devices included within the security hardware 370 are exchanged at the mailbox RAM 415. One bank of RAM 415, an inbox, is write-only to most or all of the computer system in most operating modes. Thus, parameters to be passed to the sub-devices included within the security hardware 370 may be written into the inbox. During selected operating modes, such as SMM, both read and write accesses are allowed to the inbox.
Another bank of RAM 415, an outbox, is read-only to most or all of the computer system in most operating modes. Thus, parameters to be received from the sub-devices included within the security hardware 370 may be read from the outbox. During selected operating modes, preferably secure modes, such as SMM, both read and write accesses are allowed to the outbox.The SMM initiator 425A may advantageously provide for a convenient way to request that the computer system 100 enter SMM. A signal may be provided to the SMM initiator 425A over the request (REQ) line. The signal should provide an indication of the jump location in SMM memory. The SMM initiator 425A is configured to make a request for SMM over the SMM request (SMM REQ) line, for example, by submitting an SMI# to the interrupt controller 365. The SMM initiator 425A is also configured to notify the control logic 420A that the request for SMM has been received and passed to the interrupt controller 365.In FIG. 5B, the south bridge 330B includes the security hardware 370B. The IC 365 is shown external to the south bridge 330B. The security hardware 370B includes an SMM timing controller 401B, an SMM access controller 402B, and control logic 420B. The SMM timing controller 401B includes an SMM indicator 405, a duration/kick-out timer 407B, and a restart timer 408. The SMM access controller 402B includes SMM access filters 410 and mailbox RAM 415. An SMM initiation register 425B is shown external to the south bridge 330B.As shown in FIG. 5B, the control logic 420B is coupled to control operation of the SMM timing controller 401B and the SMM access controller 402B. Input and output (I/O) signals to the security hardware 370B pass through the SMM access filters 410 and are routed through the control logic 420B.
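The inbox/outbox access rules described above for the mailbox RAM 415 (inbox write-only and outbox read-only in most operating modes, with both banks fully accessible in SMM) can be sketched as a small predicate. The enum names are illustrative; the disclosure does not name these values.

```c
/* Illustrative access rules for the two banks of mailbox RAM 415. */
typedef enum { BANK_INBOX, BANK_OUTBOX } bank_t;
typedef enum { OP_READ, OP_WRITE } op_t;

/* Return 1 if the access is permitted, 0 otherwise. */
int mailbox_access_allowed(bank_t bank, op_t op, int in_smm)
{
    if (in_smm)
        return 1;                  /* SMM: reads and writes to both banks */
    if (bank == BANK_INBOX)
        return op == OP_WRITE;     /* inbox: write-only outside SMM */
    return op == OP_READ;          /* outbox: read-only outside SMM */
}
```

Under these rules, code outside SMM can deposit parameters in the inbox and collect results from the outbox, but cannot read back the inbox or tamper with the outbox.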
The control logic 420B is also coupled to receive an indication of a request for SMM from the SMM initiation register 425B.The SMM timing controller 401B includes the duration/kick-out timer 407B, which measures how long the computer system 100 is in SMM by counting up to a predetermined value while the computer system 100 is in SMM. The control logic 420B is configured to assert a control signal for the processor to exit SMM in response to the duration/kick-out timer 407B reaching the predetermined value. The restart timer 408 starts counting down from a predetermined value after the duration/kick-out timer 407B reaches the predetermined value. The SMM indicator 405 is operable to monitor the status of one or more signals in the computer system, such as the SMI# (System Management Interrupt) signal and/or the SMIACT# (SMI ACTive) signal, to determine if the computer system is in SMM.The SMM access controller 402B includes the SMM access filters 410, which are configured to accept input requests for the sub-devices within the security hardware 370B. When the computer system 100 is in SMM, the SMM access filters are configured to pass access requests (e.g. reads and writes) to the control logic 420B and/or the target sub-device. When the computer system 100 is not in SMM, the SMM access filters may be configured to respond to all access requests with a predetermined value, such as all '1's. The SMM access controller 402B also includes the mailbox RAM 415, described above with respect to FIG. 5A.The SMM initiation register 425B may advantageously provide for a convenient way to request that the computer system 100 enter SMM. A signal may be provided to the SMM initiation register 425B over the request (REQ) line. The signal should provide an indication of the jump location in SMM memory. The SMM initiation register 425B is configured to provide the indication to the control logic 420B.
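The interplay described above between the duration/kick-out timer 407B and the restart timer 408 (count while in SMM, force an exit at a limit, then count down toward a re-entry request) can be modeled as a tick-level simulation. The field names and threshold values are illustrative assumptions.

```c
/* Illustrative tick-level model of the duration/kick-out timer 407B and
 * restart timer 408. Threshold values are arbitrary. */
typedef struct {
    int in_smm;         /* 1 while the system is in SMM */
    int kickout_count;  /* counts up while in SMM */
    int kickout_limit;  /* predetermined value forcing an SMM exit */
    int restart_count;  /* counts down after the forced exit */
    int smm_requested;  /* set when the restart timer expires */
} smm_timers;

void tick(smm_timers *t)
{
    if (t->in_smm) {
        if (++t->kickout_count >= t->kickout_limit) {
            t->in_smm = 0;        /* assert the EXIT SMM control signal */
            t->restart_count = 3; /* arm the restart countdown (arbitrary) */
            t->kickout_count = 0;
        }
    } else if (t->restart_count > 0) {
        if (--t->restart_count == 0)
            t->smm_requested = 1; /* request that SMM be reinitiated */
    }
}
```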
The control logic 420B is configured to make a request for SMM over the SMM request (SMM REQ) line, for example, by submitting an SMI# to the interrupt controller 365.It is noted that in the embodiment illustrated in FIG. 5A, the SMM initiator 425A includes internal logic for handling the SMM request. In the embodiment illustrated in FIG. 5B, the SMM initiation register 425B relies on the control logic 420B to handle the SMM request. It is also noted that the SMM initiator 425A is part of the security hardware 370A, while the SMM initiation register 425B is not part of the security hardware 370B.FIG. 6 illustrates a block diagram of an embodiment of the south bridge 330C including security hardware 370C, according to one aspect of the present invention. As shown, the security hardware 370C includes sub-devices, such as the SMM timing controller 401, the SMM access controller 402, the control logic 420, a TCO counter 430, a monotonic counter 435A, the scratchpad RAM 440, a random number generator 455, secure system (or SMM) management registers 470, OAR-(Open At Reset) locks 450, and an OAR override register 445. The SMM access controller 402 includes one or more access locks 460 within the SMM access filters 410. Some aspects of embodiments of the SMM timing controller 401, the SMM access controller 402, and the control logic 420 are described herein with respect to FIGS. 5A and 5B, above.The embodiment of the SMM access controller 402 illustrated in FIG. 6 includes the one or more access locks 460 within the SMM access filters 410. The access locks 460 provide a means of preventing (or locking) and allowing (or unlocking) access to one or more of the devices within the security hardware 370C. Various embodiments for the one or more access locks 460 are shown in FIGS. 17A-17C and described with reference thereto.In one embodiment, the access locks 460 are open at reset (OAR), allowing the BIOS software access to the security hardware 370. 
The BIOS software then closes the access locks 460 prior to calling the boot sector code, shown in block 154 in FIG. 2A. In various embodiments, the access locks 460 may be opened by software or hardware to allow for access to the security hardware 370. For example, the access locks 460 may be opened by a signal from the IC 365 or the processor 102 (or 805A or 805B from FIGS. 9A and 9B) or the control logic 420. The access locks 460 may be opened in response to an SMI# or in response to the processor 102 or 805 entering SMM. Additional information on the access locks 460 may be obtained from one or more of the methods 1600A-1600C described below with respect to FIGS. 16A-16C.

The TCO counter (or timer) 430 may include a programmable timer, such as a count-down timer, that is used to detect a lock-up of the computer system 100. A lock-up may be defined as a condition of the computer system 100 where one or more subsystems or components do not respond to input signals for more than a predetermined period of time. The input signals may include internal signals from inside the computer system 100 or signals from outside the computer system 100, such as from a user input device (e.g. keyboard, mouse, trackball, biometric device, etc.). It is also noted that the lock-ups may be software or hardware in nature. According to various aspects of the present invention, the TCO counter 430 may be programmed and read from inside SMM. The TCO counter 430 is preferably programmed with a value less than a default duration for the kick-out timer 407.
In one embodiment, the TCO timer 430 generates an SMI# upon a first expiration of the TCO timer 430, and the TCO timer 430 generates a reset signal for the computer system upon a second, subsequent expiration of the TCO timer 430. In one embodiment, the TCO timer 430 may be accessed by the computer system 100, or software running in the computer system 100, for the computer system 100 to recover from lock-ups when the computer system is not in SMM. In another embodiment, the TCO timer 430 may be accessed by the computer system 100 both in and out of SMM.

The monotonic counter 435A comprises a counter, preferably at least 32 bits and inside the RTC battery well 125, which updates when the value stored in the monotonic counter 435A is read. The monotonic counter 435A is configured to update the stored value to a new value that is larger than the value previously stored. Preferably, the new value is larger by only the smallest incremental amount possible, although other amounts are also contemplated. Thus, the monotonic counter 435A may advantageously provide a value that is always increasing, up to a maximum or rollover value. Additional details may be found below with respect to FIGS. 8, 12, and 13.

The scratchpad RAM 440 includes one or more blocks of memory that are available only while the computer system 100 is in certain operating modes, such as SMM. It is also contemplated that other sub-devices of the security hardware 370 may use the scratchpad RAM 440 as a private memory. One embodiment of the scratchpad RAM 440 includes 1 kB of memory, although other amounts of memory are also contemplated. In one embodiment, the scratchpad RAM is open at reset to all or most of the computer system 100, while in another embodiment, the scratchpad RAM is inaccessible while the computer system is booting.

The random number generator (RNG) 455 is configured to provide a random number with a number of bits within a predetermined range.
In one embodiment, a new random number from 1 to 32 bits in length is provided in response to a request for a random number. It is noted that restricting access to the RNG, such as only in SMM, may advantageously force software to access the RNG through a standard API (application programming interface), allowing for increased security and easing hardware design constraints. Additional details may be found below with respect to FIGS. 14 and 15.

The OAR locks 450 may include a plurality of memory units (e.g. registers), which include associated programming bits (or lock bits) that lock the memory (or memories) used to store BIOS information or other data, for example, BIOS ROM 355 and SMM ROM 550 in FIGS. 7A and 7B below. Each memory unit may have, by way of example, three lock bits associated with it. In one embodiment, four 8-bit registers may store the lock bits for each 512 kB ROM page, one register for every two 64-kB segments. With sixteen blocks of four registers, a maximum of 8 MB of ROM may be locked. Addressing may be as follows:

  64-kB segment   Register     Address
  0, 1            Register 0   FFBx,E000h
  2, 3            Register 1   FFBx,E001h
  4, 5            Register 2   FFBx,E002h
  6, 7            Register 3   FFBx,E003h

Each physical ROM chip may include four identification pins (ID[3:0]), known as strapping pins. The strapping pins may be used to construct sixteen spaces of 64 kB each.
The 'x' in the address may represent the decode of the strapping pins, or the inverse.

The lock registers from the OAR locks 450 may include:

  Register\Bits   7          OAR Lock 6:4   3          OAR Lock 2:0
  Register 0      Reserved   Segment 1      Reserved   Segment 0
  Register 1      Reserved   Segment 3      Reserved   Segment 2
  Register 2      Reserved   Segment 5      Reserved   Segment 4
  Register 3      Reserved   Segment 7      Reserved   Segment 6

In one embodiment, one bit controls write access, one bit controls read access, and one bit prevents the other two bits from being changed. In one embodiment, once the locking bit is set (also described as the state being locked down), the write access bit and read access bit cannot be reprogrammed until the memory receives a reset signal. The layout of each register may include:

  Bit     7       6        5        4        3       2        1        0
  Value   Rsvrd   Lock 2   Lock 1   Lock 0   Rsvrd   Lock 2   Lock 1   Lock 0

with a decode of the three lock bits including:

  Decode   Read Lock   Lock-Down   Write Lock   Resulting block state
           (Data 2)    (Data 1)    (Data 0)
  0x00     0           0           0            Full access
  0x01     0           0           1            Write locked (default state)
  0x02     0           1           0            Lock open (full access locked down)
  0x03     0           1           1            Write locked down
  0x04     1           0           0            Read locked
  0x05     1           0           1            Read and write locked
  0x06     1           1           0            Read locked down
  0x07     1           1           1            Read and write locked down

The embodiment of the security hardware 370C illustrated in FIG. 6 also includes the OAR override register 445. The OAR override register 445 provides a mechanism for allowing (or unlocking) and preventing (or locking) access to one or more of the devices within the security hardware 370C.
The OAR override register 445 also provides a mechanism to override the access locks 460. In one embodiment, the OAR override register 445 includes a first indicator that the access locks 460 are to be ignored, with access to the security hardware locked by the access locks 460 either always available or never available, as implemented. The OAR override register 445 may also include a second indicator of whether the status of the first indicator may be changed. If the second indicator shows that the first indicator may not be changed, then the device including the OAR override register 445 preferably must be reset for the second indicator to be changed. In other words, the second indicator is preferably OAR, similar to one embodiment of the access locks 460.

Methods that include using the access locks 460 and/or the OAR override indicators are described below with respect to FIGS. 16A-16F. Various embodiments for the one or more access locks 460 are shown in FIGS. 17A-17C and described with reference thereto, and an embodiment of the OAR override register 445 is shown in FIG. 17D and described with reference thereto.

Example embodiments of the secure system management registers 470 are shown below in FIGS. 98A and 98B and described therewith. Briefly, in one embodiment, the secure system management registers 470 include one or more ACPI lock bits 9810 to secure various ACPI or related functions against unauthorized changes. The ACPI lock bits 9810, once set, prevent changes to the ACPI or related functions. A request to change one of the ACPI or related functions requires that a respective ACPI lock bit 9810N be released before the respective one of the ACPI or related functions is changed. In another embodiment, the secure system management registers 470 include one or more ACPI range registers 9820 and/or one or more ACPI rule registers 9830.
Each ACPI range register 9820 may be configured to store a value or values that define allowable or preferred values for a specific ACPI or related function. Each ACPI rule register 9830 may be configured to store part or all of a rule for determining if a change to one of the ACPI or related functions should be allowed. Examples of ACPI or related functions include changing a voltage, changing a frequency, turning on or off a cooling fan, and a remote reset of the computer system.

It is noted that in one embodiment, all of the security hardware 370 (and the SMM initiation register 425B) are inside the RTC battery well 125. In other embodiments, selected sub-devices of the security hardware 370 are excluded from the RTC battery well 125. In one embodiment, only a portion of the scratchpad RAM 440 is inside the RTC battery well 125, with the remaining portion outside the RTC battery well 125. For example, in one embodiment, the mailbox RAM 415 is outside the RTC battery well 125.

FIGS. 7A and 7B illustrate embodiments of extended BIOS security, according to various aspects of the present invention. In FIG.
7A, the BIOS ROM 355 and the SMM ROM 550 are coupled to the LPC bus 118. As shown, a crypto processor 305, including a secret 610A, is coupled between the BIOS ROM 355 and the LPC bus 118. In FIG. 7B, an extended BIOS ROM 555 is shown coupled to the LPC bus 118. The extended BIOS ROM 555 includes the BIOS ROM 355 and the SMM ROM 550.

BIOS ROM 355 memory space in the computer system 100 may include anywhere from 128 kB to 4 MB, divided into 64 kB segments. An additional 4 MB or more of SMM ROM 550 memory space may be addressed via a paging mechanism, for example, where the second page of ROM memory space is within separate chips and selected by an additional set of identification select (IDSEL) pins. Each segment of the BIOS ROM 355 memory space and the SMM ROM 550 memory space may be lockable, and open at reset. In one embodiment, the access protection mechanism (i.e. the lock) is not implemented in the BIOS ROM 355 or SMM ROM 550, but, for example, in the south bridge 330C in the security hardware 370C, as previously described with respect to FIG. 6.

In one embodiment, the BIOS ROM 355 includes 4 MB of memory space. Read access to the BIOS ROM 355 memory space may be unrestricted at any time. Write locks on the BIOS ROM 355 memory space may be OAR and cover the memory space from FFFF,FFFFh to FFC0,0000h, in 32-bit address space on the LPC bus 145.

In one embodiment, the crypto processor 305 is a specialized processor that includes specialized cryptographic hardware. In another embodiment, the crypto processor 305 includes a general-purpose processor programmed with cryptographic firmware or software. In still another embodiment, the crypto processor 305 includes a general-purpose processor modified with specialized cryptographic hardware. Selected methods that may use or include the crypto processor 305 are described with respect to FIGS. 25A-26, with an example of a prior art challenge-response authentication (or verification) method shown in FIG.
28.

Other embodiments are also contemplated. For example, the BIOS ROM 355 may be coupled to the LPC bus 118, and the crypto processor 305 may be coupled between the SMM ROM 550 and the LPC bus 118. Also, the crypto processor 305 may be coupled between the extended BIOS ROM 555 and the LPC bus 118.

FIG. 7C illustrates an embodiment of protected storage 605, according to one aspect of the present invention. As shown, protected storage 605 is coupled to the LPC bus 118 and includes logic 609 and secret 610B, in addition to its storage locations. The protected storage 605 may include memory, such as RAM, ROM, flash memory, etc., or other storage media, such as hard drives, CDROM storage, etc. Although shown as a single unit, the protected storage is also contemplated as a sub-system that includes separate components for storage and logic, such as shown in FIG. 7D. According to FIG. 7D, a crypto-processor 305, including a secret 610A, is coupled in front of a protected storage 605B. Access to the protected storage 605B is through the crypto-processor 305. The protected storage 605B includes data storage 608A, access logic 609B, a lock register 606, and code storage 607, including a secret 610B.

FIGS. 8A and 8B illustrate block diagrams of embodiments of a BIOS ROM 355 and an SMM ROM 550 for secure SMM operations, respectively, according to various aspects of the present invention. As shown in FIG. 8A, the BIOS ROM 355 may include data storage 608B, a secret 610C, and private memory 606.

As shown in FIG. 8B, the SMM ROM 550 may be divided into a plurality of SMM ROM blocks 605-615, a stored secret 620, a plurality of public ROM blocks 625-630, one or more reserved ROM blocks 635, one or more registers 640, and a monotonic counter 435B. The plurality of SMM ROM blocks 605-615 may include an SMM ROM 0 block 605, an SMM ROM 1 block 610, and an SMM ROM 2 block 615. The plurality of public ROM blocks 625-630 may include a public ROM block 0 625 and a public ROM block 1 630.
One embodiment of access rights, lock status, and 32-bit address ranges in the LPC bus 118 space is given here in table form:

  ROM BLOCK         READ ACCESS    WRITE LOCK                  ADDRESS RANGE
  SMM ROM 0 605     SMM Only       Write Once                  FFBx,1FFFh:FFBx,0000h
  SMM ROM 1 610     SMM Only       Never Erase                 FFBx,3FFFh:FFBx,2000h
  SMM ROM 2 615     SMM Only       None                        FFBx,5FFFh:FFBx,4000h
  SMM Counter 620   SMM Only       None                        FFBx,7FFFh:FFBx,6000h
  Public 0 625      Unrestricted   Write Once in SMM           FFBx,9FFFh:FFBx,8000h
  Public 1 630      Unrestricted   Never Erase, Write in SMM   FFBx,BFFFh:FFBx,A000h
  Reserved 635      N/A            N/A                         FFBx,DFFFh:FFBx,C000h
  Registers 640     N/A            N/A                         FFBx,FFFFh:FFBx,E000h

The 'x' in the address ranges given in the table may denote the strapping pin decode or its inverse. In one embodiment, the ROM blocks 605-615 and 625-630 in the table are each 64 kB in size. In one embodiment, the computer system may support up to 8 MB of extended BIOS ROM 555 storage, divided into sixteen pages of 512 kB each. In another embodiment, the memory address range from FFBx,FFFFh down to FFBx,0000h includes the plurality of SMM ROM blocks 605-615, the SMM counter 620, the plurality of public ROM blocks 625-630, the one or more registers 640, and the monotonic counter 435B.

The one or more reserved ROM blocks 635 may be used for future expansion. The one or more registers 640 may store additional data, as needed.

In one embodiment, the monotonic counter 435B is stored flat, such as a chain of 8-bit values in an 8K-byte ROM. This embodiment provides 8K bits that are counted by noting the number of changed bits (or the most significant bit that is different). It is noted that 8K bits stored flat translates into 13 bits binary (i.e. 8*1024 = 8192 = 2^13). The monotonic counter 435B is initially in the erased state, such as with all bits set to one.
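The flat encoding just described, together with the byte-by-byte update rule detailed in the surrounding text, can be modeled in C. This is a minimal sketch under the assumptions that the erased state is all ones and that each update clears the lowest bit still set in the current byte; the function names and the byte-array representation are illustrative, not this description's interface.

```c
#include <stdint.h>
#include <stddef.h>

/* Convert the flat encoding to a binary count by counting cleared bits. */
uint32_t flat_count(const uint8_t *rom, size_t len)
{
    uint32_t count = 0;
    for (size_t i = 0; i < len; i++)
        for (int b = 0; b < 8; b++)
            if (!(rom[i] & (1u << b)))
                count++;
    return count;
}

/* One update: find the most significant (highest-numbered) byte that is
 * not still erased; if that byte is already full (all zeros), move to the
 * next byte; if the counter is fully erased, start at byte 0. Then clear
 * the lowest bit still set in the chosen byte. */
void flat_update(uint8_t *rom, size_t len)
{
    size_t i = len;
    while (i-- > 0)
        if (rom[i] != 0xFF)
            break;
    if (i == (size_t)-1)
        i = 0;                  /* fully erased: byte 0 is chosen first */
    else if (rom[i] == 0x00)
        i++;                    /* current byte full: advance to next byte */
    if (i < len)                /* otherwise rollover: caller must erase */
        rom[i] &= (uint8_t)(rom[i] - 1);  /* clear lowest set bit */
}
```

With this model, a byte reading 1111,1000b becomes 1111,0000b on update, matching the example given below for the boot software.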
Any time the computer system is reset as a result of a power failure and there is an invalid RTC checksum, such as when the RTC battery 113 is not present, boot software inspects the monotonic counter 435B and updates it. The boot software may look for the most significant byte including at least one changed bit, such as a zero. Initially, byte 0 (zero) is chosen, when the monotonic counter 435B is in the erased state. The RTC checksum 127 is typically calculated by boot code from the BIOS whenever it updates the CMOS RAM 126A in the RTC battery well 125. The RTC checksum 127 is then stored in the RTC RAM 126B, also in the RTC battery well 125, which also holds date and time data. Typically, the RTC RAM 126B may be 256 bytes in size.

Flat encoding of the monotonic counter 435B is preferred to other methods of encoding primarily when the monotonic counter 435B is stored in flash memory. Other methods of encoding may be preferred when other memory types are used to store the values for the monotonic counter 435B. One consideration in choosing the method of encoding is which method provides for maximum use of the memory.

Continuing with the above embodiment for updating the monotonic counter 435B, the next most significant bit, in the most significant byte including at least one zero, is set to zero. For example, if byte five of the monotonic counter 435B returns 0000,0000b and byte six of the monotonic counter 435B returns 1111,1000b, then the boot software will write byte six of the monotonic counter 435B as 1111,0000b. If byte five of the monotonic counter 435B returns 0000,0000b and byte six of the monotonic counter 435B returns 1111,1111b, then the boot software would write byte six of the monotonic counter 435B as 1111,1110b.

Reading the monotonic counter 435B as the most significant bits and the monotonic counter 435A shown in FIG.
6 as the least significant bits, a 45-bit monotonic counter 435 may be read to obtain an always-increasing 45-bit value, when monotonic counter 435B provides 13 bits and monotonic counter 435A provides 32 bits. In this embodiment, the monotonic counter 435A provides bytes zero, one, two, and three, while the monotonic counter 435B provides bytes four and five of the six-byte value. Numbers of bits other than 45 are likewise contemplated.

Two special conditions are contemplated. If the monotonic counter 435A is read when storing the default or erased value, such as all ones, then the monotonic counter 435B in the SMM ROM 550 is updated. If the monotonic counter 435B in the SMM ROM 550 is updated a predetermined number of times, such as 65,536 times, then the boot software must erase the monotonic counter 435B in the SMM ROM 550 and start over with the default value, e.g. all ones.

By way of example and not limitation, consider the monotonic counter 435A and the monotonic counter 435B each storing one byte of eight bits. For this example, the monotonic counter 435A, in the south bridge 330, returns '00001111b', while the monotonic counter 435B, in the SMM ROM 550, returns '11110000b'. The value from the flat-encoded monotonic counter 435B is converted to standard binary as '00000100b'. The 16-bit monotonic value becomes '0000010000001111b' when the binary value from monotonic counter 435B is combined with the binary value from monotonic counter 435A.

A flat encoding may advantageously allow for increased reliability if the monotonic counter 435B is stored in flash memory. Updating the monotonic counter 435B has no cost, while erasing the flash memory does have a cost in long-term reliability. The monotonic counter 435B should be stored in non-volatile memory. Other memory types contemplated include encapsulated RAM with an included power supply.

One use of the monotonic counters 435A and 435B is as a source for a nonce. Each nonce must be different.
Differences may be predictable or unpredictable. Nonces may be used to help prevent replay attacks. When a message is encrypted, changing even one bit changes the encrypted message. Any strong encryption method distributes even a one-bit change extensively. A nonce may be used in a challenge-response method, such as described below.

Providing the monotonic counters 435A and 435B as two counters, instead of one, may advantageously allow for larger values while minimizing the number of bits stored in the non-volatile memory. Access to the monotonic counter 435A is typically faster than access to the monotonic counter 435B, so monotonic counter 435A may be used independently when a fast access time is important, so long as the length of the monotonic value stored in the monotonic counter 435A is adequate for the desired purpose.

FIGS. 9A and 9B illustrate block diagrams of embodiments of computer systems 800A and 800B that control the timing and duration of SMM, according to various aspects of the present invention. FIGS. 9A and 9B include a processor 805, a north bridge 810, memory 106, and the south bridge 330. The processor includes an SMM exit controller 806 and one or more SMM MSRs (machine specific registers) 807. The north bridge 810 includes a memory controller 815. The south bridge 330 includes the SMM timing controller 401 and the scratchpad RAM 440. The north bridge 810 is coupled between the processor 805 and the south bridge 330, to the processor 805 through a local bus 808 and to the south bridge 330 through the PCI bus 110. The north bridge 810 is coupled to receive the SMIACT# signal from the processor 805.

In the embodiment of FIG. 9A, the computer system 800A signals that the processor 805 is in SMM using standard processor signals (e.g. SMIACT# to the north bridge 810) and/or bus cycles on the local bus 808 and PCI bus 110. In the embodiment of FIG.
9B, the computer system 800B signals that the processor 805 is in SMM using standard processor signals (e.g. SMIACT#) to both the north bridge 810 and the south bridge 330. An exit SMM signal 404 is also shown between the SMM timing controller 401 and the SMM exit controller 806.

While the processor 805 is in SMM, the processor 805 knows that it is in SMM and asserts SMIACT# to the north bridge 810 and/or the south bridge 330. The processor 805 may, for example, set and read one or more hardware flags or signals associated with SMM. These hardware flags or signals may be in the processor 805 or in the north bridge 810. In one embodiment, the north bridge 810 receives the SMIACT# signal and, in response to receiving the SMIACT# signal, signals the south bridge 330 that the processor is in SMM by sending a special bus cycle or an encoded bus cycle over the PCI bus 110. In another embodiment, the SMIACT# signal is received directly by the south bridge 330.

In one embodiment, an SMM-specific hardware flag at an internal memory interface in the processor 805 is set when the processor 805 enters SMM. Any address call by the processor 805 is routed through the internal memory interface. The internal memory interface determines where the address call should be routed. If the SMM-specific hardware flag is set, then memory calls to SMM memory addresses are recognized as valid SMM memory calls. If the SMM-specific hardware flag is not set, then memory calls to SMM memory addresses are not recognized as valid SMM memory calls.

It is noted that other buses using other bus protocols may couple the processor 805, the north bridge 810, and the south bridge 330. These buses may use bus protocols that include a bus cycle that indicates that the computer system 800 is in SMM.
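The flag-gated routing at the internal memory interface described above can be sketched as a simple predicate. The SMM address window bounds below are placeholder values chosen for illustration, not the real memory map; only the gating logic follows the description.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical SMM memory window; the real range is system-specific. */
#define SMM_BASE 0x000A0000u
#define SMM_TOP  0x000BFFFFu

typedef struct {
    bool in_smm;   /* models the SMM-specific hardware flag */
} mem_iface_t;

/* Returns true if a memory call to the given address is recognized as
 * valid: non-SMM addresses route normally, while SMM addresses are
 * recognized only when the SMM-specific flag is set. */
bool smm_call_valid(const mem_iface_t *mi, uint32_t addr)
{
    bool is_smm_addr = (addr >= SMM_BASE && addr <= SMM_TOP);
    if (!is_smm_addr)
        return true;
    return mi->in_smm;
}
```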
It is also noted that processor signals other than SMIACT# may be directly received by the south bridge 330, such as the SMI# signal or another dedicated signal.

The SMM exit controller 806 in the processor 805 is configured to receive a request for the processor 805 to exit SMM. In one embodiment, the SMM exit controller 806 is operable to exit SMM prior to completing the task for which the SMI# was originally asserted that led to the processor 805 being in SMM. Upon receiving the request to exit SMM, the SMM exit controller 806 is configured to read the contents of the one or more SMM MSRs 807 to obtain a jump location for a clean-up routine, preferably stored in ROM, in SMM memory space. The SMM MSRs 807 may also store one or more bits to indicate that an SMM routine has been interrupted and/or a re-entry point (e.g. an address in SMM memory space) in the interrupted SMM routine. The SMM exit controller 806 may be configured to store the one or more bits indicating that the SMM routine has been interrupted and the re-entry point.

FIG. 10A illustrates a block diagram of a flowchart of one embodiment of a method for forcing the processor 805 out of SMM early, according to one aspect of the present invention. The method includes checking if the computer system is in SMM in decision block 905. If the computer system is not in SMM in decision block 905, then the method continues checking to determine if the computer system is in SMM in decision block 905. If the computer system is in SMM in decision block 905, then the method initiates the kick-out timer 407 in block 910.

The method next checks to determine if the kick-out timer 407 has expired in decision block 915. If the kick-out timer 407 has not expired, then the method continues checking to determine if the kick-out timer 407 has expired in decision block 915.
If the kick-out timer 407 has expired in decision block 915, then the method transmits a request to the processor to exit SMM without completing the SMI request that invoked SMM, in block 920. The processor saves the state of the SMM session without finishing the SMM session and exits SMM, in block 925.

The request to the processor to exit SMM, in block 920, may include submitting an RSM (Resume from System Management mode) instruction, or other control signal delivered over the system bus, to the processor. Upon executing the RSM instruction, or receiving the control signal through the interface logic to the system bus, the processor exits SMM and the processor's previous state is restored from system management memory. The processor then resumes any application that was interrupted by SMM. In another embodiment, the request to the processor to exit SMM includes another device in the computer system, such as the south bridge, asserting a control signal, such as the exit SMM signal, to the processor to exit SMM.

The processor saving the SMM state, in block 925, may include setting a bit to indicate that the SMM session was not finished. If the SMM code has multiple entry points, then the processor may also save an indication of which entry point should be used upon re-entering SMM, to finish the unfinished SMM session. These indications may be saved in any of a number of places, such as the one or more SMM MSRs 807 or the scratchpad RAM 440. It is also contemplated that another specific storage location could be designed into or associated with the processor 805, the north bridge 810, the interrupt controller 365, and/or the south bridge 330.

FIG. 10B illustrates a block diagram of an embodiment of a flowchart of a method for reinitiating SMM a preselected period of time after the early termination of SMM, according to one aspect of the present invention. It is noted that FIG. 10B may be a continuation of the method shown in FIG. 10A, or a stand-alone method.
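The early-exit flow of FIG. 10A and the resume check of FIG. 10B can be sketched together as a small state model. The struct fields and the tick/re-enter helpers are illustrative assumptions; they stand in for the kick-out timer 407, the unfinished-session bit, and the saved re-entry point described above, not for any actual hardware interface.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t kickout_remaining;  /* models the duration/kick-out timer 407 */
    bool     in_smm;
    bool     session_unfinished; /* saved bit, e.g. in an SMM MSR or scratchpad RAM */
    uint32_t reentry_point;      /* where to resume the interrupted routine */
} smm_session_t;

/* One timer tick while in SMM; on expiry, save the session state and
 * force an early exit (blocks 920/925 of FIG. 10A). */
void smm_tick(smm_session_t *s, uint32_t current_point)
{
    if (!s->in_smm || s->kickout_remaining == 0)
        return;
    if (--s->kickout_remaining == 0) {
        s->session_unfinished = true;
        s->reentry_point      = current_point;
        s->in_smm             = false;
    }
}

/* Re-entry (FIG. 10B): resume an unfinished session if one was saved
 * (blocks 1040/1045), otherwise start a new session (block 1035).
 * Returns the entry point to use. */
uint32_t smm_reenter(smm_session_t *s, uint32_t fresh_entry, uint32_t duration)
{
    s->in_smm            = true;
    s->kickout_remaining = duration;
    if (s->session_unfinished) {
        s->session_unfinished = false;
        return s->reentry_point;
    }
    return fresh_entry;
}
```

The restart timer 408 is not modeled here; in this sketch, the caller decides when to invoke smm_reenter after the early exit.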
The method of FIG. 10B includes initiating the restart timer 408, in block 1010. The method checks if the restart timer 408 has expired, in decision block 1015. If the restart timer 408 has not expired, then the method continues checking to determine if the restart timer 408 has expired, in decision block 1015. If the restart timer 408 has expired in decision block 1015, then the method asserts an SMI request to the processor, in block 1020. The processor enters SMM and looks for an entry indicating that a previous SMM session was ended prior to fulfilling the previous SMM request, in block 1025. The entry may be, as examples, a flag bit that has been set, or a stored jump location in a default location. The method checks for an unfinished SMM session in decision block 1030. If there is no unfinished SMM session in decision block 1030, then the method starts a new SMM session, in block 1035. If there is an unfinished SMM session in decision block 1030, then the method reads the saved status information about the previous SMM session, in block 1040, and continues the previous SMM session, in block 1045. It is noted that the method may make use of the saved status information, from block 1040, when continuing the previous SMM session, in block 1045.

FIGS. 11A and 11B illustrate flowcharts of embodiments of methods 1100A and 1100B for updating the monotonic counter 435B, which may be stored in the SMM ROM 550, according to various aspects of the present invention. The method 1100A, shown in FIG. 11A, includes checking the RTC checksum, in block 1105. In decision block 1110, if the RTC checksum is valid, then the method 1100A exits. In decision block 1110, if the RTC checksum is not valid, then the method 1100A inspects the monotonic counter 435B in the SMM ROM 550, in block 1115. In decision block 1120A, the method checks if the value stored in the monotonic counter 435B in the SMM ROM 550 is the default (e.g.
reset or rollover) value.

In decision block 1120A, if the value stored in the monotonic counter 435B in the SMM ROM 550 is the default value, then the method 1100A updates the value stored in the monotonic counter 435B to an incremental value, in block 1130A, preferably the smallest possible incremental value. In decision block 1120A, if the value stored in the monotonic counter 435B in the SMM ROM 550 is not equal to the default value, then the method 1100A identifies the value stored in the monotonic counter 435B in the SMM ROM 550, in block 1125A. After identifying the value stored, in block 1125A, the method 1100A updates the value stored in the monotonic counter 435B in the SMM ROM 550 by the incremental value, in block 1135A.

The method 1100B, shown in FIG. 11B, includes checking the RTC checksum, in block 1105. In decision block 1110, if the RTC checksum is valid, then the method 1100B exits. In decision block 1110, if the RTC checksum is not valid, then the method 1100B inspects the monotonic counter 435B in the SMM ROM 550, in block 1115. In decision block 1120B, the method checks if the values stored in the monotonic counter 435B in the SMM ROM 550 are all ones.

In decision block 1120B, if all values in the monotonic counter 435B in the SMM ROM 550 are equal to one (i.e. the reset value), then the method 1100B updates the first byte so that a zero is stored as the least significant bit, in block 1130B. In decision block 1120B, if all values in the monotonic counter 435B in the SMM ROM 550 are not equal to one, then the method 1100B identifies the highest numbered byte with a zero in its most significant bit location, in block 1125B, or the first byte if no byte has a zero in the most significant bit position.
After identifying a highest numbered byte with a zero in its most significant bit location, or the first byte, in block 1125B, the method 1100B updates the next highest numbered byte, or the first byte, with a zero in its next most significant bit location without a zero, in block 1135B.

FIGS. 12A and 12B illustrate flowcharts of embodiments of methods 1200A and 1200B for updating the monotonic counter 435A in the south bridge 330, according to various aspects of the present invention. The method 1200A checks to see if the value stored in the monotonic counter 435A in the south bridge 330 is the maximum value that can be stored, in decision block 1205A. If the value stored in the monotonic counter 435A in the south bridge 330 is not the maximum value, in decision block 1205A, then the method 1200A exits. If the value stored in the monotonic counter 435A in the south bridge 330 is the maximum value that can be stored, in decision block 1205A, then the method 1200A inspects the monotonic counter 435B in the SMM ROM 550, in block 1210. The method 1200A checks to see if the value stored in the monotonic counter 435B in the SMM ROM 550 is the default (or reset) value, in decision block 1215A.

If, in decision block 1215A, the value stored in the monotonic counter 435B in the SMM ROM 550 is the default value, then the method 1200A updates the value stored in the monotonic counter 435B in the SMM ROM 550 with an incremental value, in block 1225A, preferably the smallest possible incremental value. If, in decision block 1215A, the value stored in the monotonic counter 435B in the SMM ROM 550 is not the default value, then the method 1200A identifies the value stored in the monotonic counter 435B in the SMM ROM 550, in block 1220A. After the method 1200A identifies the value stored, in block 1220A, the method 1200A updates the value stored in the monotonic counter 435B in the SMM ROM 550 by the incremental value, in block 1230A.

The method 1200B, shown in FIG.
12B, checks to see if all values in the monotonic counter 435A in the south bridge 330 are equal to one (i.e. the reset value), in decision block 1205B. If all values in the monotonic counter 435A in the south bridge 330 are not equal to one, in decision block 1205B, then the method 1200B exits. If all values in the monotonic counter 435A in the south bridge 330 are equal to one, in decision block 1205B, then the method 1200B inspects the monotonic counter 435B in the SMM ROM 550, in block 1210. The method 1200B checks to see if all values in the monotonic counter 435B in the SMM ROM 550 are equal to one, in decision block 1215B.

If, in decision block 1215B, all values in the monotonic counter 435B in the SMM ROM 550 are equal to one, then the method 1200B updates the first byte with a zero in its least significant bit, in block 1225B. If, in decision block 1215B, all values in the monotonic counter 435B in SMM ROM 550 are not equal to one, then the method 1200B identifies the highest numbered byte with a zero in its most significant bit location, in block 1220B, or the first byte if no byte has a zero in the most significant bit location. After the method 1200B identifies the highest numbered byte with a zero in its most significant bit location, or the first byte, in block 1220B, the method 1200B updates the next highest numbered byte, or the first byte, with a zero in the next most significant bit location, in block 1230B.

FIGS. 13A and 13B illustrate flowcharts of embodiments of methods 1300A and 1300B for providing a value from a monotonic counter 435 in the computer system, according to various aspects of the present invention. The method 1300A receives a request for a value from the monotonic counter 435, in block 1305. The method 1300A requests the value from the monotonic counter 435A in the south bridge 330, in block 1310. The method 1300A updates the value in the monotonic counter 435A in the south bridge 330, in block 1315.
The method 1300A checks the updated value from the monotonic counter 435A in the south bridge 330 for a rollover value, in block 1320.

In decision block 1325, if the rollover value has been reached, then the method 1300A updates the value in the monotonic counter 435B in the SMM ROM 550, in block 1330. If the rollover value has not been reached in decision block 1325, or if the method 1300A has updated the value in the monotonic counter 435B in the SMM ROM 550 in block 1330, then the method 1300A provides the updated value from the monotonic counter 435A in the south bridge 330, in block 1335.

The method 1300B requests the value from the monotonic counter 435B in the SMM ROM 550, in block 1340. The method 1300B receives the value from the monotonic counter 435B in the SMM ROM 550, in block 1345. The value from the monotonic counter 435A in the south bridge 330 is combined with the value from the monotonic counter 435B in the SMM ROM 550, in block 1350. The method 1300B provides the combined value in response to the request for the value from the monotonic counter, in block 1355.

As noted above, the monotonic counter 435A in the south bridge 330 may include a 32-bit value, while the monotonic counter 435B in the SMM ROM 550 may include a 13-bit value. The returned value from the monotonic counter 435, provided in response to the request for the value of the monotonic counter, would then include a 45-bit value.

It is noted that the 32-bit value from the monotonic counter 435A in the south bridge 330 may be provided by software instead of being read from the south bridge 330. In the software embodiment, the software itself provides a 32-bit, always increasing, i.e. monotonic, value, which is combined with the value from the monotonic counter 435B in the SMM ROM 550 to provide a unique 45-bit value.
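The two-counter scheme described above can be sketched in Python. This is an illustrative model, not the patent's implementation: a plain 32-bit integer stands in for the counter 435A in the south bridge 330, and a small bytearray of all-ones bytes stands in for the counter 435B in the SMM ROM 550, whose value advances by clearing one more bit per increment (ROM-style storage can flip bits from one to zero without an erase). The two-byte ROM width and all names are assumptions for illustration.

```python
def rom_count(rom: bytearray) -> int:
    # The ROM counter's value is simply how many bits have been cleared.
    return sum(8 - bin(b).count("1") for b in rom)

def rom_increment(rom: bytearray) -> None:
    # Per FIGS. 11B/12B: clear the next bit of the lowest unfinished byte,
    # working from least significant toward most significant bit.
    for i, b in enumerate(rom):
        if b:
            rom[i] = b & (b - 1)  # clears the lowest set bit
            return
    raise OverflowError("ROM extension counter exhausted")

class MonotonicCounter:
    # Sketch of FIGS. 13A/13B: serve each request from the 32-bit counter,
    # bump the ROM counter on rollover, and return the concatenation.
    def __init__(self, rom_bytes: int = 2):
        self.low = 0
        self.rom = bytearray([0xFF] * rom_bytes)

    def value(self) -> int:
        self.low = (self.low + 1) & 0xFFFFFFFF
        if self.low == 0:            # rollover of the 32-bit counter
            rom_increment(self.rom)
        return (rom_count(self.rom) << 32) | self.low
```

Combining the two widths this way yields a larger always-increasing value while confining writes to the slow ROM part to one write per 32-bit rollover.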
It is also noted that the size of the monotonic counters 435A and 435B in the south bridge 330 and in the SMM ROM 550, respectively, may be designed with other bit sizes, as desired.

Although the methods 1100A, 1100B, 1200A, and 1200B show updates to the monotonic counters 435A and 435B as being in-line with monotonic value requests, it is also contemplated that software or hardware may be used to update the monotonic counters 435A and 435B separately from the monotonic value requests. Such updates could occur, for example, after the monotonic value request that leads to the monotonic value reaching the rollover value.

FIGS. 14A and 14B illustrate block diagrams of embodiments of processors 805A and 805B, including random number generators 455A and 455B using entropy registers 1410, according to one aspect of the present invention. The RNG 455 in FIG. 6 may also use an entropy register 1410, similar to what is shown here. FIG. 14A shows an embodiment of a processor 805A, which includes a plurality of performance registers 1405A-1405N coupled through a plurality of bit lines 1406 to a random number generator 455A. FIG. 14B shows another embodiment of a processor 805B, which includes the plurality of performance registers 1405A-1405N coupled through a plurality of bit lines 1406 to a random number generator 455B.

Common to both FIGS. 14A and 14B, the performance registers 1405A through 1405N each store a value indicative of a different performance metric. Exemplary performance metrics may include first-level-cache hit rate, second-level-cache hit rate, third-level-cache hit rate, branch target cache, and/or other model specific registers (MSRs), such as those used for measuring performance.
In one embodiment, the performance registers include any register that updates its least significant bit at a rate asynchronous to the local and/or system clock.

In one embodiment, each of the plurality of bit lines 1406 couples between the least significant bit entry in one of the performance registers 1405 and an entry in an entropy register 1410 in the RNG 455. Each entry of the entropy register 1410 may couple to a different one of the performance registers 1405. In another embodiment, each entry of the entropy register 1410 may couple to one or more entries in one or more of the performance registers 1405 or other sources of single bits within the processor 805.

FIG. 14A includes the RNG 455A, which also includes an entropy control unit 1415 coupled to receive a request over a request line (REQ) from the processor 805A for a random number over output lines (RN). The entropy control unit 1415 is configured to assert a control signal (C) to the entropy register 1410 and read out the value in the entropy register 1410 over the data lines (D). The entropy control unit 1415 is further configured to provide the random number from the entropy register 1410 over the output lines (RN) in response to the request line (REQ) being asserted by the processor 805A.

FIG. 14B includes the RNG 455B, which includes the entropy register 1410. The entropy register 1410 of FIG. 14B may be read by the processor 805B. The entropy register 1410 latches the values received over the plurality of bit lines 1406 upon receiving a clocking signal (CLK). The random number from the entropy register 1410 may then be read out over the output lines (RN) by the processor 805B.

It is noted that the RNG 455A and the RNG 455B may be included in devices in the computer system other than the processor 805. Contemplated locations for the RNG 455A and the RNG 455B include the north bridge 810 and the south bridge 330.
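The entropy-gathering idea can be illustrated with a short sketch, in which plain integers stand in for the performance registers 1405 and the function plays the role of the entropy register 1410 latching on a clock edge (the names are illustrative, not from the patent):

```python
def latch_entropy(perf_registers):
    # On each "clock", take the least significant bit of every performance
    # register and pack the bits into one word, one bit per register.
    value = 0
    for i, reg in enumerate(perf_registers):
        value |= (reg & 1) << i
    return value

# Example: four registers whose low bits are 1, 0, 1, 0 yield 0b0101.
```

Because metrics such as cache hit counts update at rates asynchronous to the sampling clock, their least significant bits are difficult to predict at latch time, which is what makes them a useful entropy source.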
It is also noted that the performance registers 1405 are not normally accessible to a user of the processor 805 once the processor 805 is in a computer system, as the performance registers 1405 are primarily used for testing during the design and engineering stages of the manufacturing process. This may advantageously allow for better randomness, with less likelihood of tampering with the random number obtained from the entropy register 1410.

FIG. 15 illustrates a block diagram of another embodiment of a random number generator 455C, according to one aspect of the present invention. The RNG 455C includes a plurality of ring oscillators (RO0-RO7) 1514A-1514H, a linear feedback shift register (LFSR) 1515, a digital to analog converter (D/A) 1520, a voltage controlled oscillator (VCO) 1525, a sample and hold circuit 1530, a cyclic redundancy code generator (CRC) 1535, a self test circuit 1511, a multiplexer (MUX) 1545, and a counter 1540.

The CLK signal 1505 is received within the RNG 455C by the LFSR 1515, the sample and hold circuit 1530, the CRC 1535, and the counter 1540. Either a system reset signal (SYSTEM_RESET) 1507 or a read strobe (READ_STROBE) is received by the counter 1540 at the reset (RST) input port. The LFSR 1515 receives the output signals of each of the ring oscillators (RO0-RO7) 1514A-1514H at one input port (RO[7:0]) and the output signal of the sample and hold circuit 1530 at another input (IN) terminal. A plurality of values is provided by the LFSR 1515 at the output (OUT) terminal. As shown, one of the plurality of values delivered by the LFSR 1515 is XORed with the CLK signal 1505 before all of the plurality of values provided by the LFSR 1515 are delivered to the D/A 1520. The analog output signal of the D/A 1520 is provided as a control signal to the VCO 1525.

The output of the VCO 1525 is provided to the input (IN) terminal of the sample and hold circuit 1530 and clocked on the CLK signal 1505.
The output (OUT) signal of the sample and hold circuit 1530 is provided to the input terminal of the CRC 1535 and clocked on the CLK signal 1505, as well as to the IN terminal of the LFSR 1515, as described above. A plurality of output values is provided to the MUX 1545 through the CRC output port (OUT). The MUX 1545 selects between the output values of the CRC 1535 and ground (GND). The MUX 1545 provides the random number over the output lines (RN[31:0]).

A request for a random number over the read strobe line (READ_STROBE) results in the counter 1540 counting a prerequisite number of clock cycles prior to asserting a signal (FULL) to the selection input (SEL) of the MUX 1545. The FULL signal may also be read by the requestor of the random number as a signal (DONE) that the requested random number is available over the RN[31:0] lines. The system reset signal 1507 also asserts a signal on the reset input terminal of the counter 1540. The self test circuit 1511 may be present to provide a known value to the MUX 1545 to be read on the RN[31:0] lines in place of a random number generated by the RNG 455C.

The RNG 455C is preferably configured to meet all appropriate requirements for an RNG in Federal Information Processing Standards Publication FIPS-140-1, entitled SECURITY REQUIREMENTS FOR CRYPTOGRAPHIC MODULES, issued on Jan. 11, 1994, by the U.S. National Institute of Standards and Technology (NIST), which is hereby incorporated by reference.
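As one concrete illustration of the LFSR building block used above, the following sketch steps a 16-bit Galois-form LFSR. The patent does not give the width or feedback polynomial of the LFSR 1515; the tap mask below (a standard maximal-length 16-bit polynomial) is an assumption for illustration only.

```python
def lfsr_step(state: int, taps: int = 0xB400) -> int:
    # One Galois-form step: shift right, and XOR in the tap mask
    # whenever the bit shifted out was a one.
    out_bit = state & 1
    state >>= 1
    if out_bit:
        state ^= taps
    return state
```

Starting from any nonzero seed, repeated steps with a maximal-length polynomial walk through all 65535 nonzero 16-bit states before repeating, which is one way to realize the "plurality of values" the LFSR output port provides.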
The Federal Information Processing Standards Publication Series of the NIST is the official series of publications relating to standards and guidelines adopted and promulgated under the provisions of Section 111(d) of the Federal Property and Administrative Services Act of 1949, as amended by the Computer Security Act of 1987, Public Law 100-235.

It is noted that, for increased randomness, the ring oscillators 1514A-1514H may be operated at frequencies and phases that do not correlate between or among the plurality of ring oscillators 1514. It is also noted that the RNG 455C may be included in locations other than the south bridge 330. Contemplated locations include the processor 805 and the north bridge 810.

FIGS. 16A-16G illustrate flowcharts of embodiments of methods 1600A-1600G that attempt to access the security hardware 370, which may be locked, according to various aspects of the present invention. FIG. 16A shows a method 1600A of locking the security hardware 370 as a part of the boot (or cold reboot) process. FIG. 16B shows a method 1600B of unlocking and later locking the security hardware 370 as a part of a reboot (or warm boot) process. FIG. 16C shows a method 1600C of checking for rights to lock or unlock the security hardware 370 and checking a bit to disable changing the rights. FIG. 16D shows a method 1600D of attempting to use the security hardware 370 while the computer system 100 is not in SMM. FIG. 16E shows a method 1600E of checking and/or setting the lock on the OAR access locks 460 and checking the bit to disable changing the lock. FIG. 16F shows a method 1600F of unlocking and later locking the security hardware 370 while the computer system 100 is in SMM. FIG. 16G shows a method 1600G of checking for rights to unlock and later lock the security hardware 370 while the computer system 100 is in SMM.

Referring now to FIG.
16A, the method 1600A includes the processor executing the BIOS code instructions from SMM space in the RAM memory, in block 1620. The BIOS code, executed by the processor, performs a power-on self test (POST), in block 1625. The method 1600A includes accessing the security hardware 370, in block 1630. The accesses to the security hardware 370 may initiate an unlocking of the security hardware 370, if the security hardware 370 is not open-at-reset. The accesses to the security hardware 370 may be by the BIOS code or another device or subsystem in the computer system 100, or from outside the computer system 100, if allowed. The method 1600A may optionally include entering a BIOS management mode, in block 1632. The BIOS management mode could allow for, for example, remote booting instructions, remote or secure permission to continue the boot sequence, other remote operations or remote hardware accesses or set-ups, or choosing between or among boot choices, such as hardware configurations and/or operating systems or other software choices.

The BIOS code next looks for additional BIOS code, such as from a video controller, IDE controller, SCSI controller, etc., and displays a start-up information screen, in block 1635. As examples, the video controller BIOS is often found at C000h, while the IDE controller BIOS code is often found at C800h. The BIOS code may perform additional system tests, such as a RAM memory count-up test, and a system inventory, including identifying COM (serial) and LPT (parallel) ports, in block 1640. The BIOS code also identifies plug-and-play devices and other similar devices and then displays a summary screen of the devices identified, in block 1645.

The method includes closing the access locks to the security hardware, in block 1650. The BIOS code or another device or agent in the computer system 100 may close the access locks. The BIOS code identifies the boot location, and the corresponding boot sector, in block 1655.
The boot location may be on a floppy drive, a hard drive, a CD-ROM, a remote location, etc. The BIOS code next calls the boot sector code at the boot location to boot the computer system, such as with an operating system, in block 1660.

Referring now to FIG. 16B, the method 1600B includes opening the access locks to the security hardware, in block 1615. The processor executes the BIOS code instructions from SMM space in the RAM memory, in block 1620. The computer system accesses the security hardware 370 while in SMM, while booting, in block 1630. The method 1600B may optionally include entering a BIOS management mode, in block 1632.

The BIOS code next looks for additional BIOS code, such as from a video controller, IDE controller, SCSI controller, etc., and displays a start-up information screen, in block 1635. As examples, the video controller BIOS is often found at C000h, while the IDE controller BIOS code is often found at C800h. The BIOS code also identifies plug-and-play devices and other similar devices and then displays a summary screen of the devices identified, in block 1645.

The BIOS code closes the access locks to the security hardware, in block 1650. The BIOS code identifies the boot location, and the corresponding boot sector, in block 1655. The boot location may be on a floppy drive, a hard drive, a CD-ROM, a remote location, etc. The BIOS code next calls the boot sector code at the boot location to boot the computer system, such as with an operating system, in block 1660.

Turning now to FIG. 16C, the method 1600C includes deciding whether to set the OAR-lock, in decision block 1646. The OAR-lock in decision block 1646 may correspond to the first indicator described above with respect to FIG. 6. The OAR-lock in decision block 1646 may also correspond to setting the OAR-lock override bit 1750 described below with respect to FIG. 17D.
If the decision is made to set the OAR-lock, then, according to one embodiment, all access to the security hardware 370 is blocked, in block 1647. If the decision is made not to set the OAR-lock, then the method 1600C moves to decision block 1648. In decision block 1648, the method 1600C decides whether to set the OAR-lock change bit. The OAR-lock change bit in decision block 1648 may correspond to the second indicator described above with respect to FIG. 6. The OAR-lock change bit in decision block 1648 may also correspond to setting the change OAR-lock override bit 1755 described below with respect to FIG. 17D. If the decision is made to set the OAR-lock change bit, in decision block 1648, then, according to one embodiment, the OAR-lock cannot be changed thereafter, as changes to the OAR-lock are themselves locked out, in block 1649.

Turning now to FIG. 16D, the method 1600D includes a processor, such as the processors 102, 805, etc., operating in a mode that is not SMM, in block 1604. In block 1606, code being processed by the processor attempts to access any part of the security hardware 370, or other hardware whose access may require a check of an access lock similar to the access locks 460. The method checks, at decision block 1607, to see if the security hardware 370 is available. If the security hardware 370 is not available, at decision block 1607, then the method 1600D exits or returns. If the security hardware 370 is available, at decision block 1607, then the method 1600D accesses the security hardware 370, at block 1630. The method optionally closes the access locks to the security hardware, if necessary, at block 1650.

Turning now to FIG. 16E, the method 1600E includes an embodiment of decision block 1607 from FIG. 16D. The method 1600E includes checking if access to all security hardware is locked out, i.e. forbidden, at decision block 1690.
If access to all security hardware is locked out, then at decision block 1690 the method 1600E moves to decision block 1692. If access to all security hardware is not locked out, then the method 1600E moves to decision block 1691. In decision block 1691, the method 1600E checks if the requested security hardware is locked out (e.g. separately, using one or more access locks).

If the requested security hardware is locked out, then the method 1600E moves to decision block 1692. If the requested security hardware is not locked out, then the method 1600E moves directly to block 1693. In decision block 1692, the method 1600E checks if the access lock for the requested security hardware can be changed, e.g., unlocked. If the access lock for the requested security hardware cannot be changed, then in decision block 1692 the method 1600E aborts the access to the security hardware. If the access lock for the requested security hardware can be changed, then in decision block 1692 the method 1600E requests authorization, such as from a user, to change the access lock for the requested security hardware, in block 1693. If the authorization to change the access lock for the requested security hardware is not given, then the method 1600E aborts the access to the security hardware. If the authorization to change the access lock for the requested security hardware is given, then the method 1600E moves to block 1694 and changes the lock to allow access to the requested security hardware.

It is noted that any authorization method described herein may be used in block 1693. Any other authorization methods known in the art that have equivalent or better security properties in the presence of an observer may also be used.

Turning now to FIG. 16F, the method 1600F includes the processor loading code instructions into SMM space in the RAM memory, in block 1605. For example, loading code instructions into SMM space may occur in response to an SMI#.
The access locks to the security hardware are opened in block 1615. The opening of the access locks may be through the SMM code instructions or through a hardware mechanism, or both.

The processor processes the code instructions from SMM space in the RAM memory, in block 1620. It is noted that the SMM timing controller 401, described above, may interrupt the processing of the code instructions. The method 1600F includes accessing the security hardware 370, in block 1630. As the computer system is in SMM and the access locks have been opened, in block 1615, the security hardware is available to most or all of the subsystems of the computer system 100 (or 800), as desired.

The method 1600F includes closing the access locks to the security hardware 370, in block 1650. The processor reloads the previous state and continues operating, in block 1665. It is noted that the processing of the SMM code instructions, in block 1620, may continue while the actions described in block 1630 occur. Preferably, the actions described in block 1650 occur after the processing of the SMM code instructions, in block 1620, has ceased. The processing may have finished or have been interrupted.

Turning now to FIG. 16G, the method 1600G includes the processor loading code instructions into SMM space in the RAM memory, in block 1605. For example, the loading of code instructions into SMM space may occur in response to an SMI#. The method 1600G next checks if the security hardware is available, in decision block 1607. If the security hardware is not available, then in decision block 1607 the method 1600G aborts the access to the security hardware. If the security hardware is available, then the method 1600G continues with block 1620.

The processor executes the code instructions from SMM space in the RAM memory, in block 1620. It is noted that the SMM timing controller 401, described above, may interrupt the processing of the code instructions.
The method 1600G includes accessing the security hardware 370, in block 1630. As the computer system is in SMM and the access locks are open, as determined in decision block 1607, the security hardware is available to most or all of the subsystems of the computer system 100 (or 800), as desired.

The method 1600G includes closing the access locks to the security hardware 370, in block 1650. The processor reloads the previous state and continues operating, in block 1665. It is noted that the executing of the SMM code instructions, in block 1620, may continue while the actions described in block 1630 occur. Preferably, the actions described in block 1650 occur after the processing of the SMM code instructions, in block 1620, has ceased. The processing may have finished or have been interrupted.

It is noted that other processes of locking and unlocking the security hardware 370, other than the access locks, may be used. The methods 1600A-1600G are intended to extend to those other processes.

For the purposes of this disclosure, the computer system is considered to have two operating modes, normal and SMM. There are boot phases that are not in SMM, but they are, by definition, as trusted as SMM, and are therefore considered equivalent to SMM herein. The boot code configures and arranges how SMM will work. SMM derives its trustworthiness from the trustworthiness of the boot code. It is contemplated that the standard boot sequence could be varied. Variations include a transition to a setup environment where the user may have the opportunity to input parameters. The input parameters may, for example, modify the BIOS code. Most setup environments return to reset before loading the operating system and operating in normal mode. This is a form of maintenance mode that is an alternative to loading the operating system and is not part of the normal mode. As contemplated, the access locks would not be set in this mode.
It would be part of the boot process and as trusted as SMM, although security measures could be used if remote accesses are possible inside the setup environment.

FIGS. 17A, 17B, and 17C illustrate block diagrams of embodiments 460A, 460B, and 460C of the access locks 460 shown in FIG. 6. In FIG. 17D, a block diagram of an embodiment of the OAR override 445, from FIG. 6, is shown. In the embodiment 460A shown in FIG. 17A, the one or more access locks 460 include a sequester bit register 1705. The bit stored in the sequester bit register 1705 may be set or cleared as a flag. In the embodiment 460B shown in FIG. 17B, the one or more access locks 460 include two or more sequester registers configured to store two or more sequestering bits to lock or unlock all of the devices within the security hardware 370. The additional bits beyond the sequester bit stored in the sequester register 1705 allow for locking and unlocking privileges separately. For example, a write privilege could be locked, while a read privilege could be unlocked. In the embodiment of FIG. 17C, the one or more access locks 460 include one or more sequester registers 1715A-1715N for each device within the security hardware 370C.

In FIG. 17D, the OAR override 445 includes an OAR-lock override register 1750 that stores at least one OAR-lock override bit, and a change OAR-lock override register 1755 that stores at least one change OAR-lock override bit. According to one embodiment of the present invention, if the OAR-lock override bit is not set, then access to the security hardware 370 is determined by the settings of the access locks 460. If the OAR-lock override bit is set, then the access locks 460 are ignored in favor of the security hardware 370 being either always available or never available, based on the implementation. Preferably, the security hardware is never available when the OAR-lock override bit is set.
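The interaction just described between the access locks 460 and the OAR-lock override bit can be summarized as a small decision function. This is a sketch of the preferred "never available" variant only; the names are illustrative, not from the patent.

```python
def security_hardware_available(access_lock_open: bool,
                                oar_override_set: bool) -> bool:
    # When the override bit is set, the access locks 460 are ignored and,
    # in the preferred embodiment, the hardware is never available.
    if oar_override_set:
        return False
    # Otherwise the per-device access lock decides.
    return access_lock_open
```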
The setting of the OAR-lock override bit may be changed in SMM (or with authorization) unless the change OAR-lock override bit is set. Preferably, the change OAR-lock override bit is open-at-reset, similar to one embodiment of the access locks 460, and may be set, in various embodiments, with the access locks 460 at boot time, such as in block 1650.

FIG. 18A illustrates a prior art flowchart of an SMM program 1800A. The prior art SMM program 1800A starts at 1805, includes one or more instructions for execution in SMM, in block 1810A, and ends at 1895 without interruption. In other words, the prior art SMM program 1800A is uninterruptible and has no entry points other than the start at 1805. There are also no reasonable exit points, barring processor failure, other than the stop at 1895.

FIG. 18B illustrates a flowchart of an embodiment of operations of an interruptible and re-enterable SMM program 1800B, according to one aspect of the present invention. In contrast to the prior art SMM program 1800A, the interruptible and re-enterable SMM program 1800B includes a start at 1805, one or more instructions for execution in SMM, in block 1810B, an entry/exit point 1815, one or more instructions for execution in SMM, in block 1820, and the stop at 1895.

Also in contrast to the prior art SMM program 1800A, FIG. 18C illustrates an embodiment of operation of a computer system running the interruptible and re-enterable SMM program 1800B, according to one aspect of the present invention. The operations 1800C of the computer system include a start 1805. The operations also include receiving a request to enter SMM, at 1810, and saving the system state, at 1815. The method checks, at 1820, for a saved SMM state, as could be found from exiting the SMM program 1800B at 1875. If no saved SMM state is found at 1820, then the method loads the requested default SMM state, at 1825.
If a saved SMM state is found at 1820, then the method loads the saved SMM state, at 1830.

The method 1800C executes the loaded SMM state, at 1835, either the default state from 1825 or the saved state from 1830. If the SMM processing is finished, at 1840, then the method moves to 1855 and exits SMM. If the SMM processing is not finished, then the method continues execution of the SMM state, if no exit request is received at 1845. If an exit request is received at 1845, then the method saves the current SMM state, at 1850, and exits SMM, at 1855. The saved system state is reloaded at 1860, and the method ends at the stop 1895.

While only one entry/exit point 1815 is shown in the embodiment of FIG. 18B, other embodiments may include two or more entry/exit points 1815 in an SMM program 1800B or in the operations of the method 1800C shown in FIG. 18C. In these embodiments, each entry/exit point 1815 would have one or more instructions for execution in SMM, similar to blocks 1810B and 1820, both before and after the entry/exit point 1815.

For example, in one embodiment, block 1810B includes one instruction for execution in SMM, followed by an entry/exit point 1815A. Entry/exit point 1815A is followed by another single instruction for execution in SMM, in block 1820A. Block 1820A is followed by another entry/exit point 1815B. Entry/exit point 1815B is followed by another single instruction for execution in SMM, in block 1820B. Block 1820B is followed by the stop 1895. While a single instruction in each of blocks 1810B, 1820A, and 1820B may be small, the concept of regularly spaced entry/exit points 1815 is illustrated. In other embodiments, two, three, or more instructions for execution in SMM may be substituted for the single instructions. In still other embodiments, there may be a variable number of instructions for execution in SMM in blocks 1810B and 1820.
The number of instructions may depend on the execution times for each set of instructions, so that SMM may be interruptible periodically during execution.

It is noted that forced exits from SMM, as taught herein in one aspect of the present invention, for example, with respect to FIG. 10A, and re-entry into SMM, as also taught herein in another aspect of the present invention, for example, with respect to FIG. 10B, are but two examples of how interruptible, re-enterable SMM code could be implemented or used. Those of skill in the art of computer programming, with full appreciation of this disclosure, will appreciate that many programming techniques used with interruptible, re-enterable non-SMM code will now be available in SMM code.

FIGS. 19A, 19B, and 19C illustrate block diagrams of embodiments 3000A, 3000B, and 3000C of computer systems with the BIOS ROM 355 accessible to the processor 805 at boot time and to the south bridge 330 at other times. Common to all three figures are a processor 805, a south bridge 330, control logic 3010, a boot switch 3005, a crypto-processor 305, and a BIOS ROM 355. The processor 805 is coupled to the south bridge 330 in the usual fashion at times other than boot time. At boot time, the control logic 3010 is operable to change the boot switch 3005 such that the processor 805 has access to the BIOS 355 without going through the south bridge 330 in the usual fashion.

In FIG. 19A, embodiment 3000A shows the processor 805 coupled to one part (A) of the boot switch 3005. Part A of the boot switch 3005 is open, as would occur after booting. The control logic 3010 is coupled to the boot switch 3005 to control the operations of the boot switch 3005. The south bridge 330 is coupled to Part B of the boot switch 3005. Part B of the boot switch 3005 is closed, again as would occur after booting. The south bridge 330 is shown coupled to the bus to which the BIOS is coupled, shown as being through the crypto-processor 305.
Other hardware 3015A and 3015B are also shown coupled to the bus, which may be an LPC bus 118 or another bus.

In FIG. 19B, embodiment 3000B shows the processor 805 coupled to one part (A) of the boot switch 3005 through an instance of LPC bus interface logic (BIL) 134D. Part A of the boot switch 3005 is closed, as would occur during booting. The processor 805 is coupled to a north bridge 810 through a local bus 808. The north bridge 810 includes the control logic 3010, coupled to the boot switch 3005 to control the operations of the boot switch 3005. The north bridge 810 is further coupled to the south bridge 330 through a PCI bus 110. The south bridge 330 is coupled to Part B of the boot switch 3005 through another instance of LPC BIL 134D. Part B of the boot switch 3005 is open, again as would occur during booting. The south bridge 330 is shown coupled to an LPC bus to which the BIOS 355 is coupled, shown as being through the crypto-processor 305. Other hardware 3015A and 3015B are not shown in this embodiment, but may be present. The connection between Part A of the boot switch 3005 and Part B of the boot switch 3005 is shown as an LPC bus segment 3018.

As illustrated, during the booting process, the processor 805 is operable to use an LPC bus protocol to access the BIOS 355 directly, without going through the north bridge 810 or the south bridge 330. By providing a more direct connection between the processor 805 and the BIOS ROM 355, the computer system 3000B may advantageously boot or reboot faster than with more usual methods of accessing the BIOS ROM 355. After booting, accesses to the BIOS ROM 355 are through the south bridge 330 using the LPC bus 118.

In FIG. 19C, embodiment 3000C shows the processor 805 coupled to one part (A) of the boot switch 3005 through the local bus 808. Part A of the boot switch 3005 is closed, as would occur during booting. The processor 805 is also coupled to the north bridge 810 through the local bus 808.
The processor 805 includes the control logic 3010, coupled to the boot switch 3005 to control the operations of the boot switch 3005. The north bridge 810 is further coupled to the south bridge 330 through a PCI bus 110. The south bridge 330 is coupled to the LPC bus 118 through an instance of LPC BIL 134D. Part B of the boot switch 3005 is coupled to the LPC bus 118. Part B of the boot switch 3005 is open, again as would occur during booting. The BIOS ROM 355 is coupled through the crypto-processor 305 to the local bus 808 when Part A of the boot switch 3005 is closed and to the LPC bus 118 when Part B of the boot switch 3005 is closed. The crypto-processor 305 may include bus interface logic for the local bus 808 and the LPC bus 118, or the crypto-processor 305 may be configured to translate the bus protocols as necessary to pass bus cycles to the BIOS ROM 355. Other hardware 3015A and 3015B are not shown in this embodiment, but may be present.

As illustrated, during the booting process, the processor 805 is operable to use the local bus protocol to access the BIOS 355 directly, without going through the north bridge 810 or the south bridge 330. By providing a more direct connection between the processor 805 and the BIOS ROM 355, the computer system 3000C may advantageously boot or reboot faster than using more usual methods of accessing the BIOS ROM 355. After booting, accesses to the BIOS ROM 355 are through the south bridge 330 using the LPC bus 118.

It is noted that the control logic 3010 may be signaled to or configured to operate the boot switch 3005 at times other than booting to allow for faster access to the BIOS ROM 355, the crypto-processor 305 (when present), or, for example, other hardware 3015 on the LPC bus.

In various embodiments of the present invention, the security of SMM is assumed. It is noted that one or more so-called "backdoors" may exist that could be exploited to compromise the security of SMM.
The issues contemplated include misuse of the hardware debug test (HDT) mode of the processor 805 as well as the ability of the processor 805 to load and replace microcode. Illustrated in FIGS. 20A-D are various embodiments 805A, 805B, 805C, 805D of the processor 805, each of which includes various security protections against one or more backdoor attacks.

In FIG. 20A, the processor 805A includes HDT control logic 3110A, HDT reset logic 3120A, and one or more registers, including an HDT enable register 3115 and non-volatile random access memory (NVRAM) 3130. As shown, the HDT control logic 3110A is coupled to receive a plurality of input signals through a plurality of HDT pins 3105. The HDT control logic 3110A is further coupled to the HDT enable register 3115. The HDT reset logic 3120A is coupled to receive a RESET signal over a line 3125 and to access (i.e. read and write) the HDT enable register 3115 and the NVRAM 3130.

In FIG. 20B, the processor 805B includes HDT control logic 3110B, HDT reset logic 3120B, and two registers, including the HDT enable register 3115 and an HDT enable lock register 3135. As shown, the HDT control logic 3110B is coupled to receive a plurality of input signals through the plurality of HDT pins 3105. The HDT control logic 3110B is further coupled to the HDT enable register 3115 and the HDT enable lock register 3135. The HDT reset logic 3120B is coupled to receive the RESET signal over the line 3125 and a signal, such as over a line 3140, through a pull-up (or pull-down) resistor 3145.

In FIG. 20C, the processor 805C includes microcode control logic 3155, microcode loader enable reset logic 3165, and one or more registers, including a microcode loader enable register 3160. As shown, the microcode control logic 3155 is coupled to receive a plurality of input signals through a plurality of microcode input pins 3150. The microcode control logic 3155 is further coupled to the microcode loader enable register 3160.
The microcode loader enable reset logic 3165 is coupled to receive the RESET signal and to access the microcode loader enable register 3160.

In FIG. 20D, the processor 805D includes HDT control logic 3110 integrated with the microcode control logic 3155, the HDT reset logic 3120, and the MLE reset logic 3165 to form control/reset logic 3175. The HDT enable register 3115 and the microcode loader enable register 3160 are integrated into a multibit lock register 3180. A plurality of inputs 3170 are shown to the control/reset logic 3175. The plurality of inputs 3170 may include the HDT inputs 3105, the microcode inputs 3150, and/or the reset signaling means. Other embodiments (not shown) integrate only the HDT control logic 3110 and the microcode control logic 3155, or just the HDT reset logic 3120 and the MLE reset logic 3165.

According to various embodiments of the present invention, the registers 3115, 3135, and 3160, as well as the NVRAM 3130, include storage space for one or more bits. In one embodiment, each register is configured to store a single bit. It is noted that the enable registers 3115 and 3160 may also be integrated into a single lock register, and the HDT enable lock register 3135 may be used as a microcode enable lock register. It is contemplated that the registers 3115, 3135, 3160, and/or 3180 could be included in the SMM MSRs 807.

In various embodiments, the HDT enable register 3115 is configured to store one or more HDT enable bits signifying whether HDT mode is enabled or disabled. The HDT reset logic 3120 is configured to set the one or more HDT enable bits to a default state upon a reset of the processor 805.

Multiple embodiments for controlling the HDT modes are contemplated, such as those illustrated in FIGS. 20A and 20B. In one embodiment, the HDT mode is enabled as the default on non-production processors 805 used for engineering and testing. The HDT mode may be disabled as the default in standard production processors 805.
In another embodiment, illustrated in FIG. 20A, the default state may be stored in and read from the NVRAM 3130. In this embodiment, the default state may be changeable, but in the illustrated embodiment, the default state is set to disabled. In still another embodiment, illustrated in FIG. 20B, the default state is set using a strapping option. The default value is provided to the HDT reset logic 3120B through the pull-up (or pull-down) resistor 3145.

Multiple embodiments for controlling the microcode loader modes are also contemplated, such as those illustrated in FIGS. 20C and 20D. In one embodiment, not illustrated, the microcode update mode is enabled as the default on non-production processors 805 used for engineering and testing. The microcode update mode may be disabled as the default in standard production processors 805. In another embodiment, similar to that illustrated in FIG. 20A, the default state may be stored in and read from the NVRAM 3130. In this embodiment, the default state may be changeable, but in the illustrated embodiment the default state is set to disabled. In still another embodiment, similar to that illustrated in FIG. 20B, the default state is set using a strapping option. The default value is provided to the MLE reset logic 3165 through the pull-up (or pull-down) resistor 3145.

Turning now to FIG. 21, a method 3200 for initiating the HDT mode is shown. In response to receiving a request to enter the HDT mode (step 3205), the HDT control logic 3110 checks the status of the one or more HDT enable bits to see if the HDT mode is enabled or disabled (step 3210). If the HDT mode is enabled (step 3215), then the HDT control logic 3110 initiates the HDT mode (step 3220). If the HDT mode is disabled (step 3215), then the HDT control logic 3110 will not initiate the HDT mode.

Turning now to FIG. 22, a method 3300 for changing the HDT mode enable status, which includes an HDT mode lock, is shown.
In response to receiving a request to enter the HDT mode (step 3305), the HDT control logic 3110 checks the status of the one or more HDT enable lock bits to determine if the HDT lock mode is locked or unlocked (step 3310). If the HDT lock mode is unlocked (step 3315), then the HDT control logic 3110 initiates HDT mode (step 3335). If the HDT lock mode is locked (step 3315), then the HDT control logic 3110 requests authorization to change the HDT lock mode status (step 3320). If the change is authorized (step 3325), then the HDT control logic 3110 changes the HDT mode lock bit to unlocked (step 3330). If the change is not authorized (step 3325), then the HDT control logic 3110 does not change the HDT mode lock bit.

In various embodiments, the HDT enable status may be changed by setting or resetting the one or more HDT enable status bits. For example, the HDT mode may be disabled, but inside SMM, a predetermined input to the HDT control logic 3110 may signal the HDT control logic 3110 to change the HDT mode status to enabled. In the embodiment of FIG. 20A, for example, once signaled, the HDT control logic 3110 would change the status of the HDT enable bit from disabled to enabled.

Referring back to the embodiment of FIG. 20B, for example, in response to receiving a request to change the HDT mode status, the HDT control logic 3110 checks the status of the one or more HDT enable lock bits to see if the HDT lock mode is enabled or disabled. If the HDT lock mode is disabled, then the HDT control logic 3110 may change the HDT mode status. If the HDT lock mode is enabled, then the HDT control logic 3110 will not change the HDT mode status.

It is noted that the method 3300 may alternatively terminate if the HDT mode lock status is locked (step 3315), instead of requesting authorization to change the HDT mode lock status (step 3320).
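The lock-gated enable flow of FIGS. 21 and 22 can be modeled in software as a rough behavioral sketch. The class and attribute names below are illustrative assumptions, not part of the disclosed hardware:

```python
class HdtControl:
    """Behavioral sketch of the HDT enable/lock checks of FIGS. 21 and 22."""

    def __init__(self, enabled=False, locked=True):
        self.enabled = enabled   # HDT enable bit (register 3115)
        self.locked = locked     # HDT enable lock bit (register 3135)

    def request_hdt(self):
        """Method 3200: initiate HDT mode only if the enable bit is set."""
        return self.enabled      # True -> HDT mode initiated

    def change_enable(self, new_state, authorized):
        """Method 3300: a locked enable bit may only change if an
        authorization step (e.g. performed inside SMM) succeeds."""
        if self.locked:
            if not authorized:
                return False     # change refused; lock bit unchanged
            self.locked = False  # unlock after successful authorization
        self.enabled = new_state
        return True
```

A production part would come out of reset with `enabled=False, locked=True`, matching the disabled-by-default behavior described above for standard production processors.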
The method 3300 may also include receiving a request to change the HDT mode lock status (not shown) prior to the method 3300 requesting authorization (step 3320).

Turning now to FIG. 23, a method 3400 for initiating the microcode loader is shown. In response to receiving a request to initiate the microcode update mode (step 3405), the microcode control logic 3155 checks the status of the one or more microcode enable bits to see if microcode update mode is enabled or disabled (step 3410). If the microcode update mode is enabled (step 3415), then the microcode control logic 3155 initiates the microcode update mode (step 3420). If the microcode update mode is disabled (step 3415), then the microcode control logic 3155 will not initiate the microcode update mode.

Turning now to FIG. 24, a method 3500 for changing the microcode update mode enable status, which includes a microcode mode lock, is shown. In response to receiving a request to enter the microcode mode (step 3505), the microcode control logic 3155 checks the status of the one or more microcode enable lock bits to see if the microcode mode is locked or unlocked (step 3510). If the microcode lock mode is unlocked (step 3515), then the microcode control logic 3155 initiates the microcode mode (step 3535). If the microcode lock mode is locked (step 3515), then the microcode control logic 3155 requests authorization to change the microcode mode lock status (step 3520). If the change is authorized (step 3525), then the microcode control logic 3155 changes the microcode mode lock bit to unlocked (step 3530). If the change is not authorized (step 3525), then the microcode control logic 3155 does not change the microcode mode lock bit.

In various embodiments, the microcode enable status may be changed by setting or resetting the one or more microcode enable status bits.
For example, the microcode mode may be disabled, but inside SMM, a predetermined input to the microcode control logic 3155 may signal the microcode control logic 3155 to change the microcode mode status to enabled. In the embodiment of FIG. 20C, for example, once signaled, the microcode control logic 3155 will change the status of the one or more microcode enable bits from disabled to enabled.

In response to receiving a request to change the microcode mode status, the microcode control logic 3155 may check the status of the one or more microcode enable lock bits to determine if the microcode lock mode is enabled or disabled. If the microcode lock mode is disabled, then the microcode control logic 3155 may change the microcode mode status. If the microcode lock mode is enabled, then the microcode control logic 3155 will not change the microcode mode status.

It is noted that the method 3500 may alternatively terminate if the microcode update lock status is locked (step 3515), instead of requesting authorization to change the microcode update lock status (step 3520). The method 3500 may also include receiving a request to change the microcode update lock status (not shown) prior to the method 3500 requesting authorization (step 3520).

FIGS. 25A, 25B, 26, and 27 illustrate flowcharts of embodiments of methods 3600A, 3600B, 3610A, and 3620 for secure access to storage, according to various aspects of the present invention. FIG. 25A shows a flowchart of the method 3600A where a security device maintains secure access to a storage device, according to one aspect of the present invention. FIG. 25B shows a flowchart of the method 3600B where a crypto processor maintains secure access to a memory, according to one aspect of the present invention. FIG. 26 shows a flowchart of the method 3610A where a security device provides secure access control to a storage device using a challenge-response authentication protocol, according to one aspect of the present invention. FIG.
27 shows a flowchart of the method 3620 where a secret is used to unlock data access to a secure storage device.

Turning to FIG. 25A, the method 3600A includes the security device receiving a transaction request for a storage location associated with the storage device connected to the security device (block 3605A). The security device provides access control for the storage device (block 3610A). One embodiment of the access control shown in block 3610A is illustrated by the method 3610A shown in FIG. 26.

According to the method 3600A, the security device maps the storage location in the transaction request according to the address mapping of the storage device (block 3615A). The security device provides the transaction request to the storage device (block 3620A). Under normal circumstances, the storage device will perform the requested transaction (block 3625A).

In various embodiments, the security device associated with the method 3600A may include a crypto processor or a block of logic configured to provide security for the storage device. The storage device may include an electronic storage medium like a memory or a magnetic or optical storage medium like a hard drive or an optical drive. The memory may include a RAM, a ROM, or a flash memory. The hard drive or optical drive may be fixed or removable. The transaction request may include, for example, a read request, a write request, or a combination of read and write requests.

It is noted that in various embodiments, the memory (or the storage device) may include further security hardware of its own. The further security hardware may include access logic, a random number generator, and a secret, such as is illustrated above in FIG. 7C or 7D.

Turning to FIG. 25B, the method 3600B includes the crypto-processor receiving a transaction request for a memory location associated with the memory connected to the crypto-processor (block 3605B). The crypto-processor provides access control for the memory (block 3610B).
One embodiment of the access control shown in block 3610B is illustrated in FIG. 26.

According to the method 3600B, the crypto-processor maps the memory location in the transaction request according to the address mapping of the memory (block 3615B). The crypto-processor provides the transaction request to the memory (block 3620B). Under normal circumstances, the memory will perform the requested transaction (block 3625B).

Turning to FIG. 26, the method 3610A includes the security device determining if a lock is in place for the storage location (block 3705). A transaction request may have been received for the storage location. If the lock is not in place (block 3710), then the method 3610A moves past the authentication portion. If the lock is in place (block 3710), then the security device provides a challenge for the storage location (block 3715). The challenge may be associated with the storage location or with the storage device that includes the storage location. The challenge may be in response to the transaction request. Next, the security device receives a response to the challenge (block 3720). The security device evaluates the response by comparing the response to an expected response (block 3725). If the evaluation is not correct (block 3730), then the method ends. If the evaluation is correct (block 3730), then the method proceeds with the security device providing the transaction request to the storage device (block 3735).

In various embodiments, the security device associated with the method 3610A may include a crypto processor or a block of logic configured to provide security for the storage device. The storage device may include an electronic storage medium like a memory or a magnetic or optical storage medium like a hard drive or an optical drive. The memory may include a RAM, a ROM, or a flash memory. The hard drive or optical drive may be fixed or removable.
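The challenge-response gate of method 3610A can be sketched as follows. The SHA-1 construction mirrors the hash noted in the discussion of FIG. 28 below; the function names are illustrative assumptions:

```python
import hashlib

def expected_response(secret: bytes, challenge: bytes) -> bytes:
    # A SHA-1 hash over the secret and a number known to both parties,
    # as is typical for challenge-response authentication (cf. FIG. 28).
    return hashlib.sha1(secret + challenge).digest()

def access_storage(locked, secret, challenge, response, forward):
    """Sketch of method 3610A (blocks 3705-3735): when a lock is in place,
    the transaction is forwarded to the storage device only if the response
    to the challenge matches the expected response."""
    if locked and response != expected_response(secret, challenge):
        return None       # evaluation incorrect; the method ends
    return forward()      # provide the transaction request to storage
```

With no lock in place, the sketch moves past the authentication portion and forwards the request unconditionally, as in block 3710.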
The transaction request may include, for example, a read request, a write request, or a combination of read and write requests.

Turning to FIG. 27, the method 3620 includes storing a secret in a storage device (block 3805). The storage device may include only a portion of a physical device. The storage device itself may be embodied as any storage device known in the art. The method 3620 may also include storing data in the storage device (block 3810) and storing code in the storage device (block 3815). The method 3620 may also include providing a lock (e.g. a lock bit or bits) to secure data stored in the storage device or the storage device itself (block 3820). Note that the above steps of method 3620 (blocks 3805-3820) may be performed relatively proximate in time, such as when the storage device is manufactured, installed, or initialized.

The method 3620 also includes reading the secret from the storage device (block 3825), such as, for example, when the computer system including the storage device or coupled to communicate with the storage device is booted. For the secret to remain secure, the reading of the secret preferably occurs when the storage device is in a secure or trusted configuration. The method 3620 may also read the code from the storage device (block 3830). The method 3620 stores the secret in a secure location (block 3825) and also may store the code in the secure location (block 3830). The secure location may be in the SMM memory space previously described, or in a secure memory, register, or other storage location in the computer system 100, such as in the processor 805 or in the south bridge 330.

In various embodiments, the storage device associated with the method 3620 may include an electronic storage medium like a memory or a magnetic or optical storage medium like a hard drive or an optical drive. The memory may include a RAM, a ROM, or a flash memory. The hard drive or optical drive may be fixed or removable.
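As a rough software analogue of method 3620, the provisioning and boot-time read steps might be sketched as below. The class and field names are illustrative assumptions, not the disclosed hardware:

```python
class SecureStorageDevice:
    """Sketch of method 3620: a secret, data, and code are provisioned at
    manufacture or initialization (blocks 3805-3820); the secret is later
    read into a secure location at boot (blocks 3825-3830)."""

    def __init__(self, secret: bytes, data: bytes, code: bytes):
        self.secret = secret
        self.data = data
        self.code = code
        self.locked = True   # lock bit(s) securing the stored data

    def read_at_boot(self, trusted: bool):
        """The secret should only be read while the storage device is in
        a secure or trusted configuration; otherwise refuse."""
        if not trusted:
            return None
        # Model the 'secure location' (e.g. SMM memory space) as a dict.
        return {"secret": self.secret, "code": self.code}
```

The returned dictionary stands in for the secure memory, register, or other storage location into which the secret and code are copied.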
A read in method 3620 may describe any transaction request, such as, for example, a read request, a write request, or a combination of read and write requests.

FIG. 28 illustrates a prior art challenge-response method 3900 for authentication. The method has a requestor making an access request, in block 3905. In block 3910, a gatekeeper receives the access request and provides a challenge to the requestor to authenticate the requestor's authority to make the access request. In block 3915, the requestor receives the challenge and provides a response to the challenge to authenticate the requestor's authority to make the access request. In block 3920, the gatekeeper receives the response to the challenge and compares the response to an expected response.

In decision block 3925, the gatekeeper determines if the response is equal to the expected response. If the response is not equal to the expected response, in decision block 3925, then the method ends, preventing the requestor from completing the access request. If the response is equal to the expected response, in decision block 3925, then the method continues with block 3930. In block 3930, the gatekeeper approves the access request. Typically, a SHA-1 hash, well known in the art, of the secret and a number known to both the gatekeeper and the requestor is used to demonstrate knowledge of the secret.

Turning to FIGS. 29A, 29B, 29C, 29D, and 29E, an embodiment of a computer subsystem 4000A, including a south bridge 330D and I/O devices, an embodiment of a processor 805E, an embodiment of a processor 805F, an embodiment of a computer subsystem 4000B, including a processor 805 and other system devices, and an embodiment of a computer system 4000C, including an embodiment of a processor 805 and various devices, are shown, including Globally Unique IDentifiers (GUIDs) 4099 and/or a stored secret 4095 and/or a system GUID 4085. In FIG.
29A, the south bridge 330D includes an embodiment of the security hardware 370 coupled to the LPC BIL 134D and the USB interface logic 134C. The embodiment of the security hardware 370 includes the random number generator (RNG) 455, a storage location storing a secret 4095, and storage locations for storing a GUID table 4098. The GUID table 4098 may include a GUID for the south bridge 330D itself. The south bridge 330D is coupled through the USB interface logic 134C to a USB hub 4015 including a GUID 4099B. Coupled to the USB hub 4015 are a biometric device 4020 and a smart card reader 4025. The biometric device 4020 includes the secret 4095 and a storage location for storing a GUID 4099A. The smart card reader 4025 includes the secret 4095 and a storage location for storing a GUID 4099D. Coupled through the LPC bus 118 to the LPC BIL 134D are the Super I/O chip 120 and a keyboard 4019, including a GUID 4099C.

In FIG. 29B, the processor 805E includes a GUID 4099E. In FIG. 29C, the processor 805F includes the GUID table 4098, either in place of or in addition to the GUID table 4098 shown in the south bridge 330D, shown in FIG. 29A. The GUID table 4098 of the processor 805F may include a GUID for the processor 805F itself.

In FIG. 29D, the computer subsystem 4000B includes the processor 805, which may represent any of the embodiments of the processor 805, such as the processor 805E shown in FIG. 29B or the processor 805F shown in FIG. 29C, coupled to a north bridge 810 including a GUID 4099F through the local bus 808. The north bridge 810 is shown coupled to an AGP device 4008 including a secret 4095 (which could also include a GUID 4099G) and a memory 4006 including a plurality of memory modules, shown as DIMMs (Dual In-line Memory Modules) 4060A-4060C. Each of the DIMMs 4060A-4060C includes a GUID 4099H-4099K, respectively. In alternative embodiments, the GUIDs 4099 may be replaced by a storage location to store the secret 4095 (such as shown in the AGP 4008 and as in FIG.
29A) or augmented by the storage location to store the secret 4095 and the GUID 4099. Note that the computer subsystems 4000A and 4000B may be joined by a connection between the north bridge 810 and the south bridge 330D.

According to one embodiment of the present invention, at boot time or during some other trusted set-up, the south bridge 330D and/or the processor 805F or other master device transmits the secret 4095 to each of the devices coupled to the master device capable of storing the secret 4095. Thus, in the illustrated embodiment of FIG. 29A, the USB hub 4015, the biometric device 4020, and the smart card reader 4025 would each store the secret 4095. In other words, during the trusted set-up, the device or devices become known to the master device through an authentication routine, and the master device communicates the secret 4095 to those devices that authenticate properly as a trusted component of the computer subsystem 4000 or some part of the computer system. During data requests or transfers, the master device transmits a random number (or at least a nonce, a number that is used only once) to the device along with the data request. The device may encrypt the data using the random number (or the nonce) and the secret before transmitting the data to the master device. Whether or not the data is encrypted, the device returns the random number (or the nonce) with the data as an authenticator of the data.

As an example of this embodiment, consider the biometric device 4020 of FIG. 29A as a fingerprint scanner 4020. Placing a finger on the fingerprint scanner 4020 may cause the fingerprint scanner 4020 to send an interrupt to the system. The fingerprint scanner 4020 scans the fingerprint of the finger on the fingerprint scanner 4020 to create fingerprint data. The system notifies the south bridge 330D, which sends the nonce to the fingerprint scanner 4020.
The fingerprint scanner 4020 receives the nonce and returns the fingerprint data and the nonce to the south bridge 330D in response to receiving the nonce. The fingerprint scanner 4020 may also encrypt the fingerprint data using the nonce in lieu of sending the fingerprint data in the clear (i.e. not encrypted).

According to another embodiment of the present invention, at boot time or during some other trusted set-up, the south bridge 330D and/or the processor 805F or other master device reads the GUIDs from each device coupled to the south bridge 330D (i.e. the master device) capable of storing or actually storing a GUID 4099. Thus, in the illustrated embodiment of FIG. 29A, the USB hub 4015, the biometric device 4020, the smart card reader 4025, and the keyboard 4019 each have GUIDs 4099B, 4099A, 4099D, and 4099C, respectively. The south bridge 330D stores the GUIDs for each device in the GUID table 4098. In other words, during the trusted set-up, the device or devices become known to the south bridge 330D through an authentication routine, and the devices communicate their respective GUIDs 4099 to the south bridge 330D, which authenticates them as trusted components of the computer subsystem 4000 or some part of the computer system.

During data requests or transfers, the south bridge 330D or other master device (e.g. the processor 805E or 805F) transmits a random number (or at least a nonce) to the device along with the data request. The device may encrypt the data using the random number (or the nonce) and the GUID before transmitting the data to the south bridge 330D. Whether or not the data is encrypted, the device returns the random number (or the nonce) with the data as an authenticator of the data.

As an example of this embodiment, consider a request from the system (e.g. the master device) to the keyboard 4019 for data. The system may request the keyboard 4019 to submit the GUID 4099C with the data.
The GUID 4099C in this case may be combined with the data using a hash function (i.e. a one-way function well known in the art). The data are transmitted from the keyboard 4019 to the system along with the GUID 4099C. The master device, such as the security hardware 370 (alternatively the crypto-processor 305, such as shown in FIG. 4), authenticates the data.

In another embodiment of the present invention, one or more devices (such as 4035 shown in FIG. 29E) include both the GUID 4099 and the storage location for the secret 4095. In this embodiment, the system master, e.g. the south bridge 330D, and the devices 4035 use the GUID 4099, the secret 4095, or both to authenticate data transmissions.

It is noted that other I/O buses besides the USB 116 and the LPC bus 118 may be used in various embodiments of the present invention. For example, a hard drive (not shown) including a GUID 4099 and/or storage locations for the secret 4095 may be coupled to the IDE interface 114 (shown in FIG. 1A). In another example, the biometric device 4020 may couple to the computer subsystem 4000 through the PCI bus 110 or a serial port or a parallel port, such as through the Super I/O chip 120. Other I/O buses and connections are contemplated.

As currently implemented by some manufacturers, using 128 bits for the GUID 4099, up to 10³⁶ possible values are available for any GUID 4099. The sheer number of possible values allows for a device without a GUID 4099 to be assigned a random GUID 4099 with a very low possibility of duplication. The use of the random number or the nonce may prevent a replay attack using a device, such as the biometric device 4020. Note that devices without GUIDs 4099 established during manufacturing may create a random GUID 4099, either for each boot or reset or for each data transmission.

It is contemplated that, for example, a part of the memory, such as a memory controller (e.g. see memory 4006 in FIG.
29D) could include a GUID table 4098 and be the master device for the memory modules, such as DIMMs 4060A-4060C. The memory controller could register the GUIDs 4099 for the DIMMs 4060. The memory controller could then give its own GUID 4099 to another master device (e.g. north bridge 810 or processor 805). In this way, transmissions between and among system devices could be registered as being from known devices. Other subsystem master device arrangements are also contemplated, such as the north bridge 810 and the south bridge 330D as local masters, with the processor 805 being the system master. Additional master devices could include the USB hub 4015 for the other USB devices and a drive controller for its attached storage drives (e.g. hard drives or optical drives).

Turning now to FIG. 29E, an embodiment of the computer system 4000C is illustrated with a further embodiment of system components that are recognized by the computer system. As shown, an embodiment of the processor 805 is coupled to an embodiment of the north bridge 810. A memory subsystem 4006 and an embodiment of a south bridge 330E are also coupled to the north bridge 810. A generic device 4035 and an embodiment of the crypto-processor 305 are coupled to the south bridge 330E. The south bridge 330E includes security hardware 370, including a storage location for a system GUID 4085 and the GUID table 4098 described above. In the illustrated embodiment of the computer system 4000C, each of the processor 805, memory 4006, the north bridge 810, the device 4035, and the crypto-processor 305 includes logic 4080, a storage location for the system GUID 4085, a storage location for an introduced bit 4090, and a respective GUID 4099, such as GUIDs 4099P, 4099F, 4099M, or 4099L. Note that the logic 4080 of FIG. 29E may be implied in FIGS. 29A-29D.

In one embodiment, upon first being placed in the computer system 4000C, a system master introduces each device 4035 to the computer system 4000C.
For the purposes of this aspect of the present invention, a "device" may be any component or subsystem or master device that may be a part of the computer system 4000C. Examples include the processor 805, the north bridge 810, the memory controller 4006 or memory modules (not shown), the south bridge 330, USB devices (shown elsewhere), other I/O devices, and the crypto-processor 305. For the purposes of this discussion, reference will be made to device 4035, but device 4035 is intended to be generic. In particular, the device 4035 may be removable from the computer system 4000C and normally usable in another computer system (not shown) other than computer system 4000C, including data drives and I/O devices. The system master shown in FIG. 29E is the south bridge 330E. The processor 805 may alternatively be the system master. A logic circuit (not shown) on or a part of a motherboard (not shown) for the computer system 4000C, or on a daughter card (not shown), may also be the system master.

As each device 4035, 805, 4006, 330E, 305, etc. is introduced to the computer system 4000C, the system master provides the system GUID 4085 to the device 4035. The device 4035 stores the system GUID 4085. The device 4035 provides the system master with its GUID 4099M, and the system master stores the GUID 4099M of the device in the GUID table 4098. Upon exchanging GUIDs, the device 4035 sets the introduced bit 4090. While the introduced bit 4090 is set, the device 4035 is "married" to the computer system 4000C and will only exchange data with the computer system 4000C. The device 4035 and the computer system 4000C may also "divorce by mutual consent" by authenticating their respective GUIDs and having the device 4035 reset the introduced bit.

Each data transfer in the computer system 4000C may involve the exchange of the GUID 4099 and/or the system GUID 4085.
A failure to authenticate the system GUID 4085 results in the device 4035 not responding with the requested data or simply not responding to the data request. Should the device 4035 request data from another device in the computer system 4000C without providing or authenticating its own GUID 4099M, the computer system 4000C will not respond with the requested data or will simply not respond to the data request from the device 4035.

To prevent complete loss of data or use of the device 4035 and the computer system 4000C, a maintenance mode or "divorce court" may be available to force the introduced bit 4090 to be reset. For example, a manufacturer may place a master ID value in each of a batch of components to allow for a repair facility to reset the introduced bit 4090.

In various embodiments, the logic 4080 may be configured to provide requested data using a hash function on the GUID 4099M and either a nonce, a random number, or the requested data. For example, the processor 805 may request data from the memory 4006. The processor 805 may provide a random number and the result of a hash of the random number and either the GUID 4099 for the memory 4006 or the system GUID 4085. The memory 4006 compares the result of the hash from the processor 805 with its own calculation of the hash value before responding to the data request from the processor 805.

In another embodiment, the device 4035 (as well as other system devices) does not store the system GUID 4085. In this embodiment, the device 4035 only responds to a data transaction when its GUID 4099M is provided with the data transaction. To initiate a data transaction, the device 4035 demonstrates its own GUID 4099M to the system master 330E, which authenticates the device 4035 as being introduced to the computer system 4000C and thus trusted. Note that the secret 4095 may be substituted for the system GUID 4085 and used in place of the respective GUIDs 4099.
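The challenge-response exchange described above (a requester supplies a nonce or random number, and the responder proves knowledge of a GUID or secret by hashing it with that nonce before any data flows) can be sketched as follows. SHA-256 and the GUID values are illustrative assumptions; the description does not mandate a particular hash function.

```python
import hashlib
import secrets

def hash_proof(guid: bytes, nonce: bytes) -> bytes:
    # Hash the GUID (or system GUID/secret) together with the nonce.
    return hashlib.sha256(guid + nonce).digest()

# Requester side (e.g. the processor requesting data from the memory):
memory_guid = b"GUID-4099-memory"        # hypothetical GUID value
nonce = secrets.token_bytes(16)          # fresh random challenge
expected = hash_proof(memory_guid, nonce)

# Responder side: recompute the hash before honoring the data request.
def respond(request_nonce: bytes, request_proof: bytes,
            own_guid: bytes, data: bytes):
    if hash_proof(own_guid, request_nonce) != request_proof:
        return None                      # authentication failed: no response
    return data
```

A failed comparison simply yields no data, matching the "not responding to the data request" behavior described above.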
Note also that the device 4035 may be used in other computer systems other than computer system 4000C so long as the device 4035 has not been introduced to the computer system 4000C. After the device 4035 has been introduced to the computer system 4000C and the introduced bit 4090 has been set, the device is only usable in the computer system 4000C until the introduced bit 4090 has been reset. Note that the introduced bit 4090 is preferably stored in non-volatile memory.

Turning now to FIGS. 30A and 30B, flowcharts of embodiments of methods 4100A and 4100B for operating a computer system including a biometric device, such as the biometric device 4020 shown in FIG. 29A, are illustrated. In FIG. 30A, the method 4100A includes the biometric data being sent in the clear along with the result of a hash function using a secret and a nonce or random number. In FIG. 30B, the method 4100B includes the biometric data being sent in encrypted form and an indication of the nonce or random number being sent as the result of the hash using the secret and the nonce or random number. The nonce or random number may be sent in the clear in all or only some of the transmissions in the data transaction. Note that the secret may be an individual secret, such as a GUID of a device, or a group secret, such as a system GUID, a sub-system GUID, or both the individual secret and the group secret. The secret may be programmed at manufacture, established at boot time, a random number picked during a trusted set-up, or a combination thereof.

In FIG. 30A, the method 4100A includes a biometric data transaction being requested involving a biometric device, in step 4110. A nonce or random number is provided to the biometric device, in step 4115. The biometric device responds to the biometric data transaction request with the requested biometric data and the result of the hash function using the secret and the nonce or random number, in step 4120A.
The result of the hash function is compared to an expected value for the hash function, in step 4125A. If the result of the hash function is not the same as the expected value, in the decision block 4130, then the transmitted biometric data are rejected, in step 4135. If the result of the hash function is the same as the expected value, in the decision block 4130, then the transmitted biometric data are accepted as the requested biometric data, in step 4140.

In FIG. 30B, the method 4100B includes a biometric data transaction being requested involving a biometric device, in step 4110. A nonce or random number is provided to the biometric device, in step 4115. The biometric device responds to the biometric data transaction request with the requested biometric data in encrypted form and the result of the hash using a secret and the nonce or random number, in step 4120B. The result of the hash is compared to an expected value for the hash of the secret and the nonce or random number, in step 4125B. If the result of the hash is not the same as the expected value for the result of the hash, in the decision block 4130, then the transmitted biometric data are rejected, in step 4135. If the result of the hash is the same as the expected value for the result of the hash, in the decision block 4130, then the transmitted biometric data in encrypted form are accepted as the requested biometric data, in step 4140.

Another embodiment of the method 4100 includes providing a nonce or random number, receiving biometric data, transmitting the biometric data and the nonce or random number, and authenticating the biometric data using the nonce or random number. In still another embodiment, the method 4100 may further include encrypting the biometric data, receiving the encrypted biometric data and the nonce or random number, and decrypting the encrypted biometric data. This embodiment may only transmit the encrypted biometric data and the nonce or random number.
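The clear-data flow of method 4100A can be sketched as below. The secret value, the device reading, and SHA-256 are assumptions chosen for illustration; the step comments map to the flowchart blocks described above.

```python
import hashlib
import secrets

SECRET = b"system-secret"  # established at trusted set-up (assumed value)

def biometric_device(nonce: bytes):
    # Device side (step 4120A): return the reading in the clear plus
    # the result of the hash using the secret and the nonce.
    reading = b"fingerprint-template"
    return reading, hashlib.sha256(SECRET + nonce).digest()

def request_biometric(device):
    nonce = secrets.token_bytes(16)                     # step 4115
    data, proof = device(nonce)                         # step 4120A
    expected = hashlib.sha256(SECRET + nonce).digest()  # step 4125A
    if proof != expected:
        return None                                     # step 4135: reject
    return data                                         # step 4140: accept
```

Method 4100B differs only in that the reading would travel in encrypted form rather than in the clear.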
In still another embodiment, the method 4100 may include encrypting the biometric data using the nonce or random number and decrypting the encrypted biometric data using the nonce or random number.

The method 4100 may also include receiving a secret, storing the secret, transmitting at least an indication of the secret with the biometric data, receiving at least the indication of the secret, and authenticating the biometric data using at least the indication of the secret. In a further embodiment, the method 4100 may include encrypting the biometric data using the secret, and decrypting the encrypted biometric data using the secret. In still another embodiment, the method 4100 may include encrypting the biometric data using the secret and the nonce or random number, and decrypting the encrypted biometric data using the secret and the nonce or random number. In one embodiment, the secret may include a system GUID. The method 4100 may also include providing a GUID, encrypting the biometric data using the GUID, the secret, and the nonce or random number, and decrypting the encrypted biometric data using the GUID, the secret, and the nonce or random number.

It is noted that in various embodiments, receiving the biometric data may occur in response to providing the nonce or random number. In other embodiments, receiving the biometric data may occur only in response to providing the nonce or random number. Various steps of various embodiments of the method may be performed by different entities, including, but not limited to, the biometric device, the master device, and the system master.

Turning now to FIGS. 31A, 31B, 32A, 32B, 32C, and 33, flowcharts of embodiments of methods 4200A, 4200B, 4300A, 4300B, 4300C, and 4400 for authenticating a device in a computer system, such as computer systems including computer subsystems 4000A, 4000B, and 4000C of FIGS. 29A, 29D, and 29E, are illustrated. In the method of FIG.
31A, a secret is passed in encrypted form for authentication, but the data are transmitted in the clear. In the method of FIG. 31B, the secret and data are both passed in encrypted form. In the method of FIG. 32A, a device GUID is passed in encrypted form for authentication, but the data are transmitted in the clear. In the method of FIG. 32B, the device GUID and data are both passed in encrypted form. In the method of FIG. 32C, the secret, the device GUID, and the data are passed in encrypted form. In the method of FIG. 33, the device and the computer system are authenticated to each other as the device is united to the computer system using the introduced bit 4090 shown in FIG. 29E.

In the method 4200A of FIG. 31A, a master device in the computer system transmits a secret to a device in the computer system during a trusted set-up, in block 4205. As noted elsewhere, the trusted set-up may occur, as examples, when the device is first introduced to the computer system or during a boot sequence of the computer system. A data transaction is requested involving the device in the computer system that knows the secret, in block 4210. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4200A and know the secret. A nonce or random number is provided to the device in the computer system that knows the secret, in block 4215.

If the data transaction request is a read of data from the device, in block 4220A, the device responds to the data transaction request with the requested data and a result of a hash using the secret and the nonce or random number. If the data transaction request is a write of data to or through the device, in block 4220A, the device responds to the data transaction request with the result of the hash using the secret and the nonce or random number.
Thus, in block 4220A, the device responds to the data transaction request and verifies its authorization to complete the data transaction request.

The method 4200A continues with the result of the hash using the secret and the nonce or random number being compared to an expected value for the result of the hash using the secret and the nonce or random number, in block 4225. If the comparison results are not the same, in decision block 4230, then the method continues by rejecting the transmitted data from the read or by not sending the data for the write, in block 4235. If the comparison results are the same, in decision block 4230, then the method continues by accepting the transmitted data from the read or by sending the data for the write, in block 4240A.

In the method 4200B of FIG. 31B, a master device in the computer system transmits a secret to a device in the computer system during a trusted set-up, in block 4205. A data transaction is requested involving the device in the computer system that knows the secret, in block 4210. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4200B and know the secret. A nonce or random number is provided to the device in the computer system that knows the secret, in block 4215.

If the data transaction request is a read of data from the device, in block 4220B, the device responds to the data transaction request with the requested data encrypted using the secret and the nonce or random number, along with a result of a hash using the secret and the nonce or random number. If the data transaction request is a write of data to or through the device, in block 4220B, the device responds to the data transaction request with the result of the hash using the secret and the nonce or random number.
Thus, in block 4220B, the device responds to the data transaction request and verifies its authorization to complete the data transaction request.

The method 4200B continues with the result of the hash using the secret and the nonce or random number being compared to an expected value for the result of the hash using the secret and the nonce or random number, in block 4225. If the comparison results are not the same, in decision block 4230, then the method continues by rejecting the transmitted data from the read or by not sending the data for the write, in block 4235. If the comparison results are the same, in decision block 4230, then the method continues by accepting the transmitted data from the read or by encrypting the data using the secret and the nonce or random number and sending the encrypted data for the write, in block 4240B.

In the method 4300A of FIG. 32A, a master device in the computer system reads the GUID for a device in the computer system during a trusted set-up, in block 4305. A data transaction is requested involving the device in the computer system with the known GUID, in block 4310. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4300A and have their GUIDs known to the computer system. A nonce or random number is provided to the device in the computer system with the known GUID, in block 4315.

If the data transaction request is a read of data from the device, in block 4320A, the device responds to the data transaction request with the requested data and a result of a hash using the GUID and the nonce or random number. If the data transaction request is a write of data to or through the device, in block 4320A, the device responds to the data transaction request with the result of the hash using the GUID and the nonce or random number.
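The encrypted reads of methods such as 4200B pair the hash-based proof with data encrypted under the secret (or GUID) and the nonce. A minimal sketch follows, using a hash-derived XOR keystream purely for illustration; this is not a cipher the description specifies, and the secret value is assumed.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive an illustrative keystream from the secret/GUID and the nonce.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice restores the plaintext.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

secret = b"shared-secret"                        # from the trusted set-up
nonce = secrets.token_bytes(16)                  # block 4215
ciphertext = xor_crypt(secret, nonce, b"requested data")  # block 4220B
proof = hashlib.sha256(secret + nonce).digest()           # hash result

# Master side: check the proof (block 4225), then decrypt the response.
assert proof == hashlib.sha256(secret + nonce).digest()
plaintext = xor_crypt(secret, nonce, ciphertext)
```

Because both sides can derive the same keystream from the shared secret and nonce, no key material travels with the transaction itself.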
Thus, in block 4320A, the device responds to the data transaction request and verifies its identity and authorization to complete the data transaction request.

The method 4300A continues with the result of the hash using the GUID and the nonce or random number being compared to an expected value for the result of the hash using the GUID and the nonce or random number, in block 4325. If the comparison results are not the same, in decision block 4330, then the method continues by rejecting the transmitted data from the read or by not sending the data for the write, in block 4335. If the comparison results are the same, in decision block 4330, then the method continues by accepting the transmitted data from the read or by sending the data for the write, in block 4340A.

In the method 4300B of FIG. 32B, a master device in the computer system reads the GUID for a device in the computer system during a trusted set-up, in block 4305. A data transaction is requested involving the device in the computer system with the known GUID, in block 4310. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4300B and have their GUIDs known to the computer system. A nonce or random number is provided to the device in the computer system with the known GUID, in block 4315.

If the data transaction request is a read of data from the device, in block 4320B, the device responds to the data transaction request with the requested data encrypted using the GUID and the nonce or random number, along with a result of a hash using the GUID and the nonce or random number. If the data transaction request is a write of data to or through the device, in block 4320B, the device responds to the data transaction request with the result of the hash using the GUID and the nonce or random number.
Thus, in block 4320B, the device responds to the data transaction request and verifies its identity and authorization to complete the data transaction request.

The method 4300B continues with the result of the hash using the GUID and the nonce or random number being compared to an expected value for the result of the hash using the GUID and the nonce or random number, in block 4325. If the comparison results are not the same, in decision block 4330, then the method 4300B continues by rejecting the transmitted data from the read or by not sending the data for the write, in block 4335. If the comparison results are the same, in decision block 4330, then the method 4300B continues by accepting the transmitted data from the read or by encrypting the data using the GUID and the nonce or random number and sending the encrypted data for the write, in block 4340B.

In the method 4300C of FIG. 32C, a master device in the computer system reads the GUID for a device in the computer system and transmits a secret to the device during a trusted set-up, in block 4306. A data transaction is requested involving the device in the computer system with the known GUID that knows the secret, in block 4311. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4300C and have their GUIDs known to the computer system and know the secret. A nonce or random number is provided to the device in the computer system with the known GUID that knows the secret, in block 4316.

If the data transaction request is a read of data from the device, in block 4320C, the device responds to the data transaction request with the requested data encrypted using the secret, the GUID, and the nonce or random number, along with a result of a hash using the secret, the GUID, and the nonce or random number.
If the data transaction request is a write of data to or through the device, in block 4320C, the device responds to the data transaction request with the result of the hash using the secret, the GUID, and the nonce or random number. Thus, in block 4320C, the device responds to the data transaction request and verifies its identity and authorization to complete the data transaction request.

The method 4300C continues with the result of the hash using the secret, the GUID, and the nonce or random number being compared to an expected value for the result of the hash using the secret, the GUID, and the nonce or random number, in block 4326. If the comparison results are not the same, in decision block 4330, then the method 4300C continues by rejecting the transmitted data from the read or by not sending the data for the write, in block 4335. If the comparison results are the same, in decision block 4330, then the method 4300C continues by accepting the transmitted data from the read or by encrypting the data using the secret, the GUID, and the nonce or random number and sending the encrypted data for the write, in block 4340C.

In the method 4400 of FIG. 33, a master device in the computer system reads the GUID for a device in the computer system and records the GUID in a GUID table during a trusted set-up where the device joins the computer system, in block 4405. The device may receive a system GUID from the master device and store the system GUID, in block 4410. The device sets an introduced bit in response to joining the computer system, in block 4415. The device is now considered to be "married" to the computer system. It is contemplated that one, more than one, or all of the devices in the computer system will follow the method 4400 and be "married" to the computer system.

The device receives a transaction request from the computer system, and the device checks if the introduced bit is set, in block 4420.
If the introduced bit is not set, in decision block 4425, then the method 4400 continues by not fulfilling the transaction request or by not responding to the transaction request, in block 4430. If the introduced bit is set, in decision block 4425, then the method 4400 may continue with the device requesting authentication from the computer system using the GUID before responding to the transaction request, in block 4435.

If the device requests authentication, or if the computer system authenticates directly, a nonce or random number may be provided to the device. If the transaction request is a read of data from the device, the device may respond to the transaction request with the requested data encrypted using the GUID and the nonce or random number, along with a result of a hash using the GUID and the nonce or random number. If the data transaction request is a write of data to or through the device, the device may respond to the data transaction request with the result of the hash using the GUID and the nonce or random number.

The method 4400 continues with the result of the authentication, in decision block 4440. If the authentication is not successful, in decision block 4440, then the method 4400 continues by not fulfilling the transaction request, in block 4430. If the authentication is successful, in decision block 4440, or if authentication is not used for the transaction request, then the method 4400 continues by fulfilling the transaction request, in block 4445.

In alternative embodiments, the authentication may be performed by different methods. As an example, the master device may authenticate itself to the device by providing at least an indication of the system GUID to the device. Authentication methods other than challenge-response, known in the art, may also be used.

Turning now to FIGS.
34 and 35, flowcharts of embodiments of methods 4500 and 4600 for removing the device from the computer system once the device has been united with ("married to") the computer system using the introduced bit 4090 shown in FIG. 29E are illustrated. In the method 4500 of FIG. 34, the removal of the device from the computer system is by joint consent, a "no-fault divorce." In the method 4600 of FIG. 35, the removal of the device from the computer system is forced in a maintenance mode using a maintenance (backdoor) key, a "court-ordered divorce."

The method 4500 of FIG. 34 includes the device or the master device initiating a request for the device to leave the computer system, in block 4505. The device and the master device authenticate themselves to each other using the GUID and/or the system GUID, in response to the request for the device to leave the computer system, in block 4510. The device resets the introduced bit in response to the device and the master device successfully authenticating each other, in block 4515.

The method 4500 of FIG. 34 may advantageously allow for easy removal of a device married to the computer system while maintaining system security. Authentication between the device and the master device may include any combination of the device providing at least an indication of the GUID to the master device, the device providing at least an indication of the system GUID to the master device, the master device providing at least an indication of the GUID to the device, and the master device providing at least an indication of the system GUID to the device. Any appropriate mechanism may be used for providing at least the indication, including the challenge-response method or other authentication method known in the art.

The method 4600 of FIG. 35 includes the device receiving a command for the device to leave the computer system, in block 4605.
The device also receives at least an indication of a maintenance key that the device can successfully authenticate, in block 4610. The device resets the introduced bit in response to the device receiving at least the indication of the maintenance key that the device can successfully authenticate, in block 4615.

The method 4600 of FIG. 35 may advantageously allow for easy removal of a device married to the computer system when the computer system is unresponsive or the device must be removed from the computer system for repair, while maintaining system security. The maintenance key may be programmed by the manufacturer of the device for each device, or for a class of devices. Authorized, trusted repair facilities are preferably the only ones with access to the maintenance key. A purchaser of a large number of similar devices could request a single maintenance key for all devices purchased.

Turning now to FIG. 36, a block diagram of an embodiment of a computer subsystem 4700 including bus interface logics 134B, 134C, 134D, and 134E with master mode capabilities in an embodiment of the south bridge 330F, according to one aspect of the present invention, is illustrated. In the embodiment shown, the south bridge 330F is coupled through the LPC bus 118 to an embodiment of a crypto-processor 305, including master mode logic 4790. The crypto-processor 305 is coupled to a secure protected storage 605. The bus interface logics 134B, 134C, 134D, and 134E of the south bridge 330F include IDE interface logic 134B, USB interface logic 134C, LPC bus interface logic 134D, and SMBus bus interface logic 134E. Each bus interface logic 134B, 134C, 134D, and 134E includes a master mode register 4799 including a master mode bit.
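The introduce/"married"/"divorce" life cycle of FIGS. 33 through 35 can be sketched as a small state machine. The GUID, system GUID, and maintenance key values are hypothetical, and SHA-256 stands in for an unspecified hash.

```python
import hashlib
import secrets

MAINTENANCE_KEY = b"factory-maintenance-key"   # assumed, set at manufacture

class MarriedDevice:
    def __init__(self, guid: bytes):
        self.guid = guid
        self.system_guid = None
        self.introduced = False                # the introduced bit 4090

    def introduce(self, system_guid: bytes):
        # Method 4400, blocks 4405-4415: exchange GUIDs, set the bit.
        self.system_guid = system_guid
        self.introduced = True

    def handle(self, request: str, nonce: bytes, proof: bytes):
        if not self.introduced:                # decision block 4425
            return None                        # block 4430: no response
        expected = hashlib.sha256(self.system_guid + nonce).digest()
        if proof != expected:                  # decision block 4440
            return None
        return "fulfilled:" + request          # block 4445

    def divorce_forced(self, key: bytes):
        # Method 4600: maintenance-mode ("court-ordered") reset.
        if key == MAINTENANCE_KEY:
            self.introduced = False            # block 4615

dev = MarriedDevice(b"GUID-4099")
dev.introduce(b"SYSTEM-GUID-4085")
nonce = secrets.token_bytes(16)
proof = hashlib.sha256(b"SYSTEM-GUID-4085" + nonce).digest()
result = dev.handle("read", nonce, proof)
dev.divorce_forced(MAINTENANCE_KEY)            # bit reset; device usable elsewhere
```

After the forced divorce, the same transaction is refused, mirroring the rule that a married device only exchanges data with its own computer system.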
Coupled to the USB interface logic 134C are the USB hub 315, the biometric device 320, and the smart card reader 325.

Master mode operations of the computer subsystem 4700 may advantageously allow for secure input of data, such as biometric data or smart card data, without the unencrypted data being accessible to the operating system. Master mode creates a secure communications channel between the master mode logic 4790 and the data input device.

Although the illustrated embodiment of FIG. 36 shows the master mode logic 4790 in the crypto-processor 305, it is contemplated that the master mode logic 4790 may also be incorporated into other devices in the computer system, such as in the security hardware 370 shown above. It is also contemplated that other devices, such as the USB hub 315, that pass through data may also include the master mode register 4799. In various embodiments, secure data input devices, such as the biometric device 320, the smart card reader 325, or a keyboard, also include the master mode register 4799.

Note that the storage location or locations for storing the master mode bit may also include space for storing one or more addresses in an appropriate format for the bus interface logic. The one or more addresses may be used by the bus interface logics to provide data to and from only those addresses, only within the address range defined by those addresses, or to exclude data from or to those addresses or the address range the addresses define. The crypto-processor or security hardware may store the one or more addresses, or the crypto-processor or security hardware may indicate to the bus interface logic or logics to store the addresses themselves.

Turning now to FIG. 37, a flowchart of an embodiment of a method 4800 for operating in a master mode outside the operating system is illustrated.
The master mode operation may advantageously allow for user authentication, such as via a biometric device or a smart card reader, without the operating system, or a program running under the operating system, snooping on the authentication data stream.

The method 4800 shown in FIG. 37 includes transmitting a master mode signal to one or more bus interface logics or other devices that include a master mode register, in block 4805. The method 4800 also includes setting a master mode bit in the master mode register of each of the one or more bus interface logics or other devices that include the master mode register to establish a secure transmission channel between the master mode logic and the data input device, in block 4810. The master mode logic and the data input device exchange data outside the operating system of the computer system through the bus interface logics or other devices that include the master mode register, in block 4815.

The master mode logic flushes, or signals the bus interface logics or other devices that include the master mode register to flush, the buffers of the bus interface logics or other devices that include the master mode register after concluding the data transmissions, in block 4820. The master mode logic finally signals the bus interface logics or other devices that include the master mode register to reset the master mode bits after flushing the buffers of the bus interface logics or other devices that include the master mode register so that the operating system can again access the bus interface logics or other devices that include the master mode register, in block 4825.

As used herein, operating outside the operating system means that programs running under the operating system are unable to access the bus interface logics or other devices including a master mode register when the master mode bit is set.
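The master-mode sequence of method 4800 (set the bits, exchange data outside the operating system, flush the buffers, reset the bits) can be sketched as follows; the class and function names are illustrative, not taken from the description.

```python
class BusInterfaceLogic:
    # Bus interface logic with a master mode register (FIG. 36 sketch).
    def __init__(self):
        self.master_mode = False   # the master mode bit in register 4799
        self.buffer = []

    def os_read(self):
        # The operating system is locked out while the bit is set.
        return None if self.master_mode else list(self.buffer)

def secure_read(bus_logics, input_device):
    for b in bus_logics:
        b.master_mode = True       # blocks 4805-4810: set master mode bits
    data = input_device()          # block 4815: exchange outside the OS
    for b in bus_logics:
        b.buffer.clear()           # block 4820: flush the buffers
        b.master_mode = False      # block 4825: restore OS access
    return data

logics = [BusInterfaceLogic(), BusInterfaceLogic()]
reading = secure_read(logics, lambda: b"smart-card-data")
```

Flushing before the reset matters: it ensures no unencrypted authentication data remains in the buffers once the operating system regains access.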
This may advantageously allow for a program running under the operating system to request the crypto-processor or other master device including the master mode logic to perform a secure data read. The master mode logic is configured to read secure data from an input device such as a biometric device, a smart card reader, a signature verification reader, or a keyboard. As described herein, the biometric device may measure any one or more of any number of physiological and/or behavioral features, including but not limited to fingerprints, hand geometry, voice prints, retinal scans, facial scans, body odor, ear shape, DNA profile, keystroke dynamics, and vein checking.

Turning now to FIGS. 38A and 38B, flowcharts of embodiments of methods 4900A and 4900B for booting a computer system including authentication via the master mode logic are shown. In FIG. 38A, the crypto-processor is used to control the master mode logic, while in FIG. 38B, the security hardware is used to control the master mode logic.

In FIG. 38A, the processor executes BIOS code instructions from SMM space, in 4920. After optionally accessing the security hardware, in 4930, the method 4900A requests authentication from the crypto-processor, preferably using the master mode logic, in 4935A. The method 4900A places the bus interface logics in master mode, in 4938. The bus interface logics would typically be between the crypto-processor and the authentication device. The method 4900A receives the authentication data while the bus interface logics are in master mode, in 4940. The method 4900A exits master mode and flushes the buffers of the bus interface logics, in 4942. The method 4900A next verifies the authentication data, in 4944. Verifying the authentication data may include the crypto-processor providing an indication of the authentication data to a remote security device. If the authentication data are verified in 4948, then the method 4900A continues the boot process, in 4990.
If the authentication data are not verified in 4948, then the method 4900A returns to 4935A and again requests authentication.

In FIG. 38B, the processor executes BIOS code instructions from SMM space, in 4920. After optionally accessing the security hardware, in 4930, and optionally entering a BIOS management mode, in 4932, the method 4900B requests authentication from the security hardware, using the master mode logic, in 4935B. The method 4900B places the bus interface logics in master mode, in 4938. The bus interface logics would typically be between the security hardware, e.g. the south bridge, and the authentication device. The method 4900B receives the authentication data while the bus interface logics are in master mode, in 4940. The method 4900B exits master mode and flushes the buffers of the bus interface logics, in 4942. The method 4900B next verifies the authentication data, in 4944. Verifying the authentication data may include the security hardware providing an indication of the authentication data to a remote security device. If the authentication data are verified in 4948, then the method 4900B continues the boot process, in 4990. If the authentication data are not verified in 4948, then the method 4900B returns to 4935B and again requests authentication.

Note that the relative position of steps of the methods 4900A and 4900B in the boot process (or sequence), such as shown in FIG. 1A, would typically be prior to step 152. The relative position of various steps of the methods 4900A and 4900B in the boot process may also be between steps 1632 and 1650 of FIGS. 16A and 16B. Various BIOS code segments may be necessary for correct response of various devices in the computer system, such as the south bridge and authentication devices coupled thereto.

Turning now to FIGS.
39A, 39B, and 39C, block diagrams of embodiments of systems 5000A, 5000B, and 5000C for securing a device, a computer subsystem, and/or a computer system using timers to enforce periodic authentication are illustrated. In FIG. 39A, the system 5000A includes each of a computer system 5005, a computer subsystem 5020, and a device 5040 as well as a network security authenticator 5070. In FIG. 39B, the system 5000B includes a portable computer 5003 coupled to a server 5004 for authentication. In FIG. 39C, the system 5000C includes two computer systems 5003A and 5003B coupled to the server 5004 including the network security authenticator 5070.

In FIG. 39A, the system 5000A, as shown, includes the computer system 5005 coupled to the network security authenticator 5070 through a network 5065. The computer system 5005 includes logic 5007, a timer 5009, a security authenticator 5010, and the computer subsystem 5020. The computer subsystem 5020 includes logic 5027, a timer 5029, a security authenticator 5030, and the device 5040. The device 5040 includes logic 5047 and a timer 5049.

In one embodiment, the device 5040 authenticates to the computer subsystem 5020, using the security authenticator 5030, and the logic 5047 sets and monitors the timer 5049. In another embodiment, the device 5040 authenticates to the computer system 5005, using the security authenticator 5010, and the logic 5047 sets and monitors the timer 5049. In still another embodiment, the device 5040 authenticates to the network security authenticator 5070 over the network 5065, and the logic 5047 sets and monitors the timer 5049.

In one embodiment, the computer subsystem 5020 authenticates to the computer system 5005, using the security authenticator 5010, and the logic 5027 sets and monitors the timer 5029. In another embodiment, the computer subsystem 5020 authenticates to the network security authenticator 5070 over the network 5065, and the logic 5027 sets and monitors the timer 5029.
In another embodiment, the computer system 5005 authenticates to the network security authenticator 5070 over the network 5065, and the logic 5007 sets and monitors the timer 5009. Note that not all of these embodiments are mutually exclusive.

In FIG. 39B, the system 5000B includes the portable computer 5003 coupled over a remote connection to the server 5004. The operations of the system 5000B are described with reference to FIG. 40B below. The portable computer 5003 may include the logic 5007 and the timer 5009 shown in FIG. 39A. The server 5004 may include the network security authenticator 5070.

In FIG. 39C, the system 5000C includes two computer systems 5003A and 5003B coupled over the network 5065 to the server 5004 including the network security authenticator 5070. The computer system 5003A includes a south bridge 330G that includes security hardware 370. The security hardware 370, as shown, includes the logic 5047 and the timer 5049. The computer system 5003B includes a crypto-processor 370, in place of the logic 5047, coupled to the timer 5049. FIG. 39C illustrates that the security hardware 370 or the crypto-processor 370 may control the timer 5049 and the interactions with the network security authenticator 5070.

Turning now to FIGS. 40A and 40B, flowcharts of embodiments of methods 5100A and 5100B for securing a device, a computer subsystem, or a computer system, such as a portable computer, by limiting use to finite periods of time between successive authorizations are illustrated. The methods 5100A and 5100B may advantageously discourage theft of the device, the computer subsystem, or the computer system, as its usefulness is limited outside of or without its authorizing computer subsystem, computer system, or network security connections. While the method 5100A of FIG. 40A is a general method applicable to any of a device, a computer subsystem, or a computer system, the method 5100B of FIG.
40B is an example of a specific method applicable to a portable computer adapted to communicate over a computer network.

In FIG. 40A, the method 5100A authenticates the device, the computer subsystem, or the computer system to the computer subsystem, the computer system, or the network security device, in 5105. Typically, the device will authenticate to the computer subsystem or the computer system, while the computer subsystem will authenticate to the computer system or the network security device, and the computer system will authenticate to the network security device. Deviations from this typical behavior may include a device authenticating to the network security device, or the computer system authenticating to another computer system.

The method 5100A sets a starting value on a timer in response to successfully authenticating the device, the computer subsystem, or the computer system, in 5110. The timer is updated in a periodic fashion, in 5115. The method 5100A checks in 5120 if the timer has expired. If the timer has not expired, in 5120, then the method 5100A continues the normal operation of the device, the computer subsystem, or the computer system, in 5125, and returns to 5115. If the timer has expired, in 5120, then the method 5100A attempts to re-authenticate the device, the computer subsystem, or the computer system to the appropriate master, in 5130. If the re-authentication in 5130 is successful, in 5135, then the method 5100A returns to 5110 and resets the starting value on the timer.
If the re-authentication in 5130 is not successful, in 5135, then the method 5100A shuts down the device, the computer subsystem, or the computer system until the device, the computer subsystem, or the computer system can be re-authenticated, such as during the boot process.

Note that the timer may be implemented as a countdown timer running from a set value down to the expired value of zero, or as a counting timer running from zero up to a predetermined value as the expired value. The set value or the predetermined value may be a constant or may be randomly selected. The set value or the predetermined value may also vary according to a predetermined algorithm, if desired. Updating the timer may occur with each increment of the system clock or a local clock, or only while the device, the computer subsystem, or the computer system is operating.

The method 5100B establishes a network connection to the network security device (or system), in 5104. The method 5100B authenticates a portable computer to the network security system, in 5106. The authentication may occur during the boot process. The method 5100B sets a starting value on a timer in response to successfully authenticating the portable computer, in 5110. The timer is updated in a periodic fashion, in 5115. The method 5100B checks in 5120 if the timer has expired. If the timer has not expired, in 5120, then the method 5100B continues the normal operation of the device, the computer subsystem, or the computer system, in 5126, and returns to 5115. If the timer has expired, in 5120, then the method 5100B attempts to establish a network connection to the network security system, in 5129, and to re-authenticate the portable computer to the network security system, in 5131. If the re-authentication, in 5131, is successful, in 5135, then the method 5100B returns to 5110 and resets the starting value on the timer.
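The timer-enforced re-authentication loop of methods 5100A and 5100B, together with the countdown-timer notes above, can be sketched in software. This is a minimal illustrative sketch only; the class name, function names, and values below are assumptions, not part of the disclosure:

```python
# Illustrative sketch of timer-enforced periodic re-authentication, in the
# spirit of methods 5100A/5100B. All names and values are hypothetical.
import random


class ReauthTimer:
    """Countdown timer whose starting value may be constant or randomly
    selected, as the disclosure permits."""

    def __init__(self, start_value=100, randomize=False):
        self._start = start_value
        self._randomize = randomize
        self.value = 0

    def reset(self):
        # Set the starting value in response to a successful authentication.
        self.value = random.randint(1, self._start) if self._randomize else self._start

    def tick(self):
        # Update the timer in a periodic fashion (e.g. each clock increment).
        if self.value > 0:
            self.value -= 1

    @property
    def expired(self):
        return self.value == 0


def run_device(authenticate, timer, max_ticks):
    """Operate normally while the timer runs; re-authenticate on expiry.

    Returns "running" if the device stayed authorized for max_ticks, or
    "shutdown" if an authentication attempt failed (the device then halts
    until it can be re-authenticated, e.g. during the boot process).
    """
    if not authenticate():          # initial authentication to the master
        return "shutdown"
    timer.reset()
    for _ in range(max_ticks):
        timer.tick()
        if timer.expired:
            if authenticate():      # attempt to re-authenticate
                timer.reset()       # success: restart the countdown
            else:
                return "shutdown"   # failure: halt normal operation
    return "running"
```

For example, a device whose master grants two authentications and then refuses would operate through two timer periods and then shut down, mirroring the limited-usefulness behavior these methods are intended to enforce.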
If the re-authentication, in 5131, is not successful, in 5135, then the method 5100B shuts down the portable computer and requires authentication during the boot process, in 5141, before normal operations of the portable computer are allowed to resume.

Note that the device 5040 may represent any device 5040 in the computer system 5003 or 5005. The computer subsystem 5020 may represent any computer subsystem 5020 in the computer system 5003 or 5005. Also note that code for the authentication and timer settings may be stored in the security hardware 370 or the secure storage shown elsewhere in this disclosure, such as the BIOS ROM 365, the SMM ROM 520, the extended BIOS 555, or the protected storage 605.

Turning now to FIG. 41, a flowchart of an embodiment of a method 5200 for booting a computer system including initializing a timer to enforce periodic authentication and authorization is shown. The method includes the processor executing BIOS code instructions from SMM space, in 5220. The method 5200 may also access the security hardware, in 5230. The method 5200 may also optionally enter BIOS management mode, in 5232. The method 5200 authenticates the computer system through the security hardware, in 5235. Authentication data are provided to the security hardware, in 5240. If the authentication is not successful, in 5248, then the method 5200 shuts down the computer system until successful authentication is provided, in 5195. If the authentication is successful, in 5248, then the method 5200 sets a starting value on the timer, in response to successfully authenticating, in 5280. The method 5200 then continues the boot process, in 5290.

Turning now to FIGS. 42A and 42B, block diagrams of embodiments of the system management registers 470A and 470B are illustrated. In the embodiment shown in FIG. 42A, the secure system management registers 470A include one or more ACPI lock bits 5310A through 5310N to secure various ACPI or related functions against unauthorized changes.
The ACPI lock bits 5310, once set, prevent changes to the ACPI or related functions. A request to change one of the ACPI or related functions requires that a respective ACPI lock bit 5310N be released before the respective one of the ACPI or related functions is changed.

In the embodiment shown in FIG. 42B, the secure system management registers 470B include one or more ACPI range registers 5320 and/or one or more ACPI rule registers 5330. Each of the one or more ACPI range registers 5320 may be configured to store a value or values that define allowable or preferred values for a specific ACPI or related function. Each of the one or more ACPI rule registers 5330 may be configured to store part or all of a rule for determining if a change to one of the ACPI or related functions should be allowed. Each of the one or more ACPI rule registers 5330 may also be configured to store code for evaluating the rules for determining if a change to one of the ACPI or related functions should be allowed, or for comparing a requested value or change to the value or values that define allowable or preferred values for a specific ACPI or related function stored in one of the ACPI range registers 5320.

Examples of ACPI or related functions include changing a voltage, changing a frequency, turning on or off a cooling fan, and a remote reset of the computer system. It is contemplated that other ACPI or related functions may also be used. It is noted that the voltage may be a processor voltage, the frequency may be a processor operating frequency or a bus or interface frequency, and the cooling fan may be operable or intended to cool any component in the computer system, including devices or subsystems not described herein, such as a power supply. It is noted that in various embodiments, the SMM access filters 410, such as shown in FIG.
5A, may include address range traps for directing access requests to evaluate the contents of the ACPI management registers 470A or 470B.

For the purposes of this disclosure, references to ROM are to be construed as also applying to flash memory and other substantially non-volatile memory types. Note that while the methods of the present invention disclosed herein have been illustrated as flowcharts, various elements of the flowcharts may be omitted or performed in different order in various embodiments. Note also that the methods of the present invention disclosed herein admit to variations in implementation.

Some aspects of the invention as disclosed above may be implemented in hardware or software. Thus, some portions of the detailed descriptions herein are consequently presented in terms of a hardware-implemented process, and some portions of the detailed descriptions herein are consequently presented in terms of a software-implemented process involving symbolic representations of operations on data bits within a memory of a computing system or computing device. These descriptions and representations are the means used by those in the art to convey most effectively the substance of their work to others skilled in the art using both hardware and software. The process and operation of both require physical manipulations of physical quantities. In software, usually, though not necessarily, these quantities take the form of electrical, magnetic, or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
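Returning to the ACPI management registers of FIGS. 42A and 42B, the gating of a requested change behind a lock bit and a range register can be sketched as follows. This is a software illustration under assumed names and values; the actual registers are hardware, and their layout is not specified by the disclosure:

```python
# Hypothetical sketch of ACPI lock-bit and range-register gating, in the
# spirit of FIGS. 42A and 42B. Names, layout, and values are assumptions.

class AcpiManagementRegisters:
    def __init__(self):
        self.lock_bits = {}        # function name -> True if locked
        self.range_registers = {}  # function name -> (low, high) allowed values

    def set_lock(self, function, locked=True):
        self.lock_bits[function] = locked

    def set_range(self, function, low, high):
        self.range_registers[function] = (low, high)

    def request_change(self, function, value):
        """Allow a change only if the respective lock bit is released and
        the requested value falls inside the allowed range, if any."""
        if self.lock_bits.get(function, False):
            return False  # lock bit set: change denied until released
        low, high = self.range_registers.get(
            function, (float("-inf"), float("inf")))
        return low <= value <= high
```

For instance, a request to change a locked processor voltage would be denied, while releasing the lock bit and requesting a value inside the allowed range would be permitted.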
Unless specifically stated otherwise, or as may be apparent, throughout the present disclosure these descriptions refer to the action and processes of an electronic device that manipulates and transforms data represented as physical (electronic, magnetic, or optical) quantities within some electronic device's storage into other data similarly represented as physical quantities within the storage, or in transmission or display devices. Terms denoting such descriptions include, without limitation, the terms "processing," "computing," "calculating," "determining," "displaying," and the like.

Note also that the software-implemented aspects of the invention are typically encoded on some form of program storage medium or implemented over some type of transmission medium. The program storage medium may be magnetic (e.g., a floppy disk or a hard drive) or optical (e.g., a compact disk read only memory, or "CD ROM"), and may be read only or random access. Similarly, the transmission medium may be twisted wire pairs, coaxial cable, optical fiber, or some other suitable transmission medium known to the art. The invention is not limited by these aspects of any given implementation.

The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified, and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
A passive device having a portion in the package substrate (372, 582) and a portion in the system board (374, 584) such that the portions of the device are electromagnetically coupled. A transformer including inductors in the package substrate (372) and system board (374) electromagnetically coupled across a space between the substrate and board (370) that is surrounded by solder balls (320, 322) coupling the substrate and board. A capacitor including plates in the substrate (582) and board (584) electromagnetically coupled across a space between the substrate and board (580) that is surrounded by solder balls (520, 522) coupling the substrate and board. A core material (586, 376) can at least partially fill the space between the substrate and board. The solder balls surrounding the space can be coupled to ground. Metal shielding can be put in the substrate and/or board surrounding the device. The metal shielding can be coupled to the solder balls. The metal shielding can be coupled to ground.
CLAIMS

We claim:

1. A passive device comprising: a first portion of the passive device in a package substrate; a second portion of the passive device in a system board, the second portion of the passive device being electromagnetically coupled to the first portion of the passive device across a coupling space between the package substrate and the system board; wherein the coupling space between the package substrate and the system board is surrounded by a plurality of solder balls that couple the package substrate to the system board.

2. The passive device of claim 1, further comprising a core material that at least partially fills the coupling space between the package substrate and the system board.

3. The passive device of claim 1, wherein the plurality of solder balls surrounding the coupling space between the package substrate and the system board are coupled to ground.

4. The passive device of claim 1, further comprising metal shielding in the package substrate to reduce electromagnetic interference from the passive device.

5. The passive device of claim 1, further comprising metal shielding in the system board to reduce electromagnetic interference from the passive device.

6. The passive device of claim 1, further comprising metal shielding in the package substrate and metal shielding in the system board surrounding the passive device.

7. The passive device of claim 6, wherein the metal shielding in the package substrate and the metal shielding in the system board are coupled to ground.

8. The passive device of claim 6, wherein the plurality of solder balls surrounding the coupling space between the package substrate and the system board are coupled to the metal shielding in the package substrate and the metal shielding in the system board.

9.
A transformer comprising: a first inductor in a package substrate; a second inductor in a system board, the second inductor being electromagnetically coupled to the first inductor across a coupling space between the package substrate and the system board; wherein the coupling space between the package substrate and the system board is surrounded by a plurality of solder balls that couple the package substrate to the system board.

10. The transformer of claim 9, further comprising a ferromagnetic material that at least partially fills the coupling space between the package substrate and the system board.

11. The transformer of claim 9, wherein the first inductor and the second inductor are coupled to one of the plurality of solder balls surrounding the coupling space between the package substrate and the system board, and the one of the plurality of solder balls is coupled to ground.

12. The transformer of claim 9, wherein the plurality of solder balls surrounding the coupling space between the package substrate and the system board are coupled to ground.

13. The transformer of claim 9, further comprising metal shielding in the package substrate and metal shielding in the system board surrounding the transformer.

14. The transformer of claim 13, wherein the metal shielding in the package substrate and the metal shielding in the system board are coupled to ground.

15. The transformer of claim 9, wherein the first inductor is coupled to a radio-frequency circuit and the second inductor is coupled to an antenna.

16. The transformer of claim 15, wherein the transformer is adapted to be the only passive circuit in the path between the radio-frequency circuit and the antenna.

18.
A capacitor comprising: a first capacitive plate in a package substrate; a second capacitive plate in a system board, the second capacitive plate being electromagnetically coupled to the first capacitive plate across a coupling space between the package substrate and the system board; wherein the coupling space between the package substrate and the system board is surrounded by a plurality of solder balls that couple the package substrate to the system board.

19. The capacitor of claim 18, further comprising a dielectric material that at least partially fills the coupling space between the package substrate and the system board.

20. The capacitor of claim 18, wherein the plurality of solder balls surrounding the coupling space between the package substrate and the system board are coupled to ground.

21. The capacitor of claim 18, further comprising metal shielding in the package substrate and metal shielding in the system board surrounding the capacitor.

22. The capacitor of claim 21, wherein the metal shielding in the package substrate and the metal shielding in the system board are coupled to ground.
PASSIVE COUPLER BETWEEN PACKAGE SUBSTRATE AND SYSTEM BOARD

FIELD OF DISCLOSURE

[0001] The present disclosure relates generally to integrated circuit packaging, and more specifically to placing portions of a passive device in each of the package substrate and the system board such that the portions of the device are connected by electromagnetic coupling.

BACKGROUND

[0002] Impedance matching between a signal processing system and an antenna can be an issue in radio-frequency (RF) integrated circuit applications, like power amplifiers and high frequency transceivers. Impedance matching is accomplished by designing the input impedance of an electrical load to be equal to the output impedance of the signal source to which it is ultimately connected. Impedance matching is usually done to maximize the power transfer and minimize reflections from the load. Lack of impedance matching can cause undesirable power losses, thermal heating, echoes and other issues.

[0003] Additional elements are often required inside or outside of the package to obtain the desired matching. External discrete passives and integrated passive devices external or internal to the package can be used as matching elements. These additional elements can consume valuable package area or system board area, which can add to the system cost and can also add to the size of the system package. However, it is desirable to minimize both the system cost and size.

[0004] Capacitive coupling can be an issue when it is desirable to remove the constant DC components of a signal while transmitting the varying AC components. The resulting signals are sometimes called DC balanced signals. Capacitive coupling is the transfer of energy within an electrical network by means of the capacitance between circuit nodes, and is usually done by placing a capacitor in series in the signal path. The capacitor allows the AC component of the signal to pass across the capacitor but blocks the DC component of the signal.
The resulting DC-balanced signals can be useful in communications systems, since they can be used over AC-coupled electrical connections to avoid voltage imbalance problems and charge accumulation between connected systems or components.

[0005] It would be desirable to have a methodology to implement electromagnetic coupling in integrated circuit design, such as impedance matching or capacitive coupling, that has a minimal impact on system cost, system size and other desirable factors.

SUMMARY

[0006] A novel electromagnetic coupling method is disclosed that makes use of the available space on an integrated circuit package to implement passive devices, such as a transformer or a capacitor, as part of the package construction process. This method can use existing integrated circuit manufacturing processes. The transformer or capacitor can be used for purposes other than impedance matching or capacitive coupling, such as voltage conversion and other applications.

[0007] An embodiment of the passive device can include a first portion of the passive device in a package substrate, and a second portion of the passive device in a system board, where the second portion of the passive device is electromagnetically coupled to the first portion of the passive device across a coupling space between the package substrate and the system board. The coupling space between the package substrate and the system board is surrounded by a plurality of solder balls that couple the package substrate to the system board. The passive device can also include a core material that at least partially fills the coupling space between the package substrate and the system board. The plurality of solder balls surrounding the coupling space between the package substrate and the system board can be coupled to ground.

[0008] Embodiments of the passive device can also include metal shielding in the package substrate and/or the system board to reduce electromagnetic interference from the passive device.
The plurality of solder balls surrounding the coupling space between the package substrate and the system board can be coupled to the metal shielding in the package substrate and/or the metal shielding in the system board. All or parts of the metal shielding can be coupled to ground.

[0009] An embodiment of the passive device can be a transformer that includes a first inductor in a package substrate and a second inductor in a system board, where the second inductor is electromagnetically coupled to the first inductor across a coupling space between the package substrate and the system board. The coupling space between the package substrate and the system board is surrounded by a plurality of solder balls that couple the package substrate to the system board. A ferromagnetic material can at least partially fill the coupling space between the package substrate and the system board. The first inductor and the second inductor can be coupled to one of the plurality of solder balls surrounding the coupling space between the package substrate and the system board, where that one of the plurality of solder balls is coupled to ground. The plurality of solder balls surrounding the coupling space between the package substrate and the system board can be coupled to ground. The passive device can also include metal shielding in the package substrate and/or in the system board surrounding the transformer. All or part of the metal shielding can be coupled to ground. The first inductor can be coupled to a radio-frequency circuit and the second inductor can be coupled to an antenna. The transformer can be adapted to be the only passive circuit in the path between the radio-frequency circuit and the antenna.
[0010] An embodiment of the passive device can be a capacitor that includes a first capacitive plate in a package substrate and a second capacitive plate in a system board, where the second capacitive plate is electromagnetically coupled to the first capacitive plate across a coupling space between the package substrate and the system board. The coupling space between the package substrate and the system board is surrounded by a plurality of solder balls that couple the package substrate to the system board. The capacitor can also include dielectric material that at least partially fills the coupling space between the package substrate and the system board. The plurality of solder balls surrounding the coupling space between the package substrate and the system board can be coupled to ground. The passive device can also include shielding in the package substrate and/or the system board surrounding the capacitor. All or part of the metal shielding can be coupled to ground.

[0011] The passive device can include metal shielding in the package substrate on the opposite side of the first portion of the passive device from the second portion of the passive device and/or metal shielding in the system board on the opposite side of the second portion of the passive device from the first portion of the passive device. The metal shielding in the package substrate and the metal shielding in the system board can be coupled to ground. The plurality of solder balls surrounding the coupling space between the package substrate and the system board can be coupled to the metal shielding in the package substrate and the metal shielding in the system board.

[0012] A transformer is disclosed that includes a first inductor in a package substrate and a second inductor in a system board, where the second inductor is electromagnetically coupled to the first inductor across a coupling space between the package substrate and the system board.
The coupling space between the package substrate and the system board is surrounded by a plurality of solder balls that couple the package substrate to the system board. The transformer can also include a ferromagnetic material that at least partially fills the coupling space between the package substrate and the system board. The first and second inductors can be coupled to one of the plurality of solder balls surrounding the coupling space, where that one of the plurality of solder balls is coupled to ground. The plurality of solder balls surrounding the coupling space can be coupled to ground.

[0013] The transformer can also include metal shielding in the package substrate and/or the system board surrounding the transformer. The metal shielding in the package substrate and/or the metal shielding in the system board can be coupled to ground.

[0014] The transformer can be part of a system wherein the first inductor is coupled to a radio-frequency circuit and the second inductor is coupled to an antenna. The transformer can be adapted to be the only passive circuit in the path between the radio-frequency circuit and the antenna.

[0015] A capacitor is disclosed that includes a first capacitive plate in a package substrate and a second capacitive plate in a system board, where the second capacitive plate is electromagnetically coupled to the first capacitive plate across a coupling space between the package substrate and the system board. The coupling space between the package substrate and the system board is surrounded by a plurality of solder balls that couple the package substrate to the system board. The capacitor can also include a dielectric material that at least partially fills the coupling space. The plurality of solder balls surrounding the coupling space can be coupled to ground.

[0016] The capacitor can also include metal shielding in the package substrate and metal shielding in the system board surrounding the capacitor.
The metal shielding in the package substrate and the metal shielding in the system board can be coupled to ground.

[0017] For a more complete understanding of the present disclosure, reference is now made to the following detailed description and the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0018] Fig. 1 is an exemplary radio-frequency (RF) integrated circuit system;

[0019] Fig. 2 is an exemplary diagram of a transformer;

[0020] Fig. 3 is an exemplary implementation of a transformer between a substrate and a system board;

[0021] Fig. 4 is an exemplary RF integrated circuit system utilizing a transformer implemented between a substrate and a system board;

[0022] Fig. 5 is an exemplary implementation of a capacitor between a substrate and a system board;

[0023] Fig. 6 is a vertical cross-section of an exemplary implementation of a passive coupler between a substrate and a system board;

[0024] Fig. 7 is a horizontal cross-section between the substrate and the system board of the exemplary implementation of the passive coupler in Figure 6;

[0025] Fig. 8 is a vertical cross-section of an exemplary implementation of a passive coupler between a substrate and a system board with shielding layers;

[0026] Fig. 9 is a vertical cross-section of a perspective view of the substrate and system board of the exemplary implementation of the passive coupler in Figure 8 showing shielding conductors in the substrate and system board;

[0027] Fig. 10 is a perspective view of the area surrounding the passive coupler in Figure 8 with invisible substrate and system board to show underlying structure; and

[0028] Fig. 11 is a block diagram showing an exemplary wireless communication system in which an electromagnetically coupled device between the package substrate and the system board may be advantageously employed.

DETAILED DESCRIPTION

[0029] Figure 1 shows an exemplary radio-frequency (RF) integrated circuit system 100.
The RF integrated circuit system 100 includes a system printed circuit board 110, a packaging substrate 130, a complementary metal-oxide semiconductor (CMOS) die 150 and an antenna 160.

[0030] The CMOS die 150 includes a plurality of micro-bumps 140 that couple the CMOS die 150 to the packaging substrate 130. The CMOS die 150 also includes an RF power amplifier 152 that is coupled to the antenna 160. The power amplifier 152 is implemented in the metal layers and silicon substrate of the CMOS die 150. Figure 1 shows the power amplifier 152 coupled to a metal layer 154 which is coupled to a through-silicon via (TSV) 156 which is coupled to one of the plurality of micro-bumps 140. Note that the power amplifier 152 could be implemented at the top (as shown in Figure 1) or at the bottom of the CMOS die 150. For example, in a flip-chip arrangement, the CMOS die 150 would be flipped so that the power amplifier 152 would be at the bottom (closest to the package substrate 130); thus the power amplifier 152 could be coupled to a metal layer that is coupled to one of the plurality of micro-bumps without using a TSV.

[0031] The packaging substrate 130 includes a plurality of solder balls 120 that couple the packaging substrate 130 to the system board 110. The packaging substrate 130 can also include metal layers and interconnects coupling various circuits and elements. Figure 1 shows the micro-bump 140, to which the power amplifier 152 is coupled, coupled to an interconnect 132 which is coupled to one of the plurality of solder balls 120.

[0032] The system board 110 can include a plurality of metal layers and signal paths to couple various circuits and elements. Figure 1 shows the solder ball 120, to which the power amplifier 152 is coupled, coupled to a first conductive trace 112 which is coupled to a board inductor 114, which is coupled to a board capacitor 116, which is coupled to a second conductive trace 118, which is coupled to the antenna 160.
The board inductor 114 and the board capacitor 116 can be used as matching components to match the impedance along the path between the power amplifier 152 and the antenna 160.

[0033] Thus, the signal path between the power amplifier 152 and the antenna 160 in the exemplary RF integrated circuit system 100 shown in Figure 1 includes the following couplings: the power amplifier 152 on the CMOS die 150 is coupled to the metal layer 154, which is coupled to the through silicon via (TSV) 156, which is coupled to one of the plurality of micro-bumps 140 coupling the CMOS die 150 to the packaging substrate 130. The micro-bump 140 to which the power amplifier 152 is coupled, is coupled to the interconnect 132, which is coupled to one of the plurality of solder balls 120 coupling the packaging substrate 130 to the system board 110. The solder ball 120 to which the power amplifier 152 is coupled, is coupled to the first conductive trace 112, which is coupled to the board inductor 114 and the board capacitor 116, which is coupled to a second conductive trace 118, which is coupled to the antenna 160. The board inductor 114 and the board capacitor 116 can be used as matching components to match the impedance between the antenna 160 and the circuit path leading to the power amplifier 152. Additional matching components can be implemented along the signal path between the power amplifier 152 and the antenna 160.

[0034] Figure 1 also shows a board-substrate distance 122 between the top of the system board 110 and the bottom of the packaging substrate 130, and a substrate-die distance 142 between the top of the packaging substrate 130 and the bottom of the CMOS die 150. A typical measurement for the board-substrate distance 122 is about 200 μm, and for the substrate-die distance 142 is about 50 μm.
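The matching role played by the board inductor 114 and board capacitor 116 can be illustrated with a first-order L-network calculation. The sketch below uses assumed example values (a 5 Ω amplifier output, a 50 Ω antenna, 2.4 GHz); none of these figures come from the disclosure, and a real design would also account for parasitics of the solder balls and traces.

```python
import math

def l_match(r_source, r_load, freq_hz):
    """Series-L / shunt-C lowpass L-network matching a smaller source
    resistance to a larger load resistance (illustrative only; the
    component values are not taken from the disclosure)."""
    if r_source >= r_load:
        raise ValueError("this topology assumes r_source < r_load")
    q = math.sqrt(r_load / r_source - 1)   # network quality factor
    w = 2 * math.pi * freq_hz
    l_series = q * r_source / w            # series inductor, henries
    c_shunt = q / (w * r_load)             # shunt capacitor, farads
    return l_series, c_shunt

# Example: match a 5-ohm amplifier output to a 50-ohm antenna at 2.4 GHz
L, C = l_match(5.0, 50.0, 2.4e9)
print(f"L = {L * 1e9:.2f} nH, C = {C * 1e12:.2f} pF")
```

With these assumed values the network Q is 3, giving an inductor of roughly 1 nH and a capacitor of roughly 4 pF, both small enough to motivate embedding such elements in the board or the board-substrate gap.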
The board-substrate distance 122 between the top of the system board 110 and the bottom of the packaging substrate 130 is large enough to be used for situating passive components, such as a transformer or capacitor, that perform matching or other desired functions in the integrated circuit system 100.

[0035] Figure 2 is an exemplary diagram of a transformer 200. The transformer 200 includes a first inductor 210 and a second inductor 220 separated by a gap between the two inductors 210, 220. An inductor can be a conductor shaped as a coil which includes one or more "turns." The turns concentrate the magnetic field flux induced by current flowing through each turn of the conductor in an "inductive" area within the inductor turns. The number of turns and the size of the turns affect the inductance. In this exemplary embodiment, the first inductor 210 has N1 turns and is coupled between a first node 212 and ground. The second inductor 220 has N2 turns and is coupled between a second node 222 and ground.

[0036] Two or more inductors which have coupled magnetic flux form a transformer, which is a device that transfers electrical energy from one circuit to another. A varying current in the first inductor 210 will induce a varying voltage in the second inductor 220. A signal source can be coupled to the first node 212 and a load can be coupled to the second node 222. A varying current from the signal source flowing through the first inductor 210 will, through inductive coupling, induce a varying voltage in the second inductor 220 which will cause a current to flow through the second inductor 220 and electrical energy to flow from the source circuit coupled at the first node 212 through the transformer 200 to the load coupled at the second node 222.

[0037] A transformer can be used for impedance matching.
The relationship between the impedance Z1 at the first node 212 and the impedance Z2 at the second node 222 is Z2 = (N2/N1)^2 * Z1, where N1 and N2 are the number of turns in the first inductor 210 and the second inductor 220, respectively. For example, if N1 = 1 turn, N2 = 4 turns and Z1 = 50 Ω, then Z2 = 16 * 50 Ω = 800 Ω. Thus, in this example, the transformer 200 could match a 50 Ω source output impedance to an 800 Ω load input impedance.

[0038] Figure 3 is an exemplary implementation of an RF integrated circuit system 300 that includes a transformer 370 between a substrate 330 and a system board 310. The transformer 370 includes a substrate inductor 372 and a board inductor 374. A core material 376, such as a ferromagnetic material, can be placed between the substrate inductor 372 and the board inductor 374 in the coupling space between the substrate 330 and the system board 310. Some exemplary ferromagnetic materials include nickel, cobalt, iron and mumetal. Alternatively, the coupling space between the substrate inductor 372 and the board inductor 374 that is between the substrate 330 and the system board 310 can be left open and the inductive coupling can be across the resulting gap. One or more solder bumps may be omitted in the array of solder bumps on the substrate 330 to leave room for the electromagnetic coupling between the substrate inductor 372 and the board inductor 374. If the implementation includes the core material 376, it can be placed in the coupling space between the substrate 330 and the system board 310 during ball mount or during board assembly of the package. The core 376 can include solderable areas for affixing the core 376 to the substrate 330 during the ball mount process, or to the substrate 330 and/or the system board 310 during board assembly of the package.

[0039] The exemplary RF system 300 includes an RF die 350 which sends signals to and/or receives signals from an antenna 360.
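The turns-ratio relation of paragraph [0037] can be checked numerically. The snippet below is a sketch of the ideal-transformer formula only; it ignores finite coupling and losses in a real implemented transformer.

```python
def transformed_impedance(z1_ohms, n1_turns, n2_turns):
    """Impedance seen at the secondary of an ideal transformer:
    Z2 = (N2/N1)**2 * Z1."""
    return (n2_turns / n1_turns) ** 2 * z1_ohms

# The example from the text: a 1:4 turns ratio steps 50 ohms up to 800 ohms
assert transformed_impedance(50.0, 1, 4) == 800.0

# The same ratio used in reverse steps an 800-ohm impedance back down to 50 ohms
assert transformed_impedance(800.0, 4, 1) == 50.0
```

The reverse-direction check shows why such a transformer works symmetrically: the same physical device matches a low source impedance to a high load, or a high source impedance to a low load, depending on which winding faces the source.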
The RF die 350 includes a plurality of micro-bumps coupling the RF die 350 to the substrate 330 of which one micro-bump 340 is shown. The substrate 330 includes a plurality of solder bumps coupling the substrate 330 to the system board 310, of which two solder bumps 320 and 322 are shown. The substrate inductor 372 is located in a substrate signal path 332 running through the substrate 330 between the micro-bump 340 and the solder ball 320. The board inductor 374 is located in a board signal path 312 running through the system board 310 between the solder bump 320 and the antenna 360. The solder bump 320 is coupled to ground.

[0040] Thus, the signal path between the RF die 350 and the antenna 360 in the exemplary RF integrated circuit system 300 shown in Figure 3 includes the following couplings: the RF die 350 is coupled to the micro-bump 340 which is one of the micro-bumps coupling the RF die 350 to the substrate 330. The micro-bump 340 is coupled to the substrate signal path 332 which includes the substrate inductor 372. The substrate inductor 372 is inductively coupled across the transformer 370 to the board inductor 374 which is in the board signal path 312 that is coupled to the antenna 360.

[0041] This embodiment integrates impedance matching between the RF die 350 and the antenna 360 into the package construction. Conventionally, implementing an inductor in the system board is relatively inexpensive, implementing an inductor in the packaging substrate is more expensive, and implementing an inductor in the die, such as CMOS dies, is even more expensive. Thus, placing the transformer between the system board and the packaging substrate uses existing board area, reduces or eliminates the need for additional or external matching elements, and reduces or eliminates the cost of placing matching elements in the die. The transformer can also be used in other applications; for example, voltage conversion.
[0042] Figure 3 also shows both of the solder balls 320 and 322 coupled to ground. Coupling the solder balls surrounding the transformer 370 to ground helps prevent the electromagnetic field in the transformer 370 from interfering with other circuits in the system 300. In an actual implementation, the solder balls are distributed in a two-dimensional array between the substrate 330 and the system board 310. The solder balls on all sides of the transformer 370 can be grounded to shield the rest of the package from the resulting electromagnetic field. If additional shielding in the substrate 330 or the system board 310 is desired, a metal layer or metal mesh can be implemented in the substrate 330 and/or the system board 310 to surround the top and bottom of the transformer 370. Even further shielding can be implemented by placing metal pillars in the substrate 330 and/or the system board 310 to block the electromagnetic field from the sides of the transformer 370 above and below the solder balls. The metal pillars can be coupled to the solder balls and to a metal layer or mesh in the substrate 330 and system board 310 to form a shielding cage around the transformer 370. The shielding cage can be coupled to ground. The spacing between "bars" in this shielding cage can be a function of the expected wavelength of the electromagnetic energy in the transformer. Any such shielding should take into account the space for the transformer inputs and outputs to be coupled to the desired circuitry.

[0043] Figure 4 shows an exemplary radio-frequency (RF) integrated circuit system 400, similar to the system 100 of Figure 1. The RF integrated circuit system 400 includes a system printed circuit board 410, a packaging substrate 430, a CMOS die 450 and an antenna 460. The RF system 400 also includes a transformer 470 similar to the transformer 370 of Figure 3.
The transformer 470 includes a substrate inductor (not shown) in the substrate 430, a board inductor (not shown) in the system board 410 and a transformer core between the substrate inductor and the board inductor.

[0044] The CMOS die 450 includes a plurality of micro-bumps that couple the CMOS die 450 to the packaging substrate 430. The CMOS die 450 also includes an RF power amplifier 452 that is coupled to the antenna 460. The packaging substrate 430 includes a plurality of solder balls that couple the packaging substrate 430 to the system board 410. The antenna 460 is coupled to the system board 410.

[0045] The power amplifier 452 is coupled to a metal layer 454 which is coupled to a through-silicon via (TSV) 456 which is coupled to a micro-bump 440 which is coupled to the packaging substrate 430. The micro-bump 440 is coupled to an interconnect 432 which is coupled to the substrate inductor of the transformer 470. The substrate inductor is inductively coupled to the board inductor of the transformer 470. The board inductor of the transformer 470 is coupled to a first conductive trace 412 which is coupled to a second board inductor 414, which is coupled to a board capacitor 416, which is coupled to a second conductive trace 418, which is coupled to the antenna 460. Depending on the properties of the transformer 470 and the desired level of matching, the second board inductor 414 and/or the board capacitor 416 can be omitted due to the matching provided by the transformer 470.

[0046] As explained above, the transformer 470 can be surrounded by metal shielding that leaves room for the transformer circuit connections, the metal shielding blocking electromagnetic interference between the transformer 470 and other circuitry of the system 400. The metal shielding can be left floating or can be coupled to ground.
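Paragraph [0042] notes that the spacing between "bars" of the shielding cage can be a function of the expected wavelength. A common electromagnetic-shielding rule of thumb, not stated in the disclosure, keeps shield apertures below roughly a tenth of the wavelength in the surrounding medium; the sketch below assumes that rule together with illustrative frequency and permittivity values.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def max_bar_spacing(freq_hz, eps_r=1.0, fraction=0.1):
    """Upper bound on shield-cage bar spacing as a fraction of the
    wavelength at the operating frequency.  The lambda/10 guideline
    (fraction=0.1) is an assumed rule of thumb, not a figure taken
    from the disclosure."""
    wavelength = C0 / (freq_hz * math.sqrt(eps_r))  # wavelength in medium
    return fraction * wavelength

# Example: a 2.4 GHz signal in a dielectric with relative permittivity ~4
spacing_m = max_bar_spacing(2.4e9, eps_r=4.0)
print(f"maximum bar spacing ~ {spacing_m * 1000:.1f} mm")
```

At these assumed values the bound is several millimeters, comfortably larger than typical solder-ball pitches, which is consistent with using the grounded solder-ball array itself as part of the cage.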
The metal shielding can include one or more of: the solder balls surrounding the transformer 470; metal layers or mesh in the substrate 430 above the substrate inductor of the transformer 470; metal layers or mesh in the system board 410 below the board inductor of the transformer 470; and vertical metal pillars in the substrate and/or system board surrounding the transformer 470. The metal pillars can couple the solder balls to the metal layers or mesh. Some exemplary shielding embodiments are explained in more detail below.

[0047] Figure 5 is an exemplary implementation of an integrated circuit system 500 that includes a capacitor 580 between a substrate 530 and a system board 510. The capacitor 580 includes a substrate plate 582 in the substrate 530, a board plate 584 in the system board 510 and may include a dielectric core 586 between the substrate plate 582 and the board plate 584 in the coupling space between the substrate 530 and the system board 510. Some exemplary dielectric materials include silicon dioxide, epoxy, glass and quartz. Alternatively, the coupling space between the substrate plate 582 and the board plate 584 that is between the substrate 530 and the system board 510 can be left open and the capacitive coupling can be across the resulting gap. One or more solder bumps may be omitted in the array of solder bumps on the substrate 530 to leave room for the capacitive coupling between the substrate plate 582 and the board plate 584. If the implementation includes the dielectric core 586, it can be placed in the coupling space between the substrate 530 and the system board 510 during ball mount or during board assembly of the package. The dielectric core 586 can include solderable areas for affixing the core 586 to the substrate 530 during the ball mount process, or to the substrate 530 and/or the system board 510 during board assembly of the package.
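A first-order feel for the capacitance available from a plate pair like capacitor 580 can be obtained from the parallel-plate formula C = eps0 * eps_r * A / d. In the sketch below the plate area is an assumed illustrative value, the 200 μm gap echoes the typical board-substrate distance of paragraph [0034], and the relative permittivity of 3.9 approximates silicon dioxide, one of the listed dielectrics.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_c(area_m2, gap_m, eps_r=1.0):
    """First-order parallel-plate estimate C = eps0 * eps_r * A / d,
    ignoring fringing fields."""
    return EPS0 * eps_r * area_m2 / gap_m

# Illustrative numbers (not from the disclosure): 0.5 mm x 0.5 mm plates
# across a ~200 um board-substrate gap, open or with an SiO2-like core
plate_area = 0.5e-3 * 0.5e-3
air_gap = parallel_plate_c(plate_area, 200e-6)
with_core = parallel_plate_c(plate_area, 200e-6, eps_r=3.9)
print(f"open gap:        {air_gap * 1e15:.2f} fF")
print(f"dielectric core: {with_core * 1e15:.2f} fF")
```

The estimate lands in the tens of femtofarads, which suggests why a dielectric core (multiplying the capacitance by eps_r) can be attractive when more coupling is needed across the fixed gap.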
[0048] The exemplary system 500 includes a die 550 which sends signals to and/or receives signals from an antenna 560. The die 550 includes a plurality of micro-bumps coupling the die 550 to the substrate 530 of which one micro-bump 540 is shown. The substrate 530 includes a plurality of solder bumps coupling the substrate 530 to the system board 510, of which two solder bumps 520 and 522 are shown. The substrate plate 582 is located in a substrate signal path 532 running through the substrate 530 and coupled to the micro-bump 540. The board plate 584 is located in a board signal path 512 running through the system board 510 and coupled to the antenna 560.

[0049] Thus, the signal path between the die 550 and the antenna 560 in the exemplary integrated circuit system 500 shown in Figure 5 includes the following couplings: the die 550 is coupled to the micro-bump 540 which is one of the micro-bumps coupling the die 550 to the substrate 530. The micro-bump 540 is coupled to the substrate signal path 532 which includes the substrate plate 582. The substrate plate 582 is capacitively coupled across the capacitor 580 to the board plate 584 which is in the board signal path 512 that is coupled to the antenna 560.

[0050] This embodiment integrates capacitive coupling between the die 550 and the antenna 560 into the package construction. Conventionally, implementing a capacitor in the system board is relatively inexpensive, implementing a capacitor in the packaging substrate is more expensive, and implementing a capacitor in the die is even more expensive. Thus, placing the capacitor between the system board and the packaging substrate uses existing board area, reduces or eliminates the need for additional or external elements, and reduces or eliminates the cost of placing extra elements in the die.

[0051] As described above with regard to the transformer, the solder balls 520 and 522 surrounding the capacitor 580 can be coupled to ground.
Coupling the solder balls surrounding the capacitor 580 to ground helps prevent the electromagnetic field in the capacitor 580 from interfering with other circuits in the system 500. In an actual implementation, the solder balls are distributed in a two-dimensional array between the substrate 530 and the system board 510. The solder balls on all sides of the capacitor 580 can be grounded to shield the rest of the package from the electromagnetic field. If additional shielding in the substrate 530 or the system board 510 is desired, a metal layer or metal mesh can be implemented in the substrate 530 and/or the system board 510 to surround the top and bottom of the capacitor 580. Even further shielding can be implemented by placing metal pillars in the substrate 530 and/or the system board 510 to block the electromagnetic field from the sides of the capacitor 580 above and below the solder balls. The metal pillars can be coupled to the solder balls and to a metal layer or mesh in the substrate 530 and system board 510 to form a shielding cage around the capacitor 580. The shielding cage can also be coupled to ground. The spacing between "bars" in this shielding cage can be a function of the expected wavelength of the electromagnetic energy in the capacitor. Such shielding should include space for the capacitor inputs and outputs to be coupled to the desired circuitry. Some exemplary shielding embodiments are explained in more detail below.

[0052] Figure 6 is a vertical cross-section of an exemplary integrated circuit system 600 that includes a system printed circuit board 610, a packaging substrate 630, a die 650 and a passive coupler 670 between the system board 610 and the packaging substrate 630. A plurality of micro-bumps 640 couple the die 650 to the packaging substrate 630. A plurality of solder balls 620 couple the packaging substrate 630 to the system board 610.

[0053] The passive coupler 670 could be a transformer or a capacitor.
The passive coupler 670 includes a board element 674 and a substrate element 672 that are electromagnetically coupled across a core material 676. The passive coupler 670 can be used in conjunction with a circuit disposed in the die 650, packaging substrate 630, system board 610 or any combination thereof.

[0054] Figure 7 is a horizontal cross-section between the system board 610 and the packaging substrate 630 of the exemplary integrated circuit system 600 shown in Figure 6. Figure 7 shows the plurality of solder balls 620 that couple the packaging substrate 630 to the system board 610 organized in a two-dimensional array. In this embodiment, the passive coupler is located off-center in the two-dimensional array of solder balls 620 and the core material 676 is located in place of one of the solder balls 620. The passive coupler 670 could be located at the center, on the edge, or anywhere in-between in the plurality of solder balls 620 coupling the packaging substrate 630 to the system board 610. The dashed line connecting the eight solder balls 620 adjacent to the core material 676 of the passive coupler 670 indicates that those eight solder balls 620 can be coupled to provide shielding of the surrounding circuitry in the integrated circuit system 600 from electromagnetic interference from the passive coupler 670. The adjacent solder balls can be connected to ground or allowed to maintain a floating potential.

[0055] Figure 8 is a vertical cross-section of a portion of an exemplary integrated circuit system 800 that includes a system printed circuit board 810, a packaging substrate 830, a die 850 and a passive coupler 870 between the system board 810 and the packaging substrate 830. This embodiment also includes a substrate shielding layer 834 in the packaging substrate 830 and a board shielding layer 814 in the system board 810. A plurality of micro-bumps 840 couple the die 850 to the packaging substrate 830.
A plurality of solder balls 820 couple the packaging substrate 830 to the system board 810. For clarity, Figure 8 only shows the portion of the system 800 adjacent to the passive coupler 870.

[0056] The passive coupler 870 could be a transformer or a capacitor. The passive coupler 870 includes a board element 874 and a substrate element 872 that are electromagnetically coupled across a core material 876. The passive coupler 870 can be used in conjunction with a circuit disposed in the die 850, the packaging substrate 830, the system board 810 or any combination thereof. The board shielding layer 814 and substrate shielding layer 834 help prevent electromagnetic interference from the passive coupler 870 from interfering with other circuitry in the system 800. The board shielding layer 814 can be connected to ground or allowed to maintain a floating potential. The substrate shielding layer 834 can also be connected to ground or allowed to maintain a floating potential. Either or both of the board shielding layer 814 and the substrate shielding layer 834 can be connected to one or more of the solder balls 820 surrounding the passive coupler 870.

[0057] Figure 9 is a vertical cross-section of a perspective view of the system board 810 and the packaging substrate 830 of the exemplary system 800. In Figure 9, the system board 810 and the packaging substrate 830 are shown as partially transparent to show the substrate shielding layer 834 extending through the packaging substrate 830, the board shielding layer 814 extending through the system board 810, and the passive coupler 870 extending therebetween. In this embodiment, the substrate shielding layer 834 includes four shielding conductors spaced apart in the package substrate 830 above the substrate element 872 of the passive coupler 870; and the board shielding layer 814 includes four shielding conductors spaced apart in the system board 810 below the board element 874 of the passive coupler 870.
Figure 9 also shows a board conductor 816 coupling the four shielding conductors of the board shielding layer 814 and the two solder balls 820 shown adjacent to the passive coupler 870; and a substrate conductor 836 coupling the four shielding conductors of the substrate shielding layer 834 and the two solder balls 820 shown adjacent to the passive coupler 870. The board shielding layer 814, board conductor 816, substrate shielding layer 834, substrate conductor 836 and the solder balls 820 adjacent to the passive coupler 870 can be used to form a shielding cage around the passive coupler 870 to help prevent electromagnetic interference from the passive coupler 870 from interfering with other circuitry in the system 800. The shielding cage may be coupled to a source such as ground, or may be left at a floating potential.

[0058] Figure 10 is a perspective view of the board shielding layer 814, the substrate shielding layer 834 and the solder balls 820 surrounding the passive coupler 870 of the exemplary system 800. In Figure 10, the system board 810 and the package substrate 830 are represented only by dotted lines for clarity. The substrate shielding layer 834 in the package substrate 830 is interconnected by the substrate conductor 836, and pillars of the substrate conductor 836 connect each of the solder balls 820 surrounding the passive coupler 870 to the substrate shielding layer 834. The board shielding layer 814 in the system board 810 is interconnected by the board conductor 816, and pillars of the board conductor 816 connect each of the solder balls 820 surrounding the passive coupler 870 to the board shielding layer 814. The coupling of the board shielding layer 814, board conductor 816, substrate shielding layer 834, substrate conductor 836 and the solder balls 820 surrounding the passive coupler 870 form a shielding cage around the passive coupler 870. The shielding cage may be coupled to a source such as ground, or may be left at a floating potential.
The shielding cage surrounding the passive coupler 870 includes space for connecting to the substrate element 872 and the board element 874 of the passive coupler 870 so that the passive coupler 870 can be included in circuitry of the system 800.

[0059] Figure 11 shows an exemplary wireless communication system 1100 in which electromagnetic coupling between the package substrate and the system board may be advantageously employed. For purposes of illustration, Figure 11 shows three remote units 1120, 1130, and 1150 and two base stations 1140. It should be recognized that typical wireless communication systems may have many more remote units and base stations. Any of remote units 1120, 1130, and 1150 may include electromagnetic coupling between the package substrate and the system board as disclosed herein. Figure 11 shows forward link signals 1180 from the base stations 1140 to the remote units 1120, 1130, and 1150 and reverse link signals 1190 from the remote units 1120, 1130, and 1150 to base stations 1140.

[0060] In Figure 11, remote unit 1120 is shown as a mobile telephone, remote unit 1130 is shown as a portable computer, and remote unit 1150 is shown as a fixed location remote unit in a wireless local loop system. For example, the remote units may be cell phones, hand-held personal communication systems (PCS) units, portable data units such as personal data assistants, or fixed location data units such as meter reading equipment. Although Figure 11 illustrates certain exemplary remote units that may include components having electromagnetic coupling between the package substrate and the system board as disclosed herein, the use of electromagnetic coupling between the package substrate and the system board is not limited to these exemplary illustrated units. Embodiments may be suitably employed in any electronic device in which electromagnetic coupling between the package substrate and the system board as disclosed herein is desired.
[0061] While exemplary embodiments incorporating the principles of the present invention have been disclosed hereinabove, the present invention is not limited to the disclosed embodiments. Instead, this application is intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains and which fall within the limits of the appended claims.
One embodiment of a system for efficiently cooling a processor includes an active hybrid heat transport module (304) adapted to be integrated with a fansink (302). The hybrid heat transport module (304) comprises both a fluid channel (312) and an air channel (310) adapted for transporting heat. The hybrid heat transport module (304) and the fansink (302) may be used alone or in combination to dissipate heat from the processor.
WHAT IS CLAIMED IS:

1. A system for cooling a processor, the system comprising a hybrid module configured to be thermally coupled to the processor and to a fansink, the hybrid module comprising: an air channel adapted for removing heat from the processor; and a fluid channel adapted for further removing heat from the processor.

2. The system of claim 1, wherein the fluid channel is a closed loop channel.

3. The system of claim 1, wherein the hybrid module is coupled to a pump adapted for circulating the heat transfer fluid through the fluid channel.

4. The system of claim 3, wherein the heat transfer fluid in the fluid channel transports heat from the processor to a heat exchanger.

5. The system of claim 1, wherein a bottom plate of the fluid channel is textured.

6. The system of claim 5, wherein the texture of the bottom plate comprises a plurality of pins extending upward into the fluid channel.

7. The system of claim 1, wherein the hybrid module is adapted for dissipating heat from the processor through air, through a fluid, or through both air and fluid.

8. The system of claim 1, further comprising a thermal adhesive disposed on a bottom plate of the hybrid module for thermally coupling the hybrid module to the processor.

9. The system of claim 1, wherein the fansink comprises: a fan; and an air channel, wherein the fansink is configured to be thermally coupled to the processor.

10. The system of claim 9, wherein the fansink is configured to force air through the air channel.

11. The system of claim 9, wherein the fansink and the hybrid module are adapted for simultaneous operation.

12. The system of claim 9, wherein the fansink and the hybrid module are adapted for independent operation.

13. The system of claim 1, wherein the processor comprises a graphics processing unit.

14. The system of claim 1, wherein the processor comprises a central processing unit.

15. The system of claim 1, wherein the processor comprises an application-specific integrated circuit.

16. The system of claim 1, wherein the system is sized to cool a memory chip in addition to the processor.

17. A method for cooling a processor, the method comprising the steps of: continually cooling the processor using forced air to remove heat from the processor; monitoring a temperature of the processor; and circulating a heat transfer fluid in a fluid channel to further remove heat from the processor when the processor reaches a threshold temperature.

18. The method of claim 17, further comprising the step of ceasing to circulate the heat transfer fluid when the processor is cooled to a desired temperature.

19. The method of claim 17, wherein the heat transfer fluid is circulated by turning on a pump.

20. The method of claim 18, further comprising the step of transporting the heat transfer fluid through a heat exchanger.
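The method of claims 17 and 18 amounts to a hysteresis control loop around the pump: forced air runs continuously, while fluid circulation is switched on when the processor reaches a threshold temperature and off again once it cools to a desired temperature. The sketch below illustrates that logic only; the specific temperature values and the class interface are hypothetical, not taken from the claims.

```python
from dataclasses import dataclass

@dataclass
class HybridCoolerController:
    """Hysteresis controller sketching the method of claims 17-18.
    Forced air is assumed to run continuously; only the fluid-loop
    pump is controlled here.  Threshold values are illustrative."""
    threshold_c: float = 85.0   # claim 17's "threshold temperature"
    resume_c: float = 70.0      # claim 18's "desired temperature"
    pump_on: bool = False

    def step(self, temp_c: float) -> bool:
        """Process one temperature reading and return the pump state."""
        if not self.pump_on and temp_c >= self.threshold_c:
            self.pump_on = True          # begin circulating fluid (claim 17)
        elif self.pump_on and temp_c <= self.resume_c:
            self.pump_on = False         # cease circulation (claim 18)
        return self.pump_on

ctl = HybridCoolerController()
readings = [60, 80, 86, 90, 75, 69, 65]
states = [ctl.step(t) for t in readings]
print(states)  # pump engages at the 86 C reading, disengages at 69 C
```

The gap between the two setpoints prevents the pump from rapidly cycling on and off when the temperature hovers near the threshold, which also matches claim 19's on/off pump actuation.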
SYSTEM FOR EFFICIENTLY COOLING A PROCESSOR

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0001] This invention relates generally to computer hardware and more particularly to a system for efficiently cooling a processor.

2. Description of the Background Art

[0002] FIG. 1 is an isometric view illustrating a prior art system 100 used to cool a processor (not shown). As shown, system 100 characteristically includes a heat sink assembly 104, which further includes a fan 106, walls 109 and a bottom plate 111. Typically, system 100 is thermally coupled to a processor, for example using thermal adhesive having thermal properties that facilitate transferring heat generated by the processor to bottom plate 111 of heat sink assembly 104. System 100 may also include a heat sink lid 102 (not shown), which, among other things, prevents particles and other contaminants from entering fan 106 and air blown from fan 106 from escaping system 100. Heat sink lid 102, together with walls 109 and bottom plate 111 of heat sink assembly 104, define a plurality of air channels 108.

[0003] Fan 106 is configured to force air through air channels 108 such that the heat generated by the processor transfers to the air as the air passes over bottom plate 111. The heated air then exits heat sink assembly 104, as depicted by flow lines 114, thereby dissipating the heat generated by the processor into the external environment. This process cools the processor and, among other things, prevents the processor from burning up during operation. Persons skilled in the art will understand that air channels 108 typically are configured to direct air blown from fan 106, over bottom plate 111, to the external environment in a manner that most efficiently removes heat from the processor.

[0004] One drawback of using system 100 to cool a processor is that a sound wave produced when fan 106 forces air through an air channel 108 oftentimes establishes a standing wave within air channel 108.
As persons skilled in the art will understand, this phenomenon substantially increases the noise level of the airflow through air channel 108 because the resulting standing wave produced by the interference between an incident sound wave and a reflected sound wave has an amplitude at the antinodes that is substantially greater than the amplitude of the incident sound wave. The increased noise is particularly annoying to persons who use computers and other electronic devices that include a system similar to system 100.

[0005] One method for reducing airflow noise while cooling a processor is to implement a fluid-based cooling system, in which heat generated by the processor transfers to a heat transfer fluid (such as water) being quickly circulated close to the processor. However, typical fluid cooling systems are driven by large pumps, which are prone to frequent failure and tend to consume a great deal of power. Moreover, such systems tend to use large quantities of fluid, circulating at a high flow rate, and therefore must be frequently replenished or replaced.

[0006] Thus, there is a need in the art for a system for efficiently cooling a processor.

SUMMARY OF THE INVENTION

[0007] One embodiment of a system for efficiently cooling a processor includes an active hybrid heat transport module adapted to be integrated with a fansink. The hybrid heat transport module comprises both a fluid channel and an air channel adapted for transporting heat.
The hybrid heat transport module and the fansink may be used alone or in combination to dissipate heat from the processor.[0008] One advantage of the disclosed system is that, among other things, the system produces less airflow noise during operation.[0009] A second advantage of the disclosed system is that it is more reliable than conventional fluid cooling systems.[0010] A third advantage of the disclosed system is that it dissipates heat more effectively and more efficiently than conventional fan- or fluid-based cooling systems.BRIEF DESCRIPTION OF THE DRAWINGS[0011] FIG. 1 is an isometric view illustrating a prior art system used to cool a processor.[0012] FIG. 2 is a schematic diagram illustrating a computing device adapted for use with a system for cooling a processor, according to one embodiment of the present invention.[0013] FIG. 3 is an isometric view illustrating an improved system for cooling a processor, according to one embodiment of the present invention.[0014] FIG. 4 is an exploded view of a portion of the cooling system illustrated in FIG. 3;[0015] FIG. 5 is a cross sectional view of a portion of the cooling system illustrated in FIG. 3; and[0016] FIG. 6 is a flow diagram illustrating a method for controlling the cooling system illustrated in FIG. 3, according to one embodiment of the invention.DETAILED DESCRIPTION OF THE INVENTION[0017] FIG. 2 is a schematic diagram illustrating a computing device 200 adapted for use with a system 218 for cooling a processor, according to one embodiment of the present invention. Computing device 200 may be any type of computing device, including, without limitation, a desktop computer, a server, a laptop computer, a palm-sized computer, a personal digital assistant (PDA), a tablet computer, a gaming console, a cellular telephone, a computer-based simulator and the like.[0018] As shown, computing device 200 includes a housing 201, within which a motherboard 204 resides.
Mounted on motherboard 204 are a central processing unit (CPU) 206, a processor cooler 208 for cooling CPU 206, a system fan 210 for removing heat from computing device 200, and one or more peripheral component interface (PCI) cards 212, each interfaced with a slot located in the back part of housing 201. Motherboard 204 further incorporates a graphics card 202 that enables computing device 200 to rapidly process graphics related data for graphics-intensive applications, such as gaming applications. Graphics card 202 comprises a printed circuit board (PCB) upon which a plurality of circuit components (not shown), such as memory chips and the like, are mounted. In addition, graphics card 202 includes a graphics processing unit (GPU) 216, mounted to one face of graphics card 202, for processing graphics related data. Generally, cooling system 218 is configured for coupling to GPU 216 in lieu of a conventional cooling system, such as cooling system 100 of FIG. 1.[0019] FIG. 3 is an isometric view illustrating an improved system 300 for cooling a processor, according to one embodiment of the present invention. Similar to system 218 of FIG. 2, cooling system 300 may be adapted for use in any type of appropriate computing device. As shown, cooling system 300 may include, without limitation, a fansink 302 and a hybrid heat transport module 304. As described in further detail below, fansink 302 and hybrid heat transport module 304 may operate independently or in combination to dissipate heat from a processor.[0020] In one embodiment, fansink 302 is configured in a manner similar to cooling system 100 of FIG. 1 and includes, without limitation, a fan 308, walls 306 and a bottom plate 318. In one embodiment, system 300 also includes a heat sink lid 320, which, among other things, prevents particles and other contaminants from entering fan 308 and air blown from fan 308 from escaping system 300.
Heat sink lid 320, together with walls 306 and bottom plate 318 of fansink 302, define a plurality of air channels 322.[0021] Hybrid heat transport module 304 is adapted to be integrated with fansink 302. In one embodiment, hybrid heat transport module 304 is thermally coupled to a portion of bottom plate 318 and includes, without limitation, a fluid channel 312, an inlet 314, an outlet 316 and a plurality of air channels 310. Hybrid heat transport module 304 is coupled to a pump, which is adapted for circulating a heat transfer fluid (e.g., water or any other suitable heat conducting fluid) through a closed loop, including fluid channel 312. In one embodiment, the pump circulates fluid from hybrid heat transport module 304 through a heat exchanger prior to supplying the fluid back to hybrid heat transport module 304. Inlet 314 and outlet 316 are configured for respectively supplying and removing the heat transfer fluid to fluid channel 312.[0022] In one embodiment, air channels 310 are adapted for coupling to air channels 322 and for transporting forced air from fan 308. In one embodiment, air channels 310 are positioned over and around fluid channel 312, so that fluid channel 312 is substantially enclosed within air channels 310. In an alternative embodiment, fluid channel 312 and air channels 310 may be positioned in any relative orientation that provides good heat dissipation. Those skilled in the art will recognize that hybrid heat transport module 304 may be implemented to transfer heat via air channels 310, fluid channel 312, or both in combination.[0023] In one embodiment, fansink 302 dissipates heat in a manner similar to system 100 illustrated in FIG. 1. Fan 308 is configured to force air through air channels 322 and air channels 310 such that the heat generated by the processor transfers to the air as the air passes over bottom plate 318.
The heated air then exits system 300, as depicted by flow lines 324, thereby dissipating the heat generated by the processor into the external environment.[0024] In one embodiment, the pump circulates the heat transfer fluid through fluid channel 312 of hybrid heat transport module 304, and heat generated by the processor transfers to the circulating heat transfer fluid as well as to air in air channels 310. Fluid channel 312 is adapted for transporting heat transfer fluid through a downstream heat exchanger, which dissipates heat from the heat transfer fluid into an outside environment.[0025] Persons skilled in the art will recognize that system 300, including fansink 302 and hybrid heat transport module 304, may be used to cool any type of processor. For example, in one embodiment, the processor comprises a graphics processing unit. In an alternative embodiment, the processor may comprise a central processing unit. In yet another alternative embodiment, the processor may comprise an application-specific integrated circuit (ASIC). In another embodiment, system 300 may be sized to cool a memory chip in addition to the processor.[0026] FIG. 4 is an exploded view of a portion of cooling system 300. In one embodiment, bottom plate 318 includes a trench 402 sized for coupling to and sealing fluid channels 312. In one embodiment, the surface of trench 402 is textured to increase the heat transfer surface area of bottom plate 318, as described in further detail below, and to transfer heat from bottom plate 318 to the heat transfer fluid flowing through fluid channel 312. For example, trench 402 may further include a plurality of pins 404 extending upward from bottom plate 318. The density and geometric shape of pins 404 may vary, so long as pins 404 are capable of effectively transferring heat from bottom plate 318 to the heat transfer fluid flowing around pins 404.[0027] FIG. 
5 is a cross sectional view of hybrid heat transport module 304, taken along sectional line 3-3' of FIG. 3. As illustrated, hybrid heat transport module 304 is configured to dissipate heat from a processor via fluid channel 312 and/or air channels 310. As described above, air channels 310 may be configured to interface to air channels 322 of fansink 302, so that even when the pump is not activated to circulate fluid through fluid channel 312, air channels 310 will operate to increase the heat transfer surface area of system 300 (e.g., by effectively extending air channels 322), thereby enabling heat to be dissipated more efficiently.[0028] Fansink 302 and hybrid heat transport module 304 may be implemented independently or in combination, in order to dissipate heat from the processor in the most efficient manner. For example, fansink 302 may be implemented to dissipate a majority of the generated heat, hybrid heat transport module 304 may be implemented to dissipate a smaller quantity of heat, and the proportions of heat dissipated by fansink 302 and hybrid heat transport module 304 may be dynamically adjusted. Alternatively, one of fansink 302 and hybrid heat transport module 304 may be implemented as a primary means for heat dissipation, while the other mechanism is implemented on an as-needed basis to dissipate excess heat.[0029] FIG. 6 is a flow diagram illustrating a method 600 for controlling cooling system 300, for example for implementation by a control unit coupled to cooling system 300, according to one embodiment of the invention. In the illustrated embodiment, the method 600 implements fansink 302 as a primary means for heat dissipation, while hybrid heat transport module 304 is implemented on an as-needed basis.
Method 600 is initialized at step 602 and proceeds to step 604, where method 600 monitors the temperature of the processor, for example by means of a thermal diode or other sensor positioned proximate to the processor. Method 600 then proceeds to step 606 and determines whether the temperature of the processor has reached a predetermined threshold temperature at which a secondary heat dissipation mechanism (e.g., hybrid heat transport module 304) should be implemented.[0030] If method 600 determines at step 606 that the processor temperature has not reached the threshold temperature, method 600 returns to step 604 and continues to monitor the processor temperature. Alternatively, if method 600 determines at step 606 that the threshold temperature has been reached or exceeded, method 600 proceeds to step 608 and turns on the pump of hybrid heat transport module 304, in order to engage the secondary heat dissipation mechanism. Method 600 then determines at step 610 whether the implementation of hybrid heat transport module 304 has cooled the processor to a predetermined desired temperature (e.g., an ideal operating temperature).[0031] If method 600 determines at step 610 that the processor has been cooled to the desired temperature, method 600 proceeds to step 612 and turns off the pump of hybrid heat transport module 304, effectively shutting off hybrid heat transport module 304 so that the processor continues to be cooled by the primary heat dissipation mechanism (e.g., fansink 302). Method 600 then returns to step 604 and continues to monitor the temperature of the processor. 
Alternatively, if method 600 determines at step 610 that the processor has not yet been cooled to the desired temperature, method 600 returns to step 608 and continues to run the pump of hybrid heat transport module 304 until the processor is cooled to the desired temperature.[0032] Cooling system 300 offers several advantages over conventional cooling systems, such as cooling system 100 of FIG. 1. First, using fansink 302 in conjunction with hybrid heat transport module 304 results in a more reliable cooling system, because the pump of hybrid heat transport module 304 may be implemented on a limited or as-needed basis. The life of the pump is thereby extended, because the pump is not constantly operating at maximum power. For example, in one embodiment, the life of a typical pump may be extended by approximately fifty percent. Alternatively, cooling system 300 may incorporate a pump that is significantly smaller than a pump typically incorporated in a fluid-based cooling system. Moreover, in the event of failure, fansink 302 may operate as a backup to hybrid heat transport module 304, and vice versa.[0033] Also, because hybrid heat transport module 304 may be implemented on a limited or as-needed basis (e.g., as opposed to being a primary heat dissipation means), the amount of heat transfer fluid and the flow rate of the fluid through fluid channel 312 may be reduced compared to a conventional fluid-based cooling system. Thus, cooling system 300 requires less maintenance (e.g., frequent replenishment of fluid reservoirs) than conventional fluid-based cooling systems, and the pump consumes less power.[0034] In addition, because cooling system 300 relies less on fansink 302 to dissipate heat (e.g., when hybrid heat transport module 304 is implemented either alone or in conjunction with fansink 302), an amplitude at the antinodes of interfering sound waves established within air channel 322 is smaller.
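The threshold-and-release behavior of method 600 amounts to a simple hysteresis loop, which can be sketched in software. The numeric thresholds, function names, and polling structure below are illustrative assumptions, not values from the specification:

```python
# Illustrative sketch of the control loop of FIG. 6 (steps 604-612).
# THRESHOLD_C and DESIRED_C are assumed values; the patent gives none.
THRESHOLD_C = 85.0  # engage the hybrid module's pump at or above this
DESIRED_C = 65.0    # disengage the pump once cooled to this

def control_loop(read_temp, set_pump, max_iters):
    """Fansink 302 runs continuously (primary mechanism); the pump of
    hybrid heat transport module 304 is engaged only on an as-needed basis."""
    pump_on = False
    for _ in range(max_iters):
        t = read_temp()                       # step 604: monitor temperature
        if not pump_on and t >= THRESHOLD_C:  # step 606: threshold reached?
            set_pump(True)                    # step 608: turn the pump on
            pump_on = True
        elif pump_on and t <= DESIRED_C:      # step 610: cooled to target?
            set_pump(False)                   # step 612: turn the pump off
            pump_on = False
    return pump_on
```

The gap between the engage threshold and the desired temperature provides hysteresis, so the pump is not rapidly cycled on and off around a single set point.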
Thus, the noise level of the airflow through air channel 322 may be substantially decreased.[0035] Moreover, using hybrid heat transport module 304 in conjunction with fansink 302 increases the heat flow rate, dQ/dT, of cooling system 300, which enables cooling system 300 to transfer heat away from the processor more efficiently than conventional cooling systems. One reason for this increase is that the heat transfer area, A, of cooling system 300 can be substantially larger than that of conventional cooling systems, owing to the incorporation of air channels 310 and pins 404. Even if hybrid heat transport module 304 is not active (e.g., the pump is not activated), the configuration of hybrid heat transport module 304 will increase the heat transfer surface area over which air forced by fan 308 travels, as the forced air will travel through both channels 322 and channels 310.[0036] Heat flow rate (dQ/dT) is calculated according to the following equation: (dQ/dT) = hA(T_sink - T_air) (EQN. 1), where h is the heat transfer coefficient of cooling system 300, T_sink is the temperature of the heat exchanging elements (e.g., air channels 322, air channels 310 and pins 404) and T_air is the temperature of the air flowing through the heat exchanging elements. As discussed above, since A is much larger for cooling system 300 than for a conventional cooling system (and ΔT is approximately the same), the heat flow rate (dQ/dT) is substantially increased when using cooling system 300.[0037] The increased heat flow rate (dQ/dT) further results in cooling system 300 having an improved heat transfer efficiency, θ_sa, relative to conventional cooling systems. As persons skilled in the art will recognize, heat transfer efficiency, θ_sa, may be calculated according to the following equation: θ_sa = (T_sink - T_air)/(dQ/dT) (°C/watt) (EQN. 2), where a smaller value for θ_sa indicates increased efficiency and therefore is more desirable.
Again, the larger heat transfer area, A, causes cooling system 300 to have a greater heat flow rate (dQ/dT) and, consequently, an improved efficiency as well (as evidenced by the smaller value of θ_sa).[0038] Simulations comparing improved cooling system 300 with a conventional cooling system show that improved cooling system 300 can cool a processor to temperatures that are upwards of twenty-two percent lower than temperatures achieved with the conventional cooling system, without substantially increasing power consumption.[0039] The location of cooling system 300, fansink 302 and hybrid heat transport module 304, as well as the size and shape of the components, may be dictated by other board mounted components, as well as by accelerated graphics processor (AGP)-specified envelope constraints. Moreover, those skilled in the art will appreciate that the cooling system described herein may be implemented in both ATX motherboard configurations (wherein a graphics card is oriented so that the GPU faces downward relative to the computing device, as illustrated in FIG. 2) and BTX configurations (wherein a graphics card is oriented so that the GPU faces upward relative to the computing device). Therefore, the cooling system of the present invention may be implemented as a single-slot cooling solution, e.g., wherein the size of the cooling system does not require space on the motherboard that may be allocated to other components, such as PCI cards.[0040] Thus, the present invention represents a significant advancement in the field of processor cooling. By implementing a hybrid heat transport module in conjunction with a fansink, a system used to cool a processor will produce less airflow noise in operation than systems that incorporate conventional heat sink lids and will cool a processor more effectively and efficiently.
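The effect of a larger heat transfer area on EQN. 1 and EQN. 2 can be checked numerically. The heat transfer coefficient, areas, and temperatures below are arbitrary illustrative values, not figures from the specification:

```python
def heat_flow_rate(h, area, t_sink, t_air):
    """EQN. 1: dQ/dT = h * A * (T_sink - T_air), in watts."""
    return h * area * (t_sink - t_air)

def transfer_efficiency(t_sink, t_air, dq_dt):
    """EQN. 2: theta_sa = (T_sink - T_air) / (dQ/dT), in degrees C
    per watt; a smaller value indicates greater efficiency."""
    return (t_sink - t_air) / dq_dt

# Doubling A (e.g., by adding air channels 310 and pins 404) doubles
# dQ/dT and halves theta_sa for the same temperature difference.
q_small = heat_flow_rate(h=50.0, area=0.01, t_sink=70.0, t_air=30.0)  # 20.0 W
q_large = heat_flow_rate(h=50.0, area=0.02, t_sink=70.0, t_air=30.0)  # 40.0 W
```

This mirrors the argument in the text: with h and ΔT held fixed, the heat flow rate scales linearly with A, and θ_sa falls in proportion.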
Moreover, by implementing the hybrid heat transport module on a limited basis, the life of a pump used to drive a portion of the hybrid heat transport module can be significantly extended.[0041] Although the invention has been described above with reference to specific embodiments, persons skilled in the art will understand that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
An apparatus and method are described for performing big integer arithmetic operations. For example, one embodiment of a processor comprises: a first source register to store a first 256-bit integer operand; a second source register to store a second 256-bit integer operand; and multiplication logic comprising a set of multipliers and adders to perform a multiplication of the first and second 256-bit integer operands to generate a 512-bit result responsive to a 256-bit multiplication instruction, the multiplication logic to convert a radix representation of the first and second 256-bit integer operands from a first radix representation to a second radix representation selected based on a size of the multipliers and adders used to perform the multiplication and generate a result, and then to convert the result back to the first radix representation.
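The radix conversion summarized above — re-expressing four radix-2^64 digits as five radix-2^52 digits so that 52 x 52-bit multipliers can accumulate partial products without immediate carry propagation — can be sketched in software. The helper names and the schoolbook accumulation below are our illustration of the idea, not the patent's hardware design:

```python
# Sketch of the radix-conversion idea: a 256-bit operand held as four
# radix-2^64 digits is re-expressed as five radix-2^52 digits
# (5 * 52 = 260 bits >= 256), multiplied digit by digit, and the
# result is converted back when the digit columns are recombined.

MASK52 = (1 << 52) - 1

def to_radix52(x):
    """Split a 256-bit integer into five 52-bit digits (little-endian)."""
    return [(x >> (52 * i)) & MASK52 for i in range(5)]

def from_radix(digits, bits):
    """Recombine little-endian digits of the given radix width."""
    return sum(d << (bits * i) for i, d in enumerate(digits))

def mul256(a, b):
    """Schoolbook multiply in radix 2^52, then convert the result back."""
    da, db = to_radix52(a), to_radix52(b)
    acc = [0] * 10                   # one column per digit position of the result
    for i, ai in enumerate(da):
        for j, bj in enumerate(db):
            acc[i + j] += ai * bj    # 52 x 52 -> 104-bit products, summed per column
    return from_radix(acc, 52)       # carries between columns resolve here
```

Because each 52 x 52 product fits in 104 bits, several partial products can be added into a column before any carry must propagate, which is the advantage the redundant radix-2^52 representation buys in hardware.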
CLAIMSWhat is claimed is:1. A processor comprising:a first source register to store a first 256-bit integer operand;a second source register to store a second 256-bit integer operand; and multiplication logic comprising a set of multipliers and adders to perform a multiplication of the first and second 256-bit integer operands to generate a 512-bit result responsive to a 256-bit multiplication instruction, the multiplication logic to convert a radix representation of the first and second 256-bit integer operands from a first radix representation to a second radix representation selected based on a size of the multipliers and adders used to perform the multiplication and generate a result, and then to convert the result back to the first radix representation.2. The processor as in claim 1 wherein the first radix representation of each of the 256-bit integer operands comprises four digits represented in radix 2^64.3. The processor as in claim 2 wherein the second radix representation comprises five digits represented in radix 2^52.4. The processor as in claim 3 wherein each of the multipliers comprises a 52 x 52 multiplier.5. The processor as in claim 4 wherein each of the multipliers is to multiply one of the five digits from the first source operand by one of the five digits of the second source operand.6.
The processor as in claim 5 wherein for digits A0, A1, A2, A3, and A4 from the first source operand and for digits B0, B1, B2, B3, and B4 of the second source operand:a first multiplier is to multiply A1 and B2 to generate the product A1B2;a second multiplier is to multiply A0 and B3 to generate the product A0B3;a third multiplier is to multiply A1 and B1 to generate the product A1B1;a fourth multiplier is to multiply A0 and B2 to generate the product A0B2;a fifth multiplier is to multiply A1 and B0 to generate the product A1B0;a sixth multiplier is to multiply A0 and B1 to generate the product A0B1; and a seventh multiplier is to multiply A0 and B0 to generate the product A0B0.7. The processor as in claim 6 wherein each of the adders is to add at least two of the results output by the multipliers.8. The processor as in claim 7 further comprising:a first adder to determine a first sum of A1B2 and A0B3;a second adder to determine a second sum of A1B1 and A1B2;a third adder to determine a third sum of A1B0 and A1B1; anda fourth adder to determine a fourth sum of A0B0 and zero.9. The processor as in claim 8 wherein each of the four sums is output to each of four different 128-bit lanes.10. The processor as in claim 9 wherein the four sums in each of the 128-bit lanes are summed and transformed to a radix 2^64 representation.11. The processor as in claim 1 wherein the multiplication logic comprises decode logic to decode a 256-bit multiplication instruction into a plurality of microoperations, the microoperations to perform a plurality of multiplication and sum operations using the second radix representation to generate the 512-bit result.12. The processor as in claim 1 wherein the first and second source registers comprise 512-bit vector registers and wherein the first and second 256-bit integer operands are to be stored in an upper or lower region of the 512-bit vector registers.13.
The processor as in claim 12 wherein an immediate value of the 256-bit multiplication instruction indicates whether the first and second 256-bit integer operands are stored in the upper or lower halves of the first and second 512-bit vector registers, respectively.14. A method comprising:storing a first 256-bit integer operand in a first source register;storing a second 256-bit integer operand in a second source register; and performing a multiplication of the first and second 256-bit integer operands using a set of multipliers and adders by converting a radix representation of the first and second 256-bit integer operands from a first radix representation to a second radix representation selected based on a size of the multipliers and adders used to perform the multiplication and generate a result, and then converting the result back to the first radix representation.15. The method as in claim 14 wherein the first radix representation of each of the 256-bit integer operands comprises four digits represented in radix 2^64.16. The method as in claim 15 wherein the second radix representation comprises five digits represented in radix 2^52.17. The method as in claim 16 wherein each of the multipliers comprises a 52 x 52 multiplier.18. The method as in claim 17 wherein each of the multipliers is to multiply one of the five digits from the first source operand by one of the five digits of the second source operand.19.
The method as in claim 18 wherein for digits A0, A1, A2, A3, and A4 from the first source operand and for digits B0, B1, B2, B3, and B4 of the second source operand the method further comprising:multiplying A1 and B2 to generate the product A1B2;multiplying A0 and B3 to generate the product A0B3;multiplying A1 and B1 to generate the product A1B1;multiplying A0 and B2 to generate the product A0B2;multiplying A1 and B0 to generate the product A1B0;multiplying A0 and B1 to generate the product A0B1; andmultiplying A0 and B0 to generate the product A0B0.20. The method as in claim 19 wherein each of the adders is to add at least two of the results output by the multipliers.21. The method as in claim 20 further comprising: determining a first sum of A1B2 and A0B3;determining a second sum of A1B1 and A1B2;determining a third sum of A1B0 and A1B1; anddetermining a fourth sum of A0B0 and zero.22. The method as in claim 21 wherein each of the four sums is output to each of four different 128-bit lanes.23. The method as in claim 22 wherein the four sums in each of the 128-bit lanes are summed and transformed to a radix 2^64 representation.24. The method as in claim 14 wherein the multiplication logic comprises decode logic to decode a 256-bit multiplication instruction into a plurality of microoperations, the microoperations to perform a plurality of multiplication and sum operations using the second radix representation to generate the 512-bit result.25. The method as in claim 14 wherein the first and second source registers comprise 512-bit vector registers and wherein the first and second 256-bit integer operands are to be stored in an upper or lower region of the 512-bit vector registers.
METHOD AND APPARATUS FOR PERFORMING BIG-INTEGER ARITHMETIC OPERATIONSBACKGROUNDField of the Invention[0001] This invention relates generally to the field of computer processors. More particularly, the invention relates to a method and apparatus for performing big integer arithmetic operations.Description of the Related Art[0002] An instruction set, or instruction set architecture (ISA), is the part of the computer architecture related to programming, including the native data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O). It should be noted that the term "instruction" generally refers herein to macro-instructions - that is, instructions that are provided to the processor for execution - as opposed to micro-instructions or micro-ops - that is, the result of a processor's decoder decoding macro-instructions. The micro-instructions or micro-ops can be configured to instruct an execution unit on the processor to perform operations to implement the logic associated with the macro-instruction.[0003] The ISA is distinguished from the microarchitecture, which is the set of processor design techniques used to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, Intel® Pentium 4 processors, Intel® Core™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale CA implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs.
For example, the same register architecture of the ISA may be implemented in different ways in different microarchitectures using well-known techniques, including dedicated physical registers, one or more dynamically allocated physical registers using a register renaming mechanism (e.g., the use of a Register Alias Table (RAT), a Reorder Buffer (ROB) and a retirement register file). Unless otherwise specified, the phrases register architecture, register file, and register are used herein to refer to that which is visible to the software/programmer and the manner in which instructions specify registers. Where a distinction is required, the adjective "logical," "architectural," or "software visible" will be used to indicate registers/files in the register architecture, while different adjectives will be used to designate registers in a given microarchitecture (e.g., physical register, reorder buffer, retirement register, register pool).[0004] An instruction set includes one or more instruction formats. A given instruction format defines various fields (number of bits, location of bits) to specify, among other things, the operation to be performed and the operand(s) on which that operation is to be performed. Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. A given instruction is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and specifies the operation and the operands.
An instruction stream is a specific sequence of instructions, where each instruction in the sequence is an occurrence of an instruction in an instruction format (and, if defined, a given one of the instruction templates of that instruction format).BRIEF DESCRIPTION OF THE DRAWINGS[0005] A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:[0006] FIGS. 1A and 1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention;[0007] FIGS. 2A-D are block diagrams illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention;[0008] FIG. 3 is a block diagram of a register architecture according to one embodiment of the invention; and[0009] FIG. 4A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;[0010] FIG. 4B is a block diagram illustrating both an exemplary embodiment of an in-order fetch, decode, retire core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention; [0011] FIG. 5A is a block diagram of a single processor core, along with its connection to an on-die interconnect network;[0012] FIG. 5B illustrates an expanded view of part of the processor core in FIG. 5A according to embodiments of the invention;[0013] FIG. 6 is a block diagram of a single core processor and a multicore processor with integrated memory controller and graphics according to embodiments of the invention;[0014] FIG. 7 illustrates a block diagram of a system in accordance with one embodiment of the present invention;[0015] FIG.
8 illustrates a block diagram of a second system in accordance with an embodiment of the present invention;[0016] FIG. 9 illustrates a block diagram of a third system in accordance with an embodiment of the present invention;[0017] FIG. 10 illustrates a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present invention;[0018] FIG. 11 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention;[0019] FIG. 12 illustrates an exemplary processor on which embodiments of the invention may be implemented;[0020] FIG. 13 illustrates one embodiment of the invention including 256-bit multiplication logic;[0021] FIG. 14 illustrates another embodiment of the invention including 256-bit multiplication logic;[0022] FIG. 15 illustrates another embodiment of the invention including 256-bit multiplication logic utilizing an immediate value to identify a source operand;[0023] FIG. 16 illustrates a set of multipliers and adders used to implement one embodiment of the invention; and[0024] FIG. 17 illustrates a method in accordance with one embodiment of the invention.DETAILED DESCRIPTION[0025] In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. It will be apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the embodiments of the invention.EXEMPLARY PROCESSOR ARCHITECTURES AND DATA TYPES[0026] An instruction set includes one or more instruction formats.
A given instruction format defines various fields (number of bits, location of bits) to specify, among other things, the operation to be performed (opcode) and the operand(s) on which that operation is to be performed. Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, October 2011; and see Intel® Advanced Vector Extensions Programming Reference, June 2011).

Exemplary Instruction Formats

[0027] Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

A.
Generic Vector Friendly Instruction Format

[0028] A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

[0029] Figures 1A-1B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention. Figure 1A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the invention, while Figure 1B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the invention. Specifically, a generic vector friendly instruction format 100 is shown for which class A and class B instruction templates are defined, both of which include no memory access 105 instruction templates and memory access 120 instruction templates.
The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

[0030] While embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).

[0031] The class A instruction templates in Figure 1A include: 1) within the no memory access 105 instruction templates there is shown a no memory access, full round control type operation 110 instruction template and a no memory access, data transform type operation 115 instruction template; and 2) within the memory access 120 instruction templates there is shown a memory access, temporal 125 instruction template and a memory access, non-temporal 130 instruction template.
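The operand-length/element-width combinations enumerated in [0030] determine the element count of a vector operand by simple division. A minimal illustrative sketch (the function name is ours, not part of the specification):

```python
def element_count(vector_bytes: int, element_bits: int) -> int:
    """Number of data elements in a vector operand of the given length and width."""
    element_bytes = element_bits // 8
    return vector_bytes // element_bytes

# A 64 byte vector holds 16 doubleword (32-bit) or 8 quadword (64-bit) elements,
# matching the counts stated in [0030].
print(element_count(64, 32))  # -> 16
print(element_count(64, 64))  # -> 8
```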
The class B instruction templates in Figure 1B include: 1) within the no memory access 105 instruction templates there is shown a no memory access, write mask control, partial round control type operation 112 instruction template and a no memory access, write mask control, vsize type operation 117 instruction template; and 2) within the memory access 120 instruction templates there is shown a memory access, write mask control 127 instruction template.

[0032] The generic vector friendly instruction format 100 includes the following fields listed below in the order illustrated in Figures 1A-1B.

[0033] Format field 140 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

[0034] Base operation field 142 - its content distinguishes different base operations.

[0035] Register index field 144 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file.
While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

[0036] Modifier field 146 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 105 instruction templates and memory access 120 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

[0037] Augmentation operation field 150 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 168, an alpha field 152, and a beta field 154. The augmentation operation field 150 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.
[0038] Scale field 160 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

[0039] Displacement field 162A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

[0040] Displacement factor field 162B (note that the juxtaposition of displacement field 162A directly over displacement factor field 162B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 174 (described later herein) and the data manipulation field 154C. The displacement field 162A and the displacement factor field 162B are optional in the sense that they are not used for the no memory access 105 instruction templates and/or different embodiments may implement only one or none of the two.

[0041] Data element width field 164 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions).
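The two address generation expressions of [0038]-[0040] can be sketched as follows. This is an illustrative model only: the helper names are ours, and the memory access size N is passed in directly rather than derived from the full opcode field as the hardware does.

```python
def effective_address(base: int, index: int, scale: int, disp: int = 0) -> int:
    """Address generation per [0038]-[0039]: 2^scale * index + base + displacement."""
    return (2 ** scale) * index + base + disp

def effective_address_disp8n(base: int, index: int, scale: int,
                             disp_factor: int, n: int) -> int:
    """Variant per [0040]: the displacement factor is scaled by the memory
    access size N before being added to the address."""
    return (2 ** scale) * index + base + disp_factor * n

print(effective_address(0x1000, 4, 3))                # -> 4128 (0x1000 + 8*4)
print(effective_address_disp8n(0x1000, 0, 0, 2, 64))  # -> 4224 (0x1000 + 2*64)
```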
This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

[0042] Write mask field 170 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 170 allows for partial vector operations, including loads, stores, arithmetic, logical, etc.
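The merging- and zeroing-writemasking behaviors of [0042] can be sketched per element as below (an illustrative model, not the hardware implementation; names are ours):

```python
def apply_write_mask(dest, result, mask, zeroing):
    """Per [0042]: where the mask bit is 1, the destination element reflects
    the operation result; where it is 0, the element is either preserved
    (merging-writemasking) or set to 0 (zeroing-writemasking)."""
    out = []
    for d, r, bit in zip(dest, result, mask):
        if bit:
            out.append(r)                 # masked-in: take the operation result
        else:
            out.append(0 if zeroing else d)  # masked-out: zero or preserve
    return out

old = [1, 2, 3, 4]
new = [10, 20, 30, 40]
print(apply_write_mask(old, new, [1, 0, 1, 0], zeroing=False))  # -> [10, 2, 30, 4]
print(apply_write_mask(old, new, [1, 0, 1, 0], zeroing=True))   # -> [10, 0, 30, 0]
```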
While embodiments of the invention are described in which the write mask field's 170 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 170 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field's 170 content to directly specify the masking to be performed.

[0043] Immediate field 172 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate.

[0044] Class field 168 - its content distinguishes between different classes of instructions. With reference to Figures 1A-B, the contents of this field select between class A and class B instructions. In Figures 1A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 168A and class B 168B for the class field 168 respectively in Figures 1A-B).

Instruction Templates of Class A

[0045] In the case of the non-memory access 105 instruction templates of class A, the alpha field 152 is interpreted as an RS field 152A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 152A.1 and data transform 152A.2 are respectively specified for the no memory access, round type operation 110 and the no memory access, data transform type operation 115 instruction templates), while the beta field 154 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 105 instruction templates, the scale field 160, the displacement field 162A, and the displacement scale field 162B are not present.

No-Memory Access Instruction Templates - Full Round Control Type Operation

[0046] In the no memory access full round control type operation 110 instruction template, the beta field 154 is interpreted as a round control field 154A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 154A includes a suppress all floating point exceptions (SAE) field 156 and a round operation control field 158, alternative embodiments may encode both these concepts into the same field or only have one or the other of these concepts/fields (e.g., may have only the round operation control field 158).

[0047] SAE field 156 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 156 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.

[0048] Round operation control field 158 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero, and Round-to-nearest). Thus, the round operation control field 158 allows for the changing of the rounding mode on a per instruction basis.
In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 158 content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

[0049] In the no memory access data transform type operation 115 instruction template, the beta field 154 is interpreted as a data transform field 154B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

[0050] In the case of a memory access 120 instruction template of class A, the alpha field 152 is interpreted as an eviction hint field 152B, whose content distinguishes which one of the eviction hints is to be used (in Figure 1A, temporal 152B.1 and non-temporal 152B.2 are respectively specified for the memory access, temporal 125 instruction template and the memory access, non-temporal 130 instruction template), while the beta field 154 is interpreted as a data manipulation field 154C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 120 instruction templates include the scale field 160, and optionally the displacement field 162A or the displacement scale field 162B.

[0051] Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred being dictated by the contents of the vector mask that is selected as the write mask.

Memory Access Instruction Templates - Temporal

[0052] Temporal data is data likely to be reused soon enough to benefit from caching.
This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates - Non-Temporal

[0053] Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

[0054] In the case of the instruction templates of class B, the alpha field 152 is interpreted as a write mask control (Z) field 152C, whose content distinguishes whether the write masking controlled by the write mask field 170 should be a merging or a zeroing.

[0055] In the case of the non-memory access 105 instruction templates of class B, part of the beta field 154 is interpreted as an RL field 157A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 157A.1 and vector length (VSIZE) 157A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 112 instruction template and the no memory access, write mask control, VSIZE type operation 117 instruction template), while the rest of the beta field 154 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 105 instruction templates, the scale field 160, the displacement field 162A, and the displacement scale field 162B are not present.

[0056] In the no memory access, write mask control, partial round control type operation 112 instruction template, the rest of the beta field 154 is interpreted as a round operation field 159A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).

[0057] Round operation control field 159A - just as round operation control field 158, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero, and Round-to-nearest). Thus, the round operation control field 159A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 159A content overrides that register value.

[0058] In the no memory access, write mask control, VSIZE type operation 117 instruction template, the rest of the beta field 154 is interpreted as a vector length field 159B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).

[0059] In the case of a memory access 120 instruction template of class B, part of the beta field 154 is interpreted as a broadcast field 157B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 154 is interpreted as the vector length field 159B.
The memory access 120 instruction templates include the scale field 160, and optionally the displacement field 162A or the displacement scale field 162B.

[0060] With regard to the generic vector friendly instruction format 100, a full opcode field 174 is shown including the format field 140, the base operation field 142, and the data element width field 164. While one embodiment is shown where the full opcode field 174 includes all of these fields, the full opcode field 174 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 174 provides the operation code (opcode).

[0061] The augmentation operation field 150, the data element width field 164, and the write mask field 170 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.

[0062] The combination of the write mask field and the data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.

[0063] The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes.
For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.

B. Exemplary Specific Vector Friendly Instruction Format

[0064] Figure 2 is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention. Figure 2 shows a specific vector friendly instruction format 200 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 200 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX).
This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 1 into which the fields from Figure 2 map are illustrated.

[0065] It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format 200 in the context of the generic vector friendly instruction format 100 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 200 except where claimed. For example, the generic vector friendly instruction format 100 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 200 is shown as having fields of specific sizes. By way of specific example, while the data element width field 164 is illustrated as a one bit field in the specific vector friendly instruction format 200, the invention is not so limited (that is, the generic vector friendly instruction format 100 contemplates other sizes of the data element width field 164).

[0066] The specific vector friendly instruction format 200 includes the following fields listed below in the order illustrated in Figure 2A.

[0067] EVEX Prefix (Bytes 0-3) 202 - is encoded in a four-byte form.

[0068] Format Field 140 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 140 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).

[0069] The second through fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

[0070] REX field 205 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), an EVEX.X bit field (EVEX Byte 1, bit [6] - X), and an EVEX.B bit field (EVEX Byte 1, bit [5] - B).
The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

[0071] REX' field 110 - this is the first part of the REX' field 110 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

[0072] Opcode map field 215 (EVEX Byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3).

[0073] Data element width field 164 (EVEX Byte 2, bit [7] - W) - is represented by the notation EVEX.W.
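Because EVEX.R' and EVEX.R are stored bit-inverted per [0070]-[0071], forming the 5-bit register specifier R'Rrrr involves complementing those prefix bits before concatenating them with the 3-bit rrr field. A minimal sketch (function name ours):

```python
def full_register_index(evex_r_prime: int, evex_r: int, rrr: int) -> int:
    """Form R'Rrrr from the bit-inverted EVEX.R'/EVEX.R prefix bits and the
    3-bit rrr field, per [0070]-[0071]."""
    return ((evex_r_prime ^ 1) << 4) | ((evex_r ^ 1) << 3) | (rrr & 0b111)

# ZMM0: both prefix bits are set in the encoding (inverted form), rrr = 000.
print(full_register_index(1, 1, 0b000))  # -> 0
# ZMM31: both prefix bits are clear in the encoding, rrr = 111.
print(full_register_index(0, 0, 0b111))  # -> 31
```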
EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

[0074] EVEX.vvvv 220 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form, and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 220 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

[0075] Class field 168 (EVEX.U) (EVEX Byte 2, bit [2] - U) - if EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

[0076] Prefix encoding field 225 (EVEX Byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification).
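The inverted (1s complement) storage of EVEX.vvvv described in [0074] can be sketched as a simple bitwise complement over 4 bits (function name ours):

```python
def decode_vvvv(vvvv: int) -> int:
    """Recover the 4 low-order bits of the first source register specifier
    from the inverted EVEX.vvvv field, per [0074]."""
    return (~vvvv) & 0b1111

print(decode_vvvv(0b1111))  # -> 0  (also the reserved "no operand" encoding)
print(decode_vvvv(0b1010))  # -> 5
```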
Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.

[0077] Alpha field 152 (EVEX Byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

[0078] Beta field 154 (EVEX Byte 3, bits [6:4] - SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

[0079] REX' field 110 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

[0080] Write mask field 170 (EVEX Byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

[0081] Real Opcode Field 230 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

[0082] MOD R/M Field 240 (Byte 5) includes MOD field 242, Reg field 244, and R/M field 246. As previously described, the MOD field's 242 content distinguishes between memory access and non-memory access operations.
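The runtime expansion of the 2-bit prefix encoding field ([0076]) into a legacy SIMD prefix byte can be sketched as a simple lookup. The pp-to-prefix mapping below (00 = none, 01 = 66H, 10 = F3H, 11 = F2H) follows the published VEX/EVEX encoding; it is not spelled out in the text above, so treat it as background rather than part of this specification.

```python
def expand_simd_prefix(pp: int):
    """Expand the 2-bit prefix encoding field into the legacy SSE SIMD prefix
    byte it compacts; None means no prefix (mapping per the published
    VEX/EVEX encoding scheme)."""
    return {0b00: None, 0b01: 0x66, 0b10: 0xF3, 0b11: 0xF2}[pp]

print(hex(expand_simd_prefix(0b01)))  # -> 0x66
print(expand_simd_prefix(0b00))       # -> None
```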
The role of Reg field 244 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 246 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

[0083] Scale, Index, Base (SIB) Byte (Byte 6) - as previously described, the scale field's 160 content is used for memory address generation. SIB.xxx 254 and SIB.bbb 256 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

[0084] Displacement field 162A (Bytes 7-10) - when MOD field 242 contains 10, bytes 7-10 are the displacement field 162A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

[0085] Displacement factor field 162B (Byte 7) - when MOD field 242 contains 01, byte 7 is the displacement factor field 162B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 162B is a reinterpretation of disp8; when using displacement factor field 162B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement but with a much greater range).
Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 162B substitutes the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 162B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).

[0086] Immediate field 172 operates as previously described.

Full Opcode Field

[0087] Figure 2B is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the full opcode field 174 according to one embodiment of the invention. Specifically, the full opcode field 174 includes the format field 140, the base operation field 142, and the data element width (W) field 164. The base operation field 142 includes the prefix encoding field 225, the opcode map field 215, and the real opcode field 230.

Register Index Field

[0088] Figure 2C is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the register index field 144 according to one embodiment of the invention.
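The disp8*N rule of the displacement factor field 162B described above (sign-extend the stored byte, then scale it by the memory operand size N) can be sketched as follows; this is an illustrative Python model, not an embodiment.

```python
# Illustrative sketch of the disp8*N compressed displacement described
# above: the stored 8-bit value is sign extended (range -128..127), then
# the hardware scales it by the size in bytes of the memory operand
# access (N) to obtain the byte-wise address offset.
def sign_extend8(byte):
    return byte - 256 if byte >= 128 else byte

def effective_displacement(disp8_byte, n):
    # n is instruction dependent, e.g. 64 for a full 512-bit operand
    return sign_extend8(disp8_byte) * n
```

With N = 64, the single encoded byte 0x01 thus yields a byte offset of 64, and 0xFF yields -64, covering a far greater range than the legacy byte-granular disp8 while keeping the one-byte encoding length.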
Specifically, the register index field 144 includes the REX field 205, the REX' field 210, the MODR/M.reg field 244, the MODR/M.r/m field 246, the VVVV field 220, the xxx field 254, and the bbb field 256.

Augmentation Operation Field

[0089] Figure 2D is a block diagram illustrating the fields of the specific vector friendly instruction format 200 that make up the augmentation operation field 150 according to one embodiment of the invention. When the class (U) field 168 contains 0, it signifies EVEX.U0 (class A 168A); when it contains 1, it signifies EVEX.U1 (class B 168B). When U=0 and the MOD field 242 contains 11 (signifying a no memory access operation), the alpha field 152 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 152A. When the rs field 152A contains a 1 (round 152A.1), the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as the round control field 154A. The round control field 154A includes a one bit SAE field 156 and a two bit round operation field 158. When the rs field 152A contains a 0 (data transform 152A.2), the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as a three bit data transform field 154B. When U=0 and the MOD field 242 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 152 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 152B and the beta field 154 (EVEX byte 3, bits [6:4]-SSS) is interpreted as a three bit data manipulation field 154C.

[0090] When U=1, the alpha field 152 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 152C.
When U=1 and the MOD field 242 contains 11 (signifying a no memory access operation), part of the beta field 154 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 157A; when the RL field 157A contains a 1 (round 157A.1) the rest of the beta field 154 (EVEX byte 3, bits [6:5] - S2-1) is interpreted as the round operation field 159A, while when the RL field 157A contains a 0 (VSIZE 157A.2) the rest of the beta field 154 (EVEX byte 3, bits [6:5] - S2-1) is interpreted as the vector length field 159B (EVEX byte 3, bits [6:5] - L1-0). When U=1 and the MOD field 242 contains 00, 01, or 10 (signifying a memory access operation), the beta field 154 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 159B (EVEX byte 3, bits [6:5] - L1-0) and the broadcast field 157B (EVEX byte 3, bit [4] - B).

C. Exemplary Register Architecture

[0091] Figure 3 is a block diagram of a register architecture 300 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 310 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 200 operates on this overlaid register file as illustrated in the below tables.

[0092] In other words, the vector length field 159B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 159B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 200 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data.
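The register overlay and vector-length selection just described (the low 256 bits of a zmm register alias the same-numbered ymm register, the low 128 bits alias xmm, and the vector length field 159B halves the operated-on width) can be modeled with a short illustrative sketch; register values are represented as Python integers purely for exposition.

```python
# Illustrative model of the overlaid register file described above: a
# single 512-bit storage cell per register, with the ymm/xmm views being
# the low 256/128 bits of the same cell.
class VectorRegFile:
    def __init__(self):
        self.zmm = [0] * 32          # 32 vector registers, 512 bits each

    def read(self, idx, vector_length_bits):
        # vector_length_bits is 512, 256, or 128 - each shorter length is
        # half the preceding one, as selected by vector length field 159B
        mask = (1 << vector_length_bits) - 1
        return self.zmm[idx] & mask

rf = VectorRegFile()
rf.zmm[3] = (0xAB << 300) | 0xFF     # bits above 256 plus low bits
assert rf.read(3, 128) == 0xFF       # xmm3 view: high bits invisible
```

Reading the same register at 256 or 128 bits simply truncates the view, which is why instruction templates without the vector length field operate on the maximum vector length.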
Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.

[0093] Write mask registers 315 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 315 are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

[0094] General-purpose registers 325 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

[0095] Scalar floating point stack register file (x87 stack) 345, on which is aliased the MMX packed integer flat register file 350 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

[0096] Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, less, or different register files and registers.

D. Exemplary Core Architectures, Processors, and Computer Architectures

[0097] Processor cores may be implemented in different ways, for different purposes, and in different processors.
For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput). Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

[0098] Figure 4A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 4B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention.
The solid lined boxes in Figures 4A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

[0099] In Figure 4A, a processor pipeline 400 includes a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write back/memory write stage 418, an exception handling stage 422, and a commit stage 424.

[00100] Figure 4B shows processor core 490 including a front end unit 430 coupled to an execution engine unit 450, and both are coupled to a memory unit 470. The core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

[00101] The front end unit 430 includes a branch prediction unit 432 coupled to an instruction cache unit 434, which is coupled to an instruction translation lookaside buffer (TLB) 436, which is coupled to an instruction fetch unit 438, which is coupled to a decode unit 440. The decode unit 440 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 440 may be implemented using various different mechanisms.
Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 440 or otherwise within the front end unit 430). The decode unit 440 is coupled to a rename/allocator unit 452 in the execution engine unit 450.

[00102] The execution engine unit 450 includes the rename/allocator unit 452 coupled to a retirement unit 454 and a set of one or more scheduler unit(s) 456. The scheduler unit(s) 456 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 456 is coupled to the physical register file(s) unit(s) 458. Each of the physical register file(s) units 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 458 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 458 is overlapped by the retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.).
The retirement unit 454 and the physical register file(s) unit(s) 458 are coupled to the execution cluster(s) 460. The execution cluster(s) 460 includes a set of one or more execution units 462 and a set of one or more memory access units 464. The execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 456, physical register file(s) unit(s) 458, and execution cluster(s) 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

[00103] The set of memory access units 464 is coupled to the memory unit 470, which includes a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476. In one exemplary embodiment, the memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 472 in the memory unit 470.
The instruction cache unit 434 is further coupled to a level 2 (L2) cache unit 476 in the memory unit 470. The L2 cache unit 476 is coupled to one or more other levels of cache and eventually to a main memory.

[00104] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 400 as follows: 1) the instruction fetch 438 performs the fetch and length decoding stages 402 and 404; 2) the decode unit 440 performs the decode stage 406; 3) the rename/allocator unit 452 performs the allocation stage 408 and renaming stage 410; 4) the scheduler unit(s) 456 performs the schedule stage 412; 5) the physical register file(s) unit(s) 458 and the memory unit 470 perform the register read/memory read stage 414; the execution cluster 460 performs the execute stage 416; 6) the memory unit 470 and the physical register file(s) unit(s) 458 perform the write back/memory write stage 418; 7) various units may be involved in the exception handling stage 422; and 8) the retirement unit 454 and the physical register file(s) unit(s) 458 perform the commit stage 424.

[00105] The core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
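The stage-to-unit mapping of paragraph [00104] above can be tabulated as a small sketch; the labels below are descriptive strings of our own choosing, paraphrasing the reference numerals in the text, not identifiers from any implementation.

```python
# Illustrative table of the pipeline-400 stage to core-490 unit mapping
# described in [00104]; entries are (stage, performing unit(s)).
PIPELINE_STAGE_UNITS = [
    ("fetch / length decode (402, 404)", "instruction fetch 438"),
    ("decode (406)", "decode unit 440"),
    ("allocation / renaming (408, 410)", "rename/allocator unit 452"),
    ("schedule (412)", "scheduler unit(s) 456"),
    ("register read / memory read (414)",
     "physical register file(s) 458 + memory unit 470"),
    ("execute (416)", "execution cluster 460"),
    ("write back / memory write (418)",
     "memory unit 470 + physical register file(s) 458"),
    ("exception handling (422)", "various units"),
    ("commit (424)", "retirement unit 454 + physical register file(s) 458"),
]
```

Laid out this way, the nine stages of pipeline 400 in [0099] each have an owning unit (or units) in core 490.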
In one embodiment, the core 490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

[00106] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

[00107] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 434/474 and a shared L2 cache unit 476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

[00108] Figures 5A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

[00109] Figure 5A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 502 and with its local subset of the Level 2 (L2) cache 504, according to embodiments of the invention. In one embodiment, an instruction decoder 500 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 506 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 508 and a vector unit 510 use separate register sets (respectively, scalar registers 512 and vector registers 514) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 506, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

[00110] The local subset of the L2 cache 504 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 504. Data read by a processor core is stored in its L2 cache subset 504 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 504 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip.
Each ring data-path is 1012-bits wide per direction.

[00111] Figure 5B is an expanded view of part of the processor core in Figure 5A according to embodiments of the invention. Figure 5B includes an L1 data cache 506A, part of the L1 cache 506, as well as more detail regarding the vector unit 510 and the vector registers 514. Specifically, the vector unit 510 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 528), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 520, numeric conversion with numeric convert units 522A-B, and replication with replication unit 524 on the memory input. Write mask registers 526 allow predicating resulting vector writes.

[00112] Figure 6 is a block diagram of a processor 600 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 6 illustrate a processor 600 with a single core 602A, a system agent 610, and a set of one or more bus controller units 616, while the optional addition of the dashed lined boxes illustrates an alternative processor 600 with multiple cores 602A-N, a set of one or more integrated memory controller unit(s) 614 in the system agent unit 610, and special purpose logic 608.

[00113] Thus, different implementations of the processor 600 may include: 1) a CPU with the special purpose logic 608 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 602A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 602A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 602A-N being a large number of
general purpose in-order cores. Thus, the processor 600 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 600 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

[00114] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 606, and external memory (not shown) coupled to the set of integrated memory controller units 614. The set of shared cache units 606 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 612 interconnects the integrated graphics logic 608, the set of shared cache units 606, and the system agent unit 610/integrated memory controller unit(s) 614, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 606 and cores 602A-N.

[00115] In some embodiments, one or more of the cores 602A-N are capable of multi-threading. The system agent 610 includes those components coordinating and operating cores 602A-N. The system agent unit 610 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 602A-N and the integrated graphics logic 608. The display unit is for driving one or more externally connected displays.
[00116] The cores 602A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 602A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

[00117] Figures 7-10 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

[00118] Referring now to Figure 7, shown is a block diagram of a system 700 in accordance with one embodiment of the present invention. The system 700 may include one or more processors 710, 715, which are coupled to a controller hub 720. In one embodiment the controller hub 720 includes a graphics memory controller hub (GMCH) 790 and an Input/Output Hub (IOH) 750 (which may be on separate chips); the GMCH 790 includes memory and graphics controllers to which are coupled memory 740 and a coprocessor 745; the IOH 750 couples input/output (I/O) devices 760 to the GMCH 790. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 740 and the coprocessor 745 are coupled directly to the processor 710, and the controller hub 720 is in a single chip with the IOH 750.

[00119] The optional nature of additional processors 715 is denoted in Figure 7 with broken lines.
Each processor 710, 715 may include one or more of the processing cores described herein and may be some version of the processor 600.

[00120] The memory 740 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 720 communicates with the processor(s) 710, 715 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 795.

[00121] In one embodiment, the coprocessor 745 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 720 may include an integrated graphics accelerator.

[00122] There can be a variety of differences between the physical resources 710, 715 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

[00123] In one embodiment, the processor 710 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 710 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 745. Accordingly, the processor 710 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 745. Coprocessor(s) 745 accept and execute the received coprocessor instructions.

[00124] Referring now to Figure 8, shown is a block diagram of a first more specific exemplary system 800 in accordance with an embodiment of the present invention.
As shown in Figure 8, multiprocessor system 800 is a point-to-point interconnect system, and includes a first processor 870 and a second processor 880 coupled via a point-to-point interconnect 850. Each of processors 870 and 880 may be some version of the processor 600. In one embodiment of the invention, processors 870 and 880 are respectively processors 710 and 715, while coprocessor 838 is coprocessor 745. In another embodiment, processors 870 and 880 are respectively processor 710 and coprocessor 745.

[00125] Processors 870 and 880 are shown including integrated memory controller (IMC) units 872 and 882, respectively. Processor 870 also includes as part of its bus controller units point-to-point (P-P) interfaces 876 and 878; similarly, second processor 880 includes P-P interfaces 886 and 888. Processors 870, 880 may exchange information via a point-to-point (P-P) interface 850 using P-P interface circuits 878, 888. As shown in Figure 8, IMCs 872 and 882 couple the processors to respective memories, namely a memory 832 and a memory 834, which may be portions of main memory locally attached to the respective processors.

[00126] Processors 870, 880 may each exchange information with a chipset 890 via individual P-P interfaces 852, 854 using point to point interface circuits 876, 894, 886, 898. Chipset 890 may optionally exchange information with the coprocessor 838 via a high-performance interface 839.
In one embodiment, the coprocessor 838 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

[00127] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

[00128] Chipset 890 may be coupled to a first bus 816 via an interface 896. In one embodiment, first bus 816 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

[00129] As shown in Figure 8, various I/O devices 814 may be coupled to first bus 816, along with a bus bridge 818 which couples first bus 816 to a second bus 820. In one embodiment, one or more additional processor(s) 815, such as coprocessors, high-throughput MIC processors, GPGPU's, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 816. In one embodiment, second bus 820 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 820 including, for example, a keyboard and/or mouse 822, communication devices 827 and a storage unit 828 such as a disk drive or other mass storage device which may include instructions/code and data 830, in one embodiment. Further, an audio I/O 824 may be coupled to the second bus 820. Note that other architectures are possible.
For example, instead of the point-to-point architecture of Figure 8, a system may implement a multi-drop bus or other such architecture.

[00130] Referring now to Figure 9, shown is a block diagram of a second more specific exemplary system 900 in accordance with an embodiment of the present invention. Like elements in Figures 8 and 9 bear like reference numerals, and certain aspects of Figure 8 have been omitted from Figure 9 in order to avoid obscuring other aspects of Figure 9.

[00131] Figure 9 illustrates that the processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively. Thus, the CL 872, 882 include integrated memory controller units and include I/O control logic. Figure 9 illustrates that not only are the memories 832, 834 coupled to the CL 872, 882, but also that I/O devices 914 are also coupled to the control logic 872, 882. Legacy I/O devices 915 are coupled to the chipset 890.

[00132] Referring now to Figure 10, shown is a block diagram of a SoC 1000 in accordance with an embodiment of the present invention. Similar elements in Figure 6 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 10, an interconnect unit(s) 1002 is coupled to: an application processor 1010 which includes a set of one or more cores 602A-N and shared cache unit(s) 606; a system agent unit 610; a bus controller unit(s) 616; an integrated memory controller unit(s) 614; a set of one or more coprocessors 1020 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; and a display unit 1040 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 1020 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.
[00133] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
[00134] Program code, such as code 830 illustrated in Figure 8, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.
[00135] The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
[00136] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
[00137] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.
[00138] Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.
[00139] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof.
The instruction converter may be on processor, off processor, or part on and part off processor.
[00140] Figure 11 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 11 shows that a program in a high level language 1102 may be compiled using an x86 compiler 1104 to generate x86 binary code 1106 that may be natively executed by a processor with at least one x86 instruction set core 1116. The processor with at least one x86 instruction set core 1116 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1104 represents a compiler that is operable to generate x86 binary code 1106 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1116.
Similarly, Figure 11 shows that the program in the high level language 1102 may be compiled using an alternative instruction set compiler 1108 to generate alternative instruction set binary code 1110 that may be natively executed by a processor without at least one x86 instruction set core 1114 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1112 is used to convert the x86 binary code 1106 into code that may be natively executed by the processor without an x86 instruction set core 1114. This converted code is not likely to be the same as the alternative instruction set binary code 1110 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1112 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1106.
METHOD AND APPARATUS FOR PERFORMING BIG INTEGER ARITHMETIC OPERATIONS
[00141] Big-integer arithmetic (multiplication in particular) is used extensively for public key cryptography in protocols such as transport layer security (TLS). A sample list of algorithms that rely on big-integer arithmetic includes, but is not limited to, Elliptic Curve (EC) Cryptography, where it is used for Elliptic Curve Diffie-Hellman (ECDH) key exchanges and the Elliptic Curve Digital Signature Algorithm (ECDSA) signature scheme; and modular arithmetic based algorithms such as Rivest-Shamir-Adleman (RSA), Diffie-Hellman (DH), and the Digital Signature Algorithm (DSA).
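The algorithms listed above ultimately reduce to multiplying 256-bit operands into a 512-bit product. As a purely illustrative reference model (not from the patent, and not the hardware described below), Python's arbitrary-precision integers make the width contract of such a multiplication easy to state and check:

```python
# Reference model only: verifies that a 256x256-bit product always fits in
# 512 bits, which is the contract of the multiply instructions described below.
import secrets

def mul256to512(a: int, b: int) -> int:
    """Multiply two 256-bit unsigned integers, returning the 512-bit product."""
    assert 0 <= a < 2**256 and 0 <= b < 2**256
    r = a * b
    assert r < 2**512  # a < 2^256 and b < 2^256 imply a*b < 2^512
    return r

# Worst case: (2^256 - 1)^2 still fits in 512 bits.
assert mul256to512(2**256 - 1, 2**256 - 1) < 2**512
# Random spot check against Python's built-in big-integer multiply.
a, b = secrets.randbits(256), secrets.randbits(256)
assert mul256to512(a, b) == a * b
```

A hardware instruction implementing this contract in a few cycles, rather than via a software big-integer library, is the subject of the embodiments that follow.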
[00142] The use of elliptic curve cryptography (ECC) is now proliferating due to the high efficiency of these algorithms for implementing a Perfect Forward Secrecy TLS session. Currently, most of the EC-based signature and key exchange algorithms are performed over a 256 (or 255) bit prime field. These EC-based techniques can benefit greatly from a fast 256-bit multiplication routine such as described below.
[00143] One embodiment of the invention includes new instructions that perform a multiplication of two 256-bit integers, producing a 512-bit result, and also perform a square of a 256-bit integer, producing a 512-bit result. In addition, one embodiment of the invention includes an implementation that reuses the existing 52 x 52 -> 104-bit multipliers used in current architecture designs, with limited additional hardware. For example, such multipliers are currently found in the floating-point multiply-add (FMA) units in existing x86 architectures, which are currently utilized by the vpmadd52luq and vpmadd52huq instructions. Currently, CNL/ICL server processors include the FMA hardware (ports 1/5 executing at 4 cycles, with 1 cycle throughput). Effectively, 16 multipliers and adders are available to be used, although they are not exposed by the architecture.
[00144] As illustrated in Figure 12, embodiments of the invention may be implemented on an exemplary processor 1255 with a plurality of cores 0-N. In particular, each core includes a decode stage 1230 with 256-bit multiply instruction decode logic 1231 for decoding a 256-bit multiply instruction into a plurality of micro-operations to be executed by execution logic 1240.
In particular, the exemplary processor 1255 also includes 256-bit multiply instruction execution logic 1241 for executing the 256-bit multiply operation in accordance with the embodiments of the invention described below (e.g., using the multipliers and adders described below with respect to Figure 16).
[00145] In addition, each core 0-N includes a set of general purpose registers (GPRs) 1205, a set of vector registers 1206, and a set of mask registers 1207. In one embodiment, multiple vector data elements are packed into each vector register 1206, which may have a 512-bit width for storing two 256-bit values, four 128-bit values, eight 64-bit values, sixteen 32-bit values, etc. However, the underlying principles of the invention are not limited to any particular size/type of vector data. In one embodiment, the mask registers 1207 include eight 64-bit operand mask registers used for performing bit masking operations on the values stored in the vector registers 1206 (e.g., implemented as mask registers k0-k7 described above). However, the underlying principles of the invention are not limited to any particular mask register size/type.
[00146] The details of a single processor core ("Core 0") are illustrated in Figure 12 for simplicity. It will be understood, however, that each core shown in Figure 12 may have the same set of logic as Core 0. For example, each core may include a dedicated Level 1 (L1) cache 1212 and Level 2 (L2) cache 1211 for caching instructions and data according to a specified cache management policy. The L1 cache 1212 includes a separate instruction cache 1220 for storing instructions and a separate data cache 1221 for storing data. The instructions and data stored within the various processor caches are managed at the granularity of cache lines, which may be a fixed size (e.g., 64, 128, or 512 bytes in length).
Each core of this exemplary embodiment has an instruction fetch unit 1210 for fetching instructions from main memory 1200 and/or a shared Level 3 (L3) cache 1216; a decode unit 1230 for decoding the instructions (e.g., decoding program instructions into micro-operations or "uops"); an execution unit 1240 for executing the instructions; and a writeback unit 1250 for retiring the instructions and writing back the results.
[00147] The instruction fetch unit 1210 includes various well known components including a next instruction pointer 1203 for storing the address of the next instruction to be fetched from memory 1200 (or one of the caches); an instruction translation lookaside buffer (ITLB) 1204 for storing a map of recently used virtual-to-physical instruction addresses to improve the speed of address translation; a branch prediction unit 1202 for speculatively predicting instruction branch addresses; and branch target buffers (BTBs) 1201 for storing branch addresses and target addresses. Once fetched, instructions are then streamed to the remaining stages of the instruction pipeline, including the decode unit 1230, the execution unit 1240, and the writeback unit 1250. The structure and function of each of these units is well understood by those of ordinary skill in the art and will not be described here in detail to avoid obscuring the pertinent aspects of the different embodiments of the invention.
[00148] In one embodiment, the following set of instructions are decoded and executed by the 256-bit multiply instruction decode logic 1231 and the 256-bit multiply instruction execution logic 1241:
1. VMUL256TO512 ZMM1, YMM2, YMM3: One embodiment of this instruction multiplies the 256-bit numbers in source registers ymm2 and ymm3 and stores the 512-bit result in destination vector register zmm1 (all of which may be registers within the set of vector registers 1206).
2.
VMUL256TO512 ZMM1, ZMM2, ZMM3: One embodiment of this instruction multiplies the 256-bit numbers in the lower halves of the source vector registers zmm2 and zmm3 and stores the 512-bit result in destination vector register zmm1.
3. VMUL256TO512 ZMM1, ZMM2, ZMM3, IMM8: One embodiment of this instruction multiplies 256-bit numbers from source vector registers zmm2 and zmm3 and stores the 512-bit result in destination vector register zmm1. The multiplication may be implemented according to the following definition using the immediate value:
imm8 = 0x00 -> res = zmm2[255:0]*zmm3[255:0],
imm8 = 0x10 -> res = zmm2[511:256]*zmm3[255:0],
imm8 = 0x01 -> res = zmm2[255:0]*zmm3[511:256],
imm8 = 0x11 -> res = zmm2[511:256]*zmm3[511:256].
[00149] In other words, for an immediate value of 0x00, the 256-bit values are selected from bits 255:0 of source vector registers zmm2 and zmm3. For an immediate value of 0x10, the 256-bit values are selected from bits 511:256 of source vector register zmm2 and from bits 255:0 of source vector register zmm3. For an immediate value of 0x01, the 256-bit values are selected from bits 511:256 of source vector register zmm3 and from bits 255:0 of source vector register zmm2. Finally, for an immediate value of 0x11, the 256-bit values are selected from bits 511:256 of source vector registers zmm2 and zmm3.
[00150] Figure 13 illustrates one embodiment of the invention for implementing the first variation of the instruction, in which 256-bit multiplication logic 1300 multiplies a first 256-bit integer stored in a first 256-bit source register 1301 (e.g., YMM2 in one embodiment) with a second 256-bit integer stored in a second 256-bit source register 1302 (e.g., YMM3 in one embodiment).
The result of the multiplication is stored in a 512-bit destination register 1303 (e.g., ZMM1 in one embodiment).
[00151] Figure 14 illustrates one embodiment of the invention for implementing the second variation of the instruction, in which the 256-bit multiplication logic 1300 multiplies a first 256-bit integer stored in the lower half of a first 512-bit source vector register 1401 (e.g., encoded in bits 255:0 of ZMM2) with a second 256-bit integer stored in the lower half of a second 512-bit source vector register 1402 (e.g., encoded in bits 255:0 of ZMM3). The result of the multiplication is stored in a 512-bit destination register 1303 (e.g., ZMM1 in one embodiment).
[00152] Figure 15 illustrates another embodiment of the invention for implementing the third variation of the instruction. In this embodiment, the 256-bit source operands may be selected from either the lower or the upper half of the first source register 1401 and the second source register 1402. In one embodiment, the 256-bit multiplication logic selects the 256-bit source operands from either the lower or upper halves of the registers 1401-1402 in accordance with an immediate value 1500 (e.g., imm8) provided with the instruction. As mentioned above, if the immediate value is 0, then the source operands may be selected from the lower halves of both registers 1401-1402 (i.e., 255:0). If the immediate value is 1, then the first source operand is selected from the upper half of the first source register 1401 (511:256) and the second source operand is selected from the lower half of the second source register 1402 (255:0). If the immediate value is 2, then the first source operand is selected from the lower half of the first source register 1401 (255:0) and the second source operand is selected from the upper half of the second source register 1402 (511:256).
Finally, if the immediate value is 3, then the source operands may be selected from the upper halves of both registers 1401-1402 (i.e., 511:256).
[00153] Figure 16 illustrates one embodiment of the 256-bit multiplication logic 1300 which may be used to perform the above operations. The following discussion assumes that the multiplication operands are two 256-bit numbers, A' and B'. Instead of looking at the numbers as 4 digits in radix 2^64, they may be treated as 5 digits in radix 2^52. This allows multiplications to be performed in the floating point execution units 1600, 1610 which, in one embodiment, are capable of operating on 52-bit numbers (the mantissa of a double). In one embodiment, the same floating point units are implemented which are currently used to operate on integers for the vpmadd52luq and vpmadd52huq x86 instructions. Using this hardware, A' and B' may be determined as follows:
A' = A4*2^(52*4) + A3*2^(52*3) + A2*2^(52*2) + A1*2^(52*1) + A0
B' = B4*2^(52*4) + B3*2^(52*3) + B2*2^(52*2) + B1*2^(52*1) + B0
[00154] In this embodiment, A0, A1, A2, A3 as well as B0, B1, B2, B3 are exactly 52 bits long, while A4 and B4 are 48-bit integers. Multiplying the above values for A' and B' results in the following:
R = A4*B4*2^(52*8) + (A4*B3+A3*B4)*2^(52*7) + (A4*B2+A3*B3+A2*B4)*2^(52*6) + (A4*B1+A3*B2+A2*B3+A1*B4)*2^(52*5) + (A4*B0+A3*B1+A2*B2+A1*B3+A0*B4)*2^(52*4) + (A3*B0+A2*B1+A1*B2+A0*B3)*2^(52*3) + (A2*B0+A1*B1+A0*B2)*2^(52*2) + (A1*B0+A0*B1)*2^(52*1) + A0*B0
where each coefficient can be up to 107 bits long.
[00155] The purpose of the instruction is therefore to compute the coefficients, and then sum them correctly to produce the number R' = R in a 512-bit representation (radix 2^64). For this purpose, one embodiment of the invention introduces four new micro-operations (μops): mulassist1, mulassist2, mulassist3 and mulassist4:
mulassist1/2/3/4 tmpzmm, a256, b256
[00156] In operation, each mulassist* μop first transforms the input operands a256/b256 into radix 2^52 representation (e.g., using wiring and selection logic).
Each mulassist* μop then computes 4 x 105-bit values. Each of the four values is stored in a 128-bit lane of the resulting register as follows:
mulassist1: (A0B3+A1B2) || (A0B2+A1B1) || (A0B1+A1B0) || (A0B0)
mulassist2: (A2B3+A1B4) || (A1B3+A0B4) || (A3B0+A2B1) || (A2B0)
mulassist3: (A3B1+A2B2) || (A4B2+A3B3) || (A4B1+A3B2) || (A4B3+A3B4)
mulassist4: (A4B0) || (A2B4) || (A4B4) || 0
In the above representation, each 128-bit lane is separated by a || indicator. Lane 0 is the rightmost lane and Lane 3 is the leftmost lane, with Lanes 2 and 1 arranged sequentially in between.
[00157] In one embodiment, the multiplications and additions use the floating-point multiply-add (FMA) units available over execution ports 0 and 5 within the execution units 1240. Of course, the underlying principles of the invention are not limited to any particular set of execution ports. In some implementations, for example, the FMA units may be available on other ports.
[00158] Figure 16 illustrates the specific manner in which operations may be performed for mulassist1 using a set of multiplication units 1600 and adders 1610.
While details are shown only for mulassist1, the same basic principles may be implemented for mulassist2, mulassist3 and mulassist4.
[00159] As illustrated in Figure 16, using the A and B values provided above, 52x52-bit multiplier 1601 multiplies A1 and B2; multiplier 1602 multiplies A0 and B3; multiplier 1603 multiplies A1 and B1; multiplier 1604 multiplies A0 and B2; multiplier 1605 multiplies A1 and B0; multiplier 1606 multiplies A0 and B1; and multiplier 1607 multiplies A0 and B0.
[00160] The 104 x 104 adder 1611 then determines the sum of A1B2 and A0B3 and outputs the result to 128-bit Lane 3; adder 1612 determines the sum of A1B1 and A0B2 and outputs the result to 128-bit Lane 2; adder 1613 determines the sum of A1B0 and A0B1 and outputs the result to 128-bit Lane 1; and adder 1614 determines the sum of A0B0 and a value of zero to output A0B0 to 128-bit Lane 0.
[00161] In one embodiment, the four results may then be summed and transformed to a regular representation via additional hardware and micro-operations. Note that the actual order of the operands may be modified based on design considerations.
[00162] A method in accordance with one embodiment of the invention is illustrated in Figure 17. The method may be implemented within the architectures described above, but is not limited to any particular architecture.
[00163] At 1701, a 256-bit multiplication instruction is fetched from memory or read from a cache (e.g., such as vmul256to512 zmm1, zmm2, zmm3, imm8 or one of the other instructions highlighted above). At 1702, the first and second 256-bit integer operands are stored in first and second source registers, respectively.
For example, if the source operand registers are 512-bit vector registers (e.g., ZMM2), then the first and second 256-bit integer source operands may be stored in the upper or lower halves of the registers (e.g., based on the value of imm8 in the implementation described above).
[00164] At 1703, the first and second source operands are converted from a first radix representation to a second radix representation based on the size of the multiplier and adder hardware. For example, as mentioned above, for an implementation which utilizes 52-bit multipliers, instead of looking at the numbers as 4 digits in radix 2^64, they may be treated as 5 digits in radix 2^52. At 1704, a sequence of multiplication and addition operations is performed using the second radix representation to arrive at a result (see, e.g., Figure 16 and associated text). Finally, at 1705, the result is converted back to the first radix representation (e.g., and stored in a 512-bit destination register).
[00165] In the foregoing specification, the embodiments of the invention have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
[00166] Embodiments of the invention may include various steps, which have been described above. The steps may be embodied in machine-executable instructions which may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.
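The radix-conversion flow of Figure 17 (split each 256-bit operand into five radix-2^52 digits, accumulate the digit products, then recombine into the 512-bit result) can be checked with a short software model. The sketch below is illustrative only and is not the hardware implementation; the decomposition follows paragraphs [00153]-[00154]:

```python
# Software reference for the Figure 17 flow: radix-2^52 decomposition,
# coefficient accumulation, and recombination into a 512-bit result.
MASK52 = (1 << 52) - 1

def to_radix52(x: int) -> list[int]:
    """Split a 256-bit integer into 5 digits: four 52-bit digits and a 48-bit top digit."""
    return [(x >> (52 * i)) & MASK52 for i in range(5)]

def mul256_radix52(a: int, b: int) -> int:
    A = to_radix52(a)
    B = to_radix52(b)
    # coeff[k] = sum of Ai*Bj with i + j == k: each term fits the
    # 52x52 -> 104-bit multipliers; each coefficient is at most 107 bits.
    coeff = [0] * 9
    for i in range(5):
        for j in range(5):
            coeff[i + j] += A[i] * B[j]
    # Sum the coefficients at their radix-2^52 positions, recovering the
    # result in ordinary binary (radix-2^64 word) representation.
    return sum(c << (52 * k) for k, c in enumerate(coeff))

import secrets
a, b = secrets.randbits(256), secrets.randbits(256)
assert mul256_radix52(a, b) == a * b  # matches ordinary big-integer multiply
```

The hardware differs in how the coefficient sums are scheduled (the four mulassist* μops and the final recombination), but the arithmetic identity verified here is the same.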
[00167] As described herein, instructions may refer to specific configurations of hardware, such as application specific integrated circuits (ASICs) configured to perform certain operations or having a predetermined functionality, or software instructions stored in memory embodied in a non-transitory computer readable medium. Thus, the techniques shown in the Figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network element, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase-change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical or other form of propagated signals - such as carrier waves, infrared signals, digital signals, etc.). In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed as bus controllers). The storage device and signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device.
Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. In certain instances, well known structures and functions were not described in elaborate detail in order to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the claims which follow.
A method and apparatus for implementing incremental design changes. In various embodiments, primary outputs of a new design are compared for logical equivalence to corresponding primary outputs of a prior implementation. If the logic is equivalent, the implementation of the primary outputs from the prior implementation is reused to implement the corresponding primary outputs of the new design.
What is claimed is: 1. A computer-implemented method for implementing a second circuit design derived from a first circuit design, the first design embodied in a first implementation, and the first implementation and second design including primary outputs, comprising:comparing for logical equivalence corresponding primary outputs in the first implementation and the second design, and testing for user-specified re-synthesize attributes associated with the primary outputs of the second design; and for primary outputs of the second design having re-synthesize attributes, creating primary outputs in the second implementation without reusing implementation of the primary outputs from the first implementation, and for primary outputs of the second design that are logically equivalent to corresponding primary outputs in the first implementation, reusing implementation of the primary outputs from the first implementation as implementation of the primary outputs of the second design in a second implementation. 2. The method of claim 1, further comprising comparing for logical equivalence using formal verification techniques.3. The method of claim 2, further comprising comparing for logical equivalence using Boolean expression diagrams.4. The method of claim 1, further comprising expressing a primary output of the second design that is not present in the first implementation in terms of logic that is present in the first implementation.5. The method of claim 1, further comprising expressing primary outputs of the second design having logic changes from the first implementation in terms of logic that is present in the first implementation.6. 
The method of claim 5, further comprising:creating a combined netlist including primary outputs of the first implementation and the second design; adding all combinational nodes from the first implementation to the combined netlist and making each combinational node from the first implementation a primary output; and running an optimization tool on the combined netlist. 7. The method of claim 1, wherein if the second design includes an assignment of an output port to a global input/output pin, further comprising converting global signals mapped to the global input/output pin in the first design into non-global control signals in the second implementation.8. The method of claim 1, wherein if the second design includes an assignment of an input port to a global input/output pin, further comprising converting global signals mapped to the global input/output pin in the first design into non-global control signals in the second implementation.9. The method of claim 1, further comprising:creating a first design netlist from a first design specification using a first front-end synthesis tool; creating a second design netlist from a second design specification using a second front-end synthesis tool, wherein the second design specification describes the second circuit design; and using the second design netlist to compare for logical equivalence corresponding primary outputs in the first implementation and the second design. 10. The method of claim 9, wherein the first front-end synthesis tool is the same as the second front-end synthesis tool.11. The method of claim 9, wherein the first front-end synthesis tool is different from the second front-end synthesis tool.12. The method of claim 11, wherein the second design netlist is structurally different from the first design netlist.13. 
The method of claim 1, further comprising:optimizing logic paths of the second design that do not meet timing specifications; and reusing implementation of logic paths from the first implementation in the second implementation for logic paths that satisfy the timing specifications. 14. The method of claim 1, wherein the first implementation has associated therewith a first set of optimization criteria, and further comprising:optimizing selected portions of the second design specification in creating the second implementation, wherein the selected portions include portions having associated therewith a re-synthesize attribute and portions failing to meet a second set of optimization criteria associated with the second design specification. 15. An apparatus for implementing a second circuit design derived from a first circuit design, the first design embodied in a first implementation, and the first implementation and second design including primary outputs, comprising:means for comparing for logical equivalence corresponding primary outputs in the first implementation and the second design; means for testing for user-specified re-synthesize attributes associated with the primary outputs of the second design; means for creating, for primary outputs of the second design having re-synthesize attributes, primary outputs in the second implementation without reusing implementation of the primary outputs from the first implementation; and means for reusing, for primary outputs of the second design that are logically equivalent to corresponding primary outputs in the first implementation, implementation of the primary outputs from the first implementation as implementation of the primary outputs of the second design in a second implementation. 16. 
A system for incremental synthesis, comprising:a front-end synthesis tool configured and arranged to generate a netlist from an input design; a fitter tool configured and arranged to optimize and map the netlist and produce an implementation, the fitter tool further configured and arranged to read a prior implementation generated from a prior design, compare for logical equivalence primary outputs of the prior implementation and primary outputs of the netlist, and reuse implementation of primary outputs from the prior implementation as implementation of logically equivalent primary outputs of the netlist.
FIELD OF THE INVENTION
The present invention generally relates to the implementation of circuit designs, and more particularly to reusing selected portions of a prior circuit implementation to implement portions of a new circuit design.
BACKGROUND
The term "net" as used herein refers to a conductive region connecting components of a user's circuit design. For example, one net may connect the output terminal of an AND gate to the input terminal of another AND gate and to the input terminal of a flip-flop. An AND gate is one component type, and a flip-flop is another component type. An "instance" is a single occurrence of a component type. A "netlist" is a list of all the nets that connect the component instances of a user's design and the component instances connected by each net.
The circuit design process generally includes, in order, design entry, synthesis, optimization, device mapping, and place-and-route, along with functional and timing simulations to verify correctness. If an error is identified late in the process, much of the process may need to be repeated in order to integrate a design change.
One solution that avoids repeating the entire process of optimization, device mapping, and place-and-route is to re-implement only the parts of the design that changed from the previous design cycle. Although this solution may be fairly straightforward when schematics are used for design entry (because changes to a schematic cause very little change in a netlist), it is more difficult when the design has been generated through the use of high-level languages and synthesis. That is, a small design change in a high-level language may substantially impact the entire design process, and the new implementation may no longer meet the designer's timing requirements, fit the target device, or have the same pin assignments as the prior implementation. Thus, additional work may be required to address the problems that were introduced by a small design change.
It is desirable that significantly different implementations do not result from relatively small design changes, so that additional engineering costs are not incurred. An incremental design method that addresses the aforementioned problems, as well as other related problems, is therefore desirable.
SUMMARY OF THE INVENTION
The present invention generally relates to the implementation of circuit designs, and more particularly to testing for logical equivalence between portions of a new circuit design and portions of a prior circuit implementation and reusing selected portions of the prior circuit implementation to implement the new circuit design. By testing for logical equivalence instead of testing for structural equivalence, the present invention eliminates unnecessary repetition of stages of the design cycle such as optimization, device mapping, and place-and-route.
In various embodiments, the invention generally includes comparing for logical equivalence primary outputs of the new design to corresponding primary outputs of a prior implementation. If the logic is equivalent, the implementation of the primary outputs from the prior implementation is reused to implement the corresponding primary outputs of the new design.
In another embodiment, attributes can be associated with the primary outputs of the new design to selectively control whether elements of the prior implementation are used to implement the primary outputs.
Thus, a designer has control over which portions of a new design are to be implemented without regard to the prior implementation.
For outputs in the new design that are not present in the prior implementation and for outputs in the new design having logic that has changed from the prior implementation, the primary outputs of the new design are expressed in terms of the logic from the prior implementation, thereby reusing portions of the prior implementation for the new and changed primary outputs.
It will be appreciated that various other embodiments are set forth in the Detailed Description and Claims which follow.
BRIEF DESCRIPTION OF THE DRAWINGS
Various aspects and advantages of the invention will become apparent upon review of the following detailed description and upon reference to the drawings, in which:
FIG. 1 is a data flow diagram of a prior art process for implementing a circuit design;
FIG. 2 is a data flow diagram of a process for incremental synthesis in accordance with one embodiment of the invention;
FIGS. 3A and 3B illustrate two structurally different implementations of logically equivalent netlist equations;
FIG. 4 is a flowchart of a process for incrementally synthesizing a design, in accordance with one embodiment of the invention; and
FIG. 5 is a flowchart of a process for expressing logic of a primary output of specification Sn in terms of implementation C0.
DETAILED DESCRIPTION
Some of the terminology used in this description includes the following. An "output port" is a user-defined output, and an "input port" is a user-defined input. A "primary output" is the end point of any combinational logic. Primary outputs may therefore include output ports and the end points of paths ending at registers. A "primary input" is the starting point of any combinational logic.
Primary inputs therefore include input ports and register outputs used in logic.
A "design specification" refers to a user's design, which is specified, for example, in a hardware definition language (HDL). An initial design specification is referenced as S0, and the corresponding implementation is referenced as C0. A new specification derived from S0 is referenced as Sn, and the corresponding implementation is referenced as Cn. In order to reduce engineering time in producing implementation Cn, it is desirable to reuse as much of the implementation from C0 as is feasible in Cn. It will be appreciated that the saved engineering time includes both the implementation process and verification of the implementation. The process of verifying an entire design even though only a small change was made to the logic can be difficult and time-consuming.
In one embodiment, "implementation" refers to an optimized netlist, which is a result of optimizing logic in a netlist in accordance with various user-selected parameters, for example, limiting the number of inputs that can be used by any equation. The implementation also includes a mapping of the optimized logic to device or technology resources (e.g., programmable logic device (PLD) structures or discrete logic implemented in any process, such as CMOS). Optimizing and mapping processes as referred to in the embodiments described herein operate in accordance with algorithms known to those skilled in the art.
FIG. 1 is a data flow diagram of a prior art process for implementing a circuit design. The process is controlled by software elements that include front-end synthesis element 102, build element 104, and fitter element 106. Front-end synthesis element 102 inputs a circuit design specification, S0, as embodied in a hardware definition language, for example, and produces S0 netlist file 108. Build element 104 creates design database file 110 based on the input netlist and constraints associated with S0.
Fitter element 106 optimizes the logic of the design as embodied in the S0 database file based on logic optimization control options as set forth in the S0 design specification. Fitter element 106 also maps the optimized logic to the technology or device resources and outputs C0 design implementation file 114 and report file 116. Report file 116 indicates the results of generating the implementation. The optimized and mapped logic of design S0 is embodied in C0 design implementation file 114.
FIG. 2 is a data flow diagram of a process for incremental synthesis in accordance with an example embodiment of the invention. The process generally uses selected portions of a prior circuit implementation, C0, in creating a new circuit implementation Cn for a circuit design specification Sn. Furthermore, well-known formal logic verification techniques are used to test for logical equivalence between portions of the C0 implementation and Sn logic. It is assumed in this example that Sn is derived from circuit design specification S0. However, it will be appreciated that S0 and Sn could be more loosely related, such as reusing portions of S0 in Sn, or could be entirely unrelated.
Front-end synthesis element 102 creates Sn netlist file 202 based on the Sn design specification, and build element 104 creates a new database file 208 based on the Sn netlist. Fitter element 206 applies formal verification techniques to check for logical equivalence between corresponding portions of implementation file 210 (C0) and netlist database 208 (Sn). A new implementation (Cn) is generated, based on reused portions of C0 and new implementation for portions of Sn.
Rather than comparing for structural equivalence between netlists, for example, the present invention uses formal verification techniques to test for logical equivalence between a new design and a prior implementation.
Techniques that structurally compare netlists (S0 netlist to Sn netlist, for example) are unlikely to identify equivalences between netlists generated by different front-end synthesis tools. Furthermore, any optimization performed by front-end synthesis may result in structural differences between the netlists even though logically they may be the same.
FIGS. 3A and 3B illustrate two structurally different implementations of logically equivalent netlist equations. FIG. 3A is the structural form of the equation c=a'b+ab', and FIG. 3B is the structural form of the equation c=(ab+a'b')'. It will be appreciated that these netlist equations are two different implementations of the XOR function.
The primary outputs of Sn are compared to corresponding primary outputs in C0 using formal logic verification techniques. The corresponding primary outputs are those having the same names. If a primary output is present in Sn and cannot be located in C0, the primary output of Sn is expressed, wherever possible, in terms of existing logic from C0, and then synthesized. If the name of a primary output changes from C0 to Sn while the logic remains unchanged, the primary output having the new name is therefore expressed in terms of the same logic from C0. It will be appreciated, however, that if the primary output is also used as a primary input, then it may not be possible to express the fanouts of the primary input in terms of logic from C0.
Where the logic for a primary output has changed or a new primary output is introduced in Sn, the primary outputs of Sn are expressed, wherever possible, in terms of existing logic from C0. This process is described in more detail in the discussion that accompanies FIG. 5.
The following is an example where a primary output z is added to Sn relative to S0.
If
z = a'bdef + a'b'e' + a'b'f' + a'd'e' + a'd'f'
then synthesizing z without considering C0 may produce the following:
z = kla' + k'l'a'
k = b' + d'
l = e' + f'
However, if C0 includes
x = ab + bd
y = a'e' + a'f'
then expressing z in terms of C0 would produce
z = xy'a' + x'y
Where fitter element 206 finds that a primary output of Sn is logically equivalent to a primary output of C0, no logic optimization and mapping is necessary, and the logic from C0 is used to implement that portion of Sn.
In the context of CPLD synthesis, fitter element 206 also considers input and output ports that are assigned to global input/output (I/O) pins in Sn. For such assignments, any signals in C0 that were optimized to these global I/O pins are converted to non-global control signals in Cn.
FIG. 4 is a flowchart of a process for incrementally synthesizing a design, in accordance with one embodiment of the invention. A prior implementation C0 and a new design specification Sn are read at steps 262 and 264. It will be appreciated that in alternative embodiments the new design Sn could be read as either the netlist embodied in database file 208 or the netlist embodied in netlist file 202.
In the context of design synthesis for PLDs, there are global I/O pins and non-global I/O pins available on the device. If an output port of Sn is assigned to a non-global I/O pin, the implementation is unaffected. However, placement and routing are affected. If an output port of Sn is assigned to a global I/O pin, use of that pin is precluded for optimized global signals. Thus, if C0 contains a global signal optimized to the specified pin, the global signal is converted to a non-global control signal at step 266. Similarly, if an input port of Sn is assigned to a global I/O pin, any global signal optimized to the specified pin in C0 is converted to a non-global control signal.
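The re-expression example above (z in terms of the C0 outputs x and y) can be checked mechanically. The following sketch (Python, illustrative only; function names are invented, and a real fitter would use the BED/BDD-based formal verification described with FIG. 4 rather than truth-table enumeration) confirms both that the FIG. 3A/3B forms compute the same XOR function and that z = xy'a' + x'y reproduces the original sum-of-products form of z.

```python
from itertools import product

def equivalent(f, g, names):
    """Exhaustively compare two Boolean functions over the named inputs."""
    return all(f(**dict(zip(names, bits))) == g(**dict(zip(names, bits)))
               for bits in product([0, 1], repeat=len(names)))

# FIGS. 3A/3B: two structurally different forms of XOR.
fig3a = lambda a, b: (not a and b) or (a and not b)        # c = a'b + ab'
fig3b = lambda a, b: not ((a and b) or (not a and not b))  # c = (ab + a'b')'
assert equivalent(fig3a, fig3b, ["a", "b"])

def z_direct(a, b, d, e, f):
    # z = a'bdef + a'b'e' + a'b'f' + a'd'e' + a'd'f'
    na = not a
    return ((na and b and d and e and f) or (na and not b and not e)
            or (na and not b and not f) or (na and not d and not e)
            or (na and not d and not f))

def z_reused(a, b, d, e, f):
    # Expressed in terms of existing C0 logic x = ab + bd, y = a'e' + a'f':
    # z = xy'a' + x'y
    x = (a and b) or (b and d)
    y = (not a and not e) or (not a and not f)
    return (x and not y and not a) or (not x and y)

assert equivalent(z_direct, z_reused, ["a", "b", "d", "e", "f"])
print("both re-expressions verified")
```

Enumeration is exponential in the number of primary inputs, which is why canonical-form methods such as BDDs are used in practice.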
Step 268 separates the primary outputs (POs) of Sn into three categories: POs of Sn having the same names as POs in C0 (block 270), POs of Sn having names not found in C0 (block 272), and POs of Sn having a "re-synthesize" attribute (block 274). In the example embodiment, POs of Sn having the "re-synthesize" attribute are categorized as such, even if the name of a PO in Sn matches the name of a PO in C0. In other words, the user's specification of the re-synthesize attribute for a PO overrides reusing the implementation from C0.
For POs of Sn having the same names as POs in C0, the formal verification algorithm is applied at step 276 to each primary output to test for logical equivalence between the PO in the C0 implementation and the PO in the design Sn. For example, Boolean verification techniques involving Boolean Expression Diagrams (BEDs) are used to test for logical equivalence. In another embodiment, Binary Decision Diagrams (BDDs) are used. Those skilled in the art will recognize still other embodiments that use methods capable of converting a Boolean expression into canonical form.
Decision step 278 directs control to step 280 for POs in Sn that are logically equivalent to the POs in C0. At step 280, the optimized device mapping from C0 is used to implement the logically equivalent PO of Sn. Once the POs of Sn have been replaced, control is directed to decision step 282. If all the POs of Sn satisfy the timing specifications, control is directed to step 284 where conventional place and route processing is applied. For POs of Sn that fail timing specifications, control is directed to step 286 where those POs are implemented, i.e., optimized and mapped, without using logic from C0.
For POs in Sn that are not logically equivalent to the C0 implementation (see step 278), step 288 expresses the logic of the Sn PO in terms of the implementation in C0, as further described in FIG. 5.
The process of expressing a PO includes generating logic for that PO using existing logic from C0. Thus, at step 290, any new connection logic has to be implemented (i.e., optimized and mapped). POs that are present in Sn but not in C0 (block 272) also go through the process of expression in terms of C0 at steps 288 and 290.
The user can override implementation reuse of C0 for a PO of Sn. This override is accomplished with the "re-synthesize" attribute that can be associated with a primary output. When a PO has the re-synthesize attribute, the PO is implemented without regard to the logic from C0. This feature can be used to selectively optimize certain parts of a design. For example, if a designer finds that a primary output has too many levels of logic, the optimization parameters can be changed and the re-synthesize attribute can be applied to the PO. When the design specification is implemented, only the logic associated with the selected PO is implemented and the prior implementation is used for the other POs. With prior art software, the entire design is implemented without reuse, which may lead to a different, undesirable implementation. Block 274 illustrates the designation of POs with the re-synthesize attribute.
Other types of possible design changes are described below and are referenced as "use cases."
In a first use case, no changes are made to the logic of a design and some of the paths in the netlist do not meet existing, changed, or new timing specifications. Only those paths are optimized and mapped without regard to C0, to meet the timing specifications.
A new primary input added to Sn will also be referenced in logic in Sn. Thus, in this second use case, the logic of a PO of Sn is identified as having changed. The logic of the modified output is expressed in terms of implementation C0, and the logic that connects existing logic from C0 with the modified PO is optimized and mapped.
The use case in which
a primary input is removed is covered by the use cases above, because a primary input can only be removed as a result of a logic change that renders the input unused.
In a third use case, no logic changes are made to Sn, but the names of POs are changed, for example, by using a different front-end synthesis tool. The POs are identified as new. The logic is expressed in terms of C0, and because the logic is identical, the logic from C0 can be used without requiring any new connecting logic.
In another use case, no changes are made to the logic of Sn, and the names of primary inputs are changed, for example, by front-end synthesis. All affected POs are expressed in terms of C0, and the POs and connecting logic are optimized and mapped.
In addition to timing specifications, various other control options can be specified by the user in order to control the optimization and mapping of a design. When changes are made to these control options, only the logic with the re-synthesize attribute or the logic that does not meet timing specifications is affected by the control options and re-optimized and re-mapped, while the logic of unaffected elements is neither re-optimized nor re-mapped. Thus, the portions of an existing implementation that satisfy the user's requirements can be reused. Example control options include a maximum number of inputs for any equation, a maximum number of p-terms that can be used in any equation, whether multi-level logic optimization is applied, whether p-term control signals can be converted into global signals to improve timing or density, the default slew rate for all output ports, the default power mode of all macrocells, and whether user-specified pin assignments should be ignored or respected. These and other control options are recognized by those skilled in the art.
FIG. 5 is a flowchart of a process for expressing logic of primary outputs (POs) of Sn in terms of implementations from C0. At step 322, a netlist N is created from the POs of C0.
All combinational logic of C0 is added to N and made a primary output. At step 324, all POs in Sn that need to be expressed are added to netlist N. In another embodiment for expressing PLD logic, POs of Sn that are inputs to registers are excluded from the netlist N since they are not available independently of the register.
The netlist N is optimized at step 326. The optimization can be accomplished using a conventional optimization tool. Generally, if the logic of a PO in Sn is equivalent to the logic of a PO in C0, the logic from C0 is used. The optimizer may also use multiple primary outputs from the netlist created in step 322 to implement a PO of Sn, in addition to some glue logic, if there is no identical PO in C0.
At step 328, the implementation of the POs of Sn is extracted from the netlist N, and the implementation is added to the new implementation Cn. Control is then returned to step 288 of FIG. 4.
The present invention is believed to be applicable to a variety of processes for implementing circuit designs and has been found to be particularly applicable and beneficial in PLDs. While the present invention is not so limited, an appreciation of the present invention has been provided by way of specific examples involving PLDs. Other aspects and embodiments of the present invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and illustrated embodiments be considered as examples only, with a true scope and spirit of the invention being indicated by the following claims.
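The per-primary-output decision of FIG. 4 can be summarized in a few lines. In this sketch (Python; all names are hypothetical), a canonical truth table stands in for the BDD/BED canonical forms, and the three outcomes mirror blocks 270-274: reuse the C0 mapping, express the PO in terms of C0, or honor the user's re-synthesize override. It assumes every PO is a function of the same ordered input list, which a real tool would not require.

```python
from itertools import product

def canonical(fn, inputs):
    """Truth table as a tuple -- a stand-in for the BDD/BED canonical forms
    a real fitter would use for logical-equivalence checking."""
    return tuple(bool(fn(*bits)) for bits in product([0, 1], repeat=len(inputs)))

def categorize(po_name, sn_logic, sn_inputs, c0, resynthesize=frozenset()):
    """Classify one primary output of Sn per FIG. 4:
    'resynthesize' -> implement without regard to C0 (user override, block 274),
    'reuse'        -> logically equivalent; keep C0's optimized mapping (step 280),
    'express'      -> changed or new; re-express in terms of C0 logic (step 288)."""
    if po_name in resynthesize:
        return "resynthesize"
    prior = c0.get(po_name)
    if prior is not None and canonical(sn_logic, sn_inputs) == canonical(prior, sn_inputs):
        return "reuse"
    return "express"

# C0 implements c as a'b + ab'; Sn rewrites it as (ab + a'b')' and adds a new PO d.
c0 = {"c": lambda a, b: (not a and b) or (a and not b)}
sn = {"c": lambda a, b: not ((a and b) or (not a and not b)),
      "d": lambda a, b: a and b}
for name, logic in sn.items():
    print(name, categorize(name, logic, ("a", "b"), c0))
# 'c' is reused despite its new structure; 'd' must be expressed and implemented.
```

Passing `resynthesize={"c"}` would force the override branch for c, matching the attribute-based user control described above.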
In an embodiment, a user equipment (UE) groups a plurality of images. The UE displays a first image among the plurality of images, determines an object of interest within the first image and a desired level of zoom, and determines to lock onto the object of interest in association with one or more transitions between the plurality of images. The UE determines to transition to a second image among the plurality of images, and detects, based on the lock determination, the object of interest within the second image. The UE displays the second image by zooming-in upon the object of interest at a level of zoom that corresponds to the desired level of zoom.
1. A method of operating a user equipment (UE), comprising:grouping a plurality of images captured by the UE during an image capture session, wherein the grouping groups the plurality of images based on shared time characteristics and shared space characteristics; displaying a first image among the plurality of images; determining an object of interest within the first image and a desired level of zoom; determining to lock onto the object of interest in association with one or more transitions between the plurality of images; determining to transition to a second image among the plurality of images; detecting, based on the lock determination, the object of interest within the second image; and displaying the second image by zooming in upon the object of interest at a level of zoom that corresponds to the desired level of zoom.
2. The method of claim 1, wherein the detecting comprises:scanning the second image for the object of interest in conjunction with, or after, the transition to the second image.
3. The method of claim 2, wherein the scanning is limited to an area of interest within the second image.
4. The method of claim 3, wherein the area of interest is determined in conjunction with the lock determination.
5. The method of claim 1, wherein the shared time characteristics include the plurality of images being captured within a threshold time period of each other and/or within a defined time window.
6. The method of claim 1, wherein the shared time characteristics include the plurality of images being captured within a threshold time period of each other and/or within a defined time window, and wherein the shared space characteristics include the plurality of images being captured within a threshold distance of each other and/or within a defined geographic area.
7. The method of claim 1, further comprising:receiving a request to modify the desired level of zoom; transitioning, after the receiving, to a new image among the plurality of images; detecting the object of interest within the new image; and displaying the object of interest within the new image at a level of zoom that corresponds to the modified level of zoom.
8. The method of claim 1, further comprising:receiving a request to modify the object of interest; transitioning, after the receiving, to a new image among the plurality of images; detecting the modified object of interest within the new image; and displaying the modified object of interest within the new image at a level of zoom that corresponds to the desired level of zoom.
9. The method of claim 1, further comprising:receiving a request to modify both the desired level of zoom and the object of interest; transitioning, after the receiving, to a new image among the plurality of images; detecting the modified object of interest within the new image; and displaying the modified object of interest within the new image at a level of zoom that corresponds to the modified level of zoom.
10. The method of claim 1, further comprising:receiving a user designation of a subset of the plurality of images as an acceptable representation of the image capture session.
11. The method of claim 10, further comprising:compressing and/or deleting, from the plurality of images, any non-designated images that are not part of the subset.
12. The method of claim 1, further comprising:receiving a user designation of a subset of the plurality of images as an unacceptable representation of the image capture session.
13. The method of claim 12, further comprising:compressing and/or deleting each image that is part of the subset.
14. The method of claim 1, wherein the lock determination determines to lock onto a plurality of objects of interest.
15. The method of claim 14, wherein the detecting detects only one of the plurality of objects of interest within the second image.
16. The method of claim 15, wherein the displaying zooms in upon, and is centered upon, the one detected object of interest.
17. The method of claim 14, wherein the detecting detects at least two of the plurality of objects of interest within the second image.
18. The method of claim 17, wherein the displaying zooms in upon, and is centered upon, the at least two detected objects of interest.
19. The method of claim 1, wherein the first image is the first of the plurality of images displayed by the UE, or wherein the UE transitions to the first image from another image among the plurality of images, the lock determination not being in effect for the another image.
20. The method of claim 1, wherein there is a transition from the first image to the second image, or wherein there is a transition from the first image, with the object lock, to the second image.
21. The method of claim 1, wherein different images among the plurality of images are captured by different UEs, and wherein each image among the plurality of images that is not captured by the UE is shared with the UE to facilitate the grouping.
22. A user equipment (UE), comprising:means for grouping a plurality of images captured during an image capture session, wherein the means for grouping groups the plurality of images based on shared time characteristics and shared space characteristics; means for displaying a first image among the plurality of images; means for determining an object of interest within the first image and a desired level of zoom; means for determining to lock onto the object of interest in association with one or more transitions between the plurality of images; means for determining to transition to a second image among the plurality of images; means for detecting, based on the lock determination, the object of interest within the second image; and means for displaying the second image by zooming in upon the object of interest at a level of zoom that corresponds to the desired level of zoom.
23. The UE of claim 22, wherein the means for detecting comprises:means for scanning the second image for the object of interest in conjunction with, or after, the transition to the second image.
24. A user equipment (UE), comprising:at least one processor coupled to user interface output circuitry and configured to:group a plurality of images captured during an image capture session, wherein the at least one processor groups the plurality of images based on shared time characteristics and shared space characteristics; display a first image among the plurality of images; determine an object of interest within the first image and a desired level of zoom; determine to lock onto the object of interest in association with one or more transitions between the plurality of images; determine to transition to a second image among the plurality of images; detect, based on the lock determination, the object of interest within the second image; and display the second image by zooming in upon the object of interest at a level of zoom that corresponds to the desired level of zoom.
25. The UE of claim 24, wherein the at least one processor is further configured to:scan the second image for the object of interest in conjunction with, or after, the transition to the second image.
26. A non-transitory computer-readable medium having instructions stored thereon that, when executed by a user equipment (UE), cause the UE to perform operations, the instructions comprising:at least one instruction to cause the UE to group a plurality of images captured during an image capture session, wherein the at least one instruction to cause the UE to group causes the UE to group the plurality of images based on shared time characteristics and shared space characteristics; at least one instruction to cause the UE to display a first image among the plurality of images; at least one instruction to cause the UE to determine an object of interest within the first image and a desired level of zoom; at least one instruction to cause the UE to determine to lock onto the object of interest in association with one or more transitions between the plurality of images; at least one instruction to cause the UE to determine to transition to a second image among the plurality of images; at least one instruction to cause the UE to detect, based on the lock determination, the object of interest within the second image; and at least one instruction to cause the UE to display the second image by zooming in upon the object of interest at a level of zoom that corresponds to the desired level of zoom.
27. The non-transitory computer-readable medium of claim 26, further comprising:at least one instruction to cause the UE to scan the second image for the object of interest in conjunction with, or after, the transition to the second image.
Locking a group of images to a desired level of zoom and an object or area of interest between image transitions
CROSS REFERENCE TO RELATED APPLICATIONS
This patent application claims the benefit of U.S. Provisional Application No. 62/363,790, entitled "UPDATING METADATA FOR OFFLINE MEDIA FILES BASED ON CROWD-SOURCED METADATA INFORMATION OF A SOCIAL NETWORKING SERVICE, LOCKING A GROUP OF IMAGES TO A DESIRED LEVEL OF ZOOM AND AN OBJECT OR AREA OF INTEREST BETWEEN IMAGE TRANSITIONS, AND SELECTIVELY DELETING OR COMPRESSING A MEDIA FILE IN A LOCAL STORAGE DEVICE ON A USER EQUIPMENT BASED ON AT LEAST ONE ATTRIBUTE OF THE MEDIA FILE OR CONTEXTUAL INFORMATION RELATED TO THE MEDIA FILE," filed on July 18, 2016, which has the same inventors as this application, is assigned to the assignee hereof, and is hereby expressly incorporated herein by reference in its entirety.
TECHNICAL FIELD
Embodiments relate to locking a group of images to a desired level of zoom and an object of interest between image transitions.
BACKGROUND
It is common for users to capture images in bursts. For example, although the user may ultimately want to retain only a limited number of representative pictures (such as a picture of a group of people standing in front of a landmark, of a newborn baby, etc.), the user may take a relatively high number of pictures during an image capture session to ensure that at least one of the pictures will be satisfactory (for example, that all eyes in the image are open, etc.).
After the image capture session, the user will typically review the images captured during the session one by one on his/her image capture device, deleting unsatisfactory images, and so on. If the user is interested in a specific target feature present in most or all of the images (such as a specific person's face, a specific animal in a zoo, a specific cloud in the sky, etc.), the user may want to zoom in on the target feature to evaluate each image. In this case, the user may be required to manually zoom to the target feature each time he/she transitions to a new image from the image capture session.

Summary of the invention

An embodiment is directed to a method of operating a user equipment (UE), which includes: grouping a plurality of images; displaying a first image among the plurality of images; determining an object of interest and a desired zoom level in the first image; determining, in conjunction with one or more transitions between the plurality of images, to lock onto the object of interest; determining to transition to a second image among the plurality of images; detecting, based on the lock determination, the object of interest within the second image; and displaying the second image by zooming in on the object of interest at a zoom level corresponding to the desired zoom level.

Another embodiment is directed to a UE, which includes: means for grouping a plurality of images; means for displaying a first image among the plurality of images; means for determining an object of interest and a desired zoom level in the first image; means for determining, in conjunction with one or more transitions between the plurality of images, to lock onto the object of interest; means for determining to transition to a second image among the plurality of images; means for detecting, based on the lock determination, the object of interest within the second image; and means for displaying the second image by zooming in on the object of interest at a zoom level corresponding to the desired zoom level.

Another embodiment is directed to a UE that includes at least one processor coupled to user interface output circuitry and configured to: group a plurality of images; display a first image among the plurality of images; determine an object of interest and a desired zoom level in the first image; determine, in conjunction with one or more transitions between the plurality of images, to lock onto the object of interest; determine to transition to a second image among the plurality of images; detect, based on the lock determination, the object of interest within the second image; and display the second image by zooming in on the object of interest at a zoom level corresponding to the desired zoom level.

Another embodiment is directed to a non-transitory computer-readable medium having instructions stored thereon that, when executed by a UE, cause the UE to perform operations, the instructions including: at least one instruction to cause the UE to group a plurality of images; at least one instruction to cause the UE to display a first image among the plurality of images; at least one instruction to cause the UE to determine an object of interest and a desired zoom level in the first image; at least one instruction to cause the UE to determine, in conjunction with one or more transitions between the plurality of images, to lock onto the object of interest; at least one instruction to cause the UE to determine to transition to a second image among the plurality of images; at least one instruction to cause the UE to detect, based on the lock determination, the object of interest within the second image; and at least one instruction to cause the UE to display the second image by zooming in on the object of interest at a zoom level corresponding to the desired zoom level.

Description of the drawings

The following detailed description, considered in conjunction with the accompanying drawings, will provide a better understanding of the
embodiments of the present disclosure. The accompanying drawings are presented for illustrative purposes only and do not limit the present disclosure. In the accompanying drawings:

FIG. 1 illustrates a high-level system architecture of a wireless communication system according to an embodiment of the present disclosure.

FIG. 2 illustrates examples of user equipments (UEs) according to an embodiment of the present disclosure.

FIG. 3 illustrates a communication device that includes structural components according to an embodiment of the present disclosure.

FIG. 4 illustrates a server according to an embodiment of the present disclosure.

FIG. 5 illustrates a process of controlling how a series of images is displayed to a user according to an embodiment of the present disclosure.

FIGS. 6-7 illustrate example implementations of portions of the process of FIG. 5 according to embodiments of the present disclosure.

FIGS. 8-9 illustrate example implementations of the process of FIG. 5 according to embodiments of the present disclosure.

Detailed description

Aspects of the present invention are disclosed in the following description and related drawings directed to specific embodiments of the present invention. Alternative embodiments may be designed without departing from the scope of the present disclosure. In addition, well-known elements of the present disclosure will not be described in detail, or will be omitted, so as not to obscure relevant details of the present disclosure.

The words "exemplary" and/or "example" are used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" or "example" should not necessarily be construed as preferred or advantageous over other embodiments.
Likewise, the term "embodiments of the present disclosure" does not require that all embodiments of the present disclosure include the discussed feature, advantage, or mode of operation.

In addition, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be appreciated that the various actions described herein may be performed by specific circuits (e.g., application-specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of the two. In addition, the sequences of actions described herein can be regarded as being embodied entirely within any form of computer-readable storage medium in which a corresponding set of computer instructions is stored which, when executed, will cause an associated processor to perform the functionality described herein. Therefore, the various aspects of the present invention may be embodied in many different forms, all of which are contemplated to be within the scope of the claimed subject matter. In addition, for each of the embodiments described herein, the corresponding form of any such embodiment may be described herein as, for example, "logic configured to" perform the described action.

A client device, referred to herein as a user equipment (UE), may be mobile or stationary, and may communicate with a wired access network and/or a radio access network (RAN). As used herein, the term "UE" is interchangeably referred to as an "access terminal" or "AT", a "wireless device", a "subscriber device", a "subscriber terminal", a "subscriber station", a "user terminal" or "UT", a "mobile device", a "mobile terminal", a "mobile station", and variations thereof. In an embodiment, the UE may communicate with a core network via the RAN, and through the core network the UE may be connected to external networks such as the Internet.
Of course, other mechanisms for the UE to connect to the core network and/or the Internet are also possible, such as wired access networks, WiFi networks (e.g., based on IEEE 802.11, etc.), and so on. The UE can be embodied by any of a number of types of devices, including but not limited to cellular telephones, personal digital assistants (PDAs), pagers, laptop computers, desktop computers, PC cards, compact flash devices, external or internal modems, wireless or wireline phones, and so on. The communication link through which the UE can send signals to the RAN is called an uplink channel (e.g., a reverse traffic channel, a reverse control channel, an access channel, etc.). The communication link through which the RAN can send signals to the UE is called a downlink or forward link channel (e.g., a paging channel, a control channel, a broadcast channel, a forward traffic channel, etc.). As used herein, the term "traffic channel" (TCH) can refer to either an uplink/reverse or a downlink/forward traffic channel.

FIG. 1 illustrates a high-level system architecture of a wireless communication system 100 in accordance with an embodiment of the present disclosure. The wireless communication system 100 contains UEs 1...N. For example, in FIG. 1, UEs 1...2 are illustrated as cellular calling phones, UEs 3...5 are illustrated as cellular touchscreen phones or smartphones, and UE N is illustrated as a desktop computer or PC.

Referring to FIG. 1, UEs 1...N are configured to communicate with an access network (e.g., the RAN 120, an access point 125, etc.) over a physical communications interface or layer, shown in FIG. 1 as air interfaces 104, 106, 108 and/or a direct wired connection. The air interfaces 104 and 106 can comply with a given cellular communications protocol (e.g., CDMA, EVDO, eHRPD, GSM, EDGE, W-CDMA, LTE, etc.), while the air interface 108 can comply with a wireless IP protocol (e.g., IEEE 802.11).
The RAN 120 may include multiple access points that serve UEs via air interfaces (e.g., the air interfaces 104 and 106). An access point in the RAN 120 may be referred to as an access node or AN, an access point or AP, a base station or BS, a Node B, an eNodeB, and so on. These access points can be terrestrial access points (or ground stations) or satellite access points. The RAN 120 is configured to connect to a core network 140, which can perform a variety of functions, including bridging circuit-switched (CS) calls between UEs served by the RAN 120 and other UEs served by the RAN 120 or by an altogether different RAN, and can also mediate the exchange of packet-switched (PS) data with external networks such as the Internet 175.

The Internet 175, in some examples, includes a number of routing agents and processing agents (not shown in FIG. 1 for the sake of convenience). In FIG. 1, UE N is shown as connecting to the Internet 175 directly (i.e., separate from the core network 140, such as over an Ethernet connection or a WiFi or 802.11-based network). The Internet 175 can thereby serve to bridge packet-switched data communications between UEs 1...N via the core network 140. Also shown in FIG. 1 is an access point 125 that is separate from the RAN 120. The access point 125 may be connected to the Internet 175 independently of the core network 140 (e.g., via an optical communication system such as FiOS, a cable modem, etc.). The air interface 108 may serve UE 4 or UE 5 over a local wireless connection (e.g., IEEE 802.11 in one example). UE N is shown as a desktop computer with a wired connection to the Internet 175, such as a direct connection to a modem or router, which in one example can correspond to the access point 125 itself (e.g., a WiFi router with both wired and wireless connectivity).

Referring to FIG. 1, the social network server 170 is shown as connected to the Internet 175, the core network 140, or both.
The social network server 170 may be implemented as a plurality of structurally separate servers, or alternatively may correspond to a single server. As will be described in more detail below, the social network server 170 is configured to support social networking services (e.g., Facebook, Myspace, Google+, etc.) with respect to UEs that can connect to the social network server 170 via the core network 140 and/or the Internet 175.

FIG. 2 illustrates examples of UEs (i.e., client devices) in accordance with embodiments of the present disclosure. Referring to FIG. 2, UE 200A is illustrated as a calling telephone and UE 200B is illustrated as a touchscreen device (e.g., a smartphone, a tablet computer, etc.). As shown in FIG. 2, the external casing of UE 200A is configured with an antenna 205A, a display 210A, at least one button 215A (e.g., a PTT button, a power button, a volume control button, etc.), and a keypad 220A, among other components, as is known in the art. Also, the external casing of UE 200B is configured with a touchscreen display 205B, peripheral buttons 210B, 215B, 220B, and 225B (e.g., a power control button, a volume or vibrate control button, an airplane mode toggle button, etc.), and at least one front-panel button 230B (e.g., a Home button, etc.), among other components, as is known in the art. While not shown explicitly as part of UE 200B, UE 200B can include one or more external antennas and/or one or more integrated antennas built into the external casing of UE 200B, including but not limited to WiFi antennas, cellular antennas, satellite positioning system (SPS) antennas (e.g., global positioning system (GPS) antennas), and so on.

While internal components of UEs such as UEs 200A and 200B can be embodied with different hardware configurations, a basic high-level UE configuration for internal hardware components is shown as platform 202 in FIG. 2.
The platform 202 can receive and execute software applications, data and/or commands transmitted from the RAN 120 that may ultimately come from the core network 140, the Internet 175, and/or other remote servers and networks (e.g., the social network server 170, web URLs, etc.). The platform 202 can also independently execute locally stored applications without RAN interaction. The platform 202 can include a transceiver 206 operably coupled to an application-specific integrated circuit (ASIC) 208, or other processor, microprocessor, logic circuit, or other data processing device. The ASIC 208 or other processor executes the application programming interface (API) 210 layer that interfaces with any resident programs in the memory 212 of the wireless device. The memory 212 can be comprised of read-only or random-access memory (RAM and ROM), EEPROM, flash cards, or any memory common to computer platforms. The platform 202 can also include a local database 214 that can store applications not actively used in memory 212, as well as other data. The local database 214 is typically a flash memory cell, but can be any secondary storage device as known in the art, such as magnetic media, EEPROM, optical media, tape, floppy or hard disk, or the like.

Accordingly, embodiments of the present disclosure can include a UE (e.g., UE 200A, 200B, etc.) that includes the ability to perform the functions described herein. As will be appreciated by those skilled in the art, the various logic elements can be embodied in discrete elements, software modules executed on a processor, or any combination of software and hardware to achieve the functionality disclosed herein. For example, the ASIC 208, the memory 212, the API 210, and the local database 214 may all be used cooperatively to load, store, and execute the various functions disclosed herein, and thus the logic to perform these functions may be distributed over various elements.
Alternatively, the functionality could be incorporated into one discrete component. Therefore, the features of UEs 200A and 200B in FIG. 2 are to be considered merely illustrative, and the present disclosure is not limited to the illustrated features or arrangement.

The wireless communication between UEs 200A and/or 200B and the RAN 120 can be based on different technologies, such as CDMA, W-CDMA, time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal frequency division multiplexing (OFDM), GSM, or other protocols that may be used in a wireless communications network or a data communications network. As discussed in the foregoing and known in the art, a variety of networks and configurations can be used to transmit voice and/or data from the RAN to the UE. Accordingly, the illustrations provided herein are not intended to limit the embodiments of the present disclosure and are merely to aid in the description of aspects of embodiments of the present disclosure.

FIG. 3 illustrates a communication device 300 that includes structural components in accordance with an embodiment of the present disclosure. The communication device 300 can correspond to any of the above-noted communication devices, including but not limited to UEs 1...N, UEs 200A and 200B, any component included in the RAN 120 such as a base station, access point or eNodeB, any component of the core network 140, any component coupled to the Internet 175 (e.g., the social network server 170), and so on. Thus, the communication device 300 can correspond to any electronic device that is configured to communicate with (or facilitate communication with) one or more other entities over the wireless communication system 100 of FIG. 1.

Referring to FIG. 3, the communication device 300 includes a transceiver circuit 305 configured to receive and/or transmit information.
In an example, if the communication device 300 corresponds to a wireless communication device (such as UE 200A or UE 200B), the transceiver circuit 305 configured to receive and/or transmit information may include a wireless communication interface (such as Bluetooth, Wi-Fi, Wi-Fi Direct, Long Term Evolution (LTE) Direct, etc.), such as a wireless transceiver and associated hardware (such as an RF antenna, a modem, a modulator and/or demodulator, etc.). In another example, the transceiver circuit 305 configured to receive and/or transmit information may correspond to a wired communication interface (for example, a serial connection, a USB or Firewire connection, an Ethernet connection through which the Internet 175 can be accessed, etc.). Therefore, if the communication device 300 corresponds to some type of network-based server (such as the social network server 170), the transceiver circuit 305 configured to receive and/or transmit information may, in one example, correspond to an Ethernet card that connects the network-based server to other communication entities via the Ethernet protocol. In another example, the transceiver circuit 305 configured to receive and/or transmit information may include sensing or measurement hardware by which the communication device 300 can monitor its local environment (such as an accelerometer, a temperature sensor, a light sensor, an antenna for monitoring local RF signals, etc.). The transceiver circuit 305 configured to receive and/or transmit information may also include software that, when executed, permits the associated hardware of the transceiver circuit 305 configured to receive and/or transmit information to perform its reception and/or transmission functions.
However, the transceiver circuit 305 configured to receive and/or transmit information does not correspond to software alone; the transceiver circuit 305 configured to receive and/or transmit information relies at least in part upon structural hardware to achieve its functionality. Moreover, the transceiver circuit 305 configured to receive and/or transmit information may be referenced by language other than "receive" and "transmit", as long as the underlying function corresponds to a receive or transmit function. For example, functions such as obtaining, acquiring, retrieving, measuring, etc., may in certain contexts be performed by the transceiver circuit 305 configured to receive and/or transmit information as specific types of receive functions. In another example, functions such as sending, delivering, transmitting, forwarding, etc., may in certain contexts be performed by the transceiver circuit 305 configured to receive and/or transmit information as specific types of transmit functions. Other functions that correspond to other types of receive and/or transmit functions may also be performed by the transceiver circuit 305 configured to receive and/or transmit information.

Referring to FIG. 3, the communication device 300 further includes at least one processor 310 configured to process information. Example implementations of the types of processing that can be performed by the at least one processor 310 configured to process information include, but are not limited to, performing determinations, establishing connections, making selections between different information options, performing evaluations related to data, interacting with sensors coupled to the communication device 300 to perform measurement operations, converting information from one format to another (e.g., between different protocols, such as .wmv to .avi, etc.), and so on.
For example, the at least one processor 310 configured to process information may include a general-purpose processor, a DSP, an ASIC, a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the at least one processor 310 configured to process information may be any conventional processor, controller, microcontroller, or state machine. The at least one processor 310 configured to process information may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The at least one processor 310 configured to process information may also include software that, when executed, permits the associated hardware of the at least one processor 310 configured to process information to perform its processing functions. However, the at least one processor 310 configured to process information does not correspond to software alone; the at least one processor 310 configured to process information relies at least in part upon structural hardware to achieve its functionality. Moreover, the at least one processor 310 configured to process information may be referenced by language other than "process", as long as the underlying function corresponds to a process function. For example, functions such as evaluating, determining, calculating, identifying, etc., may in certain contexts be performed by the at least one processor 310 configured to process information as specific types of process functions.
Other functions that correspond to other types of process functions may also be performed by the at least one processor 310 configured to process information.

Referring to FIG. 3, the communication device 300 further includes a memory 315 configured to store information. In an example, the memory 315 configured to store information may include at least a non-transitory memory and associated hardware (e.g., a memory controller, etc.). For example, the non-transitory memory included in the memory 315 configured to store information may correspond to RAM, flash memory, ROM, erasable programmable ROM (EPROM), EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. The memory 315 configured to store information may also include software that, when executed, permits the associated hardware of the memory 315 configured to store information to perform its storage functions. However, the memory 315 configured to store information does not correspond to software alone; the memory 315 configured to store information relies at least in part upon structural hardware to achieve its functionality. Moreover, the memory 315 configured to store information may be referenced by language other than "store", as long as the underlying function corresponds to a store function. For example, functions such as caching, saving, etc., may in certain contexts be performed by the memory 315 configured to store information as specific types of store functions. Other functions that correspond to other types of store functions may also be performed by the memory 315 configured to store information.

Referring to FIG. 3, the communication device 300 further optionally includes a user interface output circuit 320 configured to present information. In an example, the user interface output circuit 320 configured to present information may include at least an output device and associated hardware.
For example, the output device may include a video output device (e.g., a display screen, a port that can carry video information such as USB, HDMI, etc.), an audio output device (e.g., a speaker, a port that can carry audio information such as a microphone jack, USB, HDMI, etc.), a vibration device, and/or any other device by which information can be formatted for output or actually output by a user or operator of the communication device 300. For example, if the communication device 300 corresponds to UE 200A and/or UE 200B as shown in FIG. 2, the user interface output circuit 320 configured to present information may include the display 226. In another example, the user interface output circuit 320 configured to present information may be omitted for certain communication devices, such as network communication devices that do not have a local user (e.g., network switches or routers, remote servers, etc.). The user interface output circuit 320 configured to present information may also include software that, when executed, permits the associated hardware of the user interface output circuit 320 configured to present information to perform its presentation functions. However, the user interface output circuit 320 configured to present information does not correspond to software alone; the user interface output circuit 320 configured to present information relies at least in part upon structural hardware to achieve its functionality. Moreover, the user interface output circuit 320 configured to present information may be referenced by language other than "present", as long as the underlying function corresponds to a presentation function. For example, functions such as displaying, outputting, prompting, conveying, etc., may in certain contexts be performed by the user interface output circuit 320 configured to present information as specific types of presentation functions.
Other functions that correspond to other types of presentation functions may also be performed by the user interface output circuit 320 configured to present information.

Referring to FIG. 3, the communication device 300 further optionally includes a user interface input circuit 325 configured to receive local user input. In an example, the user interface input circuit 325 configured to receive local user input may include at least a user input device and associated hardware. For example, the user input device may include buttons, a touchscreen display, a keyboard, a camera, an audio input device (e.g., a microphone or a port that can carry audio information such as a microphone jack, etc.), and/or any other device by which information can be received from a user or operator of the communication device 300. For example, if the communication device 300 corresponds to UE 200A or UE 200B as shown in FIG. 2, the user interface input circuit 325 configured to receive local user input may include the button 220A, the display 210A (if a touchscreen), etc. In another example, the user interface input circuit 325 configured to receive local user input may be omitted for certain communication devices, such as network communication devices that do not have a local user (e.g., network switches or routers, remote servers, etc.). The user interface input circuit 325 configured to receive local user input may also include software that, when executed, permits the associated hardware of the user interface input circuit 325 configured to receive local user input to perform its input reception function. However, the user interface input circuit 325 configured to receive local user input does not correspond to software alone; the user interface input circuit 325 configured to receive local user input relies at least in part upon structural hardware to achieve its functionality.
Moreover, the user interface input circuit 325 configured to receive local user input may be referenced by language other than "receive local user input", as long as the underlying function corresponds to receiving local user input. For example, functions such as obtaining, receiving, collecting, etc., may in certain contexts be performed by the user interface input circuit 325 configured to receive local user input as specific types of functions for receiving local user input. Other functions that correspond to other types of functions for receiving local user input may also be performed by the user interface input circuit 325 configured to receive local user input.

Referring to FIG. 3, while the configured structural components of 305 through 325 are shown in FIG. 3 as separate or distinct blocks that are implicitly coupled to each other via an associated communication bus (not shown expressly), it will be appreciated that the hardware and/or software by which the respective configured structural components of 305 through 325 perform their respective functionality can overlap in part. For example, any software used to facilitate the functionality of the configured structural components of 305 through 325 can be stored in the non-transitory memory associated with the memory 315 configured to store information, such that the configured structural components of 305 through 325 each perform their respective functionality (i.e., in this case, software execution) based in part upon the operation of the software stored by the memory 315 configured to store information. Likewise, hardware that is directly associated with one of the configured structural components of 305 through 325 can be borrowed or used by the other configured structural components of 305 through 325 from time to time.
For example, the at least one processor 310 configured to process information can format data into an appropriate format before the data is transmitted by the transceiver circuit 305 configured to receive and/or transmit information, such that the transceiver circuit 305 configured to receive and/or transmit information performs its functionality (i.e., in this case, transmission of data) based in part upon the operation of the structural hardware associated with the at least one processor 310 configured to process information.

The various embodiments may be implemented on any of a variety of commercially available server devices, such as the server 400 illustrated in FIG. 4. In an example, the server 400 may correspond to one example configuration of the social network server 170 described above. In FIG. 4, the server 400 includes a processor 401 coupled to volatile memory 402 and a large-capacity nonvolatile memory, such as a disk drive 403. The server 400 may also include a floppy disk drive, compact disc (CD) or DVD disc drive 406 coupled to the processor 401. The server 400 may also include a network access port 404 coupled to the processor 401 for establishing data connections with a network 407, such as a local area network coupled to other broadcast system computers and servers or to the Internet. In the context of FIG. 3, it will be appreciated that the server 400 of FIG. 4 illustrates one example implementation of the communication device 300, whereby the transceiver circuit 305 configured to transmit and/or receive information corresponds to the network access port 404 used by the server 400 to communicate with the network 407, the at least one processor 310 configured to process information corresponds to the processor 401, and the memory 315 configured to store information corresponds to any combination of the volatile memory 402, the disk drive 403, and/or the disc drive 406. Not explicitly shown in FIG.
4 are an optional user interface output circuit 320 configured to present information and an optional user interface input circuit 325 configured to receive local user input, which may or may not be included in the server 400. Thus, FIG. 4 helps to demonstrate that the communication device 300 may be implemented as a server, in addition to a UE as in FIG. 2.

It is common for users to capture images in bursts. For example, although the user may ultimately want to keep only a limited number of representative pictures (such as a picture of a group of people standing in front of a landmark, a newborn baby, etc.), the user may take a relatively high number of pictures in an image capture session to ensure that at least one of the pictures will be satisfactory (for example, all eyes in the image are open, etc.). After the image capture session, the user will typically review the images captured during the session one by one on his/her image capture device, deleting unsatisfactory images, and so on. If the user is interested in a specific target feature present in most or all of the images (such as a specific person's face, a specific animal in a zoo, a specific cloud in the sky, etc.), the user may want to zoom in on the target feature to evaluate each image. In this case, the user may be required to manually zoom to the target feature each time he/she transitions to a new image from the image capture session.

FIG. 5 illustrates a process of controlling how a series of images is displayed to a user according to an embodiment of the present disclosure. FIGS. 6-7 illustrate example implementations of portions of the process of FIG. 5 according to embodiments of the present disclosure.

Referring to FIG. 5, at block 500, the UE groups a plurality of images. In an example, the image grouping of block 500 may occur via selection of image thumbnails from a photo library, resulting in the selected image thumbnails being highlighted, as in the photo library 600 of FIG. 6 or the photo library 705 of FIG.
7 (after the image thumbnails are selected from the photo library 700). In another example, some or all of the grouping of the images at block 500 may be performed automatically by the UE (for example, if the user captures a burst of images separated from each other by less than a threshold amount of time, e.g., a few seconds, the UE may group them automatically without user interaction). Next, at block 505, the UE displays a first image among the multiple images. Displaying the first image at block 505 may start at a default zoom level, as depicted in images 710-715 of FIG. 7.

While the UE displays the first image, at block 510, the UE determines at least one object of interest (such as a human face, a pet, an object, eyes, a breaking wave, etc.) and a desired zoom level in the first image. The UE then, at block 515, determines to lock onto the object of interest across one or more transitions between the multiple images. In an example, the determination of block 510 may be based on the user converging on a specific part of the first image, as depicted in image 720, where the user zooms in on the dog's eyes, and the lock determination of block 515 is made in response to the user's final magnified position in the first image (for example, the dog's eyes are centered), so that the object of interest can be confirmed. Alternatively, the lock determination of block 515 may be based on the user clicking or tapping on the object of interest, regardless of zoom level (for example, the user taps on the dog's eyes in image 720, which sets the zoom lock on the dog's eyes irrespective of the current zoom state). Blocks 510-515 can optionally be combined to determine a region of interest.

In an example, the desired zoom level may be indicated as the current zoom level (for example, 150%, 250%, etc.)
at which the UE has zoomed in on the first image, or may be linked to the at least one object (and/or region) of interest (for example, the user selects a human face as the at least one object, where the desired zoom level corresponds to whatever zoom percentage is necessary to show the human face). In another alternative example, the user can optionally select an absolute pixel area of the first image as the region of interest, wherein the UE is configured to lock onto the at least one object of interest after an image transition only if the at least one object of interest is located within the defined absolute pixel area.

In the example of block 510, the at least one object of interest may include multiple objects of interest. For example, the user may be interested in seeing multiple faces across the image group (such as the face of a baby, the face of the mother, etc.), so the user wants to zoom in as much as possible during the image transitions while still being able to view each of the multiple faces. In one example, in this scenario, the desired zoom level may be the highest zoom level at which each of the multiple faces remains viewable (for example, in combination with optional centering of the face positions, such that there is a gap between the edge of the image and the face positions).
In another example, if only one of the multiple faces is detected in a particular image, then the desired zoom level zooms in on that face alone (for example, combined with optional centering of the face position, so that a gap is left between the edge of the image and the face position).

In the example of blocks 510-515, the UE may recommend to the user one or more objects (and/or regions) of interest and/or the desired zoom level, which the user then accepts or modifies (for example, the UE recommends an absolute pixel area, and the user drags the absolute pixel area to cover the desired area of the first image, fine-tunes the zoom level, etc.). Alternatively, the user can initiate the selection of the object (and/or region) of interest and/or the desired zoom level. Alternatively, the UE may automatically determine the at least one object (and/or region) of interest and/or the desired zoom level on behalf of the user (although the user may then override these automatically loaded settings if desired).

Referring to FIG. 5, at block 520, the UE determines to transition to a second image among the multiple images. In an example, the determination of block 520 may correspond to a direct transition from the first image to the second image. In an alternative example, the determination of block 520 may occur after the UE has transitioned to some other image (e.g., among the multiple images), or even to a completely different application (e.g., a web browser, an email application, etc.). In other words, even if the user does not transition directly from the first image to the second image, the lock determination of block 515 can remain in effect.
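The multi-face zoom selection described above (choose the highest zoom level at which every detected face remains viewable, with a gap between the faces and the image edge) can be sketched as follows. This is a minimal illustration under assumed names; `Box`, `union`, and the 10% margin are hypothetical stand-ins, not part of the disclosed device.

```python
from dataclasses import dataclass

@dataclass
class Box:
    # Hypothetical face bounding box: top-left corner plus width/height.
    x: float
    y: float
    w: float
    h: float

def union(boxes):
    """Smallest box enclosing all detected faces."""
    x0 = min(b.x for b in boxes)
    y0 = min(b.y for b in boxes)
    x1 = max(b.x + b.w for b in boxes)
    y1 = max(b.y + b.h for b in boxes)
    return Box(x0, y0, x1 - x0, y1 - y0)

def desired_zoom(face_boxes, view_w, view_h, margin=0.1):
    """Highest zoom that keeps every face in view, leaving a margin
    between the image edge and the face positions (blocks 510-515)."""
    region = union(face_boxes)
    pad_w = region.w * (1 + 2 * margin)
    pad_h = region.h * (1 + 2 * margin)
    return min(view_w / pad_w, view_h / pad_h)
```

When only one of the faces is detected in a later image, the same routine naturally yields a higher zoom, since the enclosing region shrinks to that single face.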
In fact, in at least one embodiment, once an object lock is attached to the multiple images, the object lock can be applied whenever a transition to any of these images occurs (e.g., until the object lock is removed, which may happen in response to user input such as designation of a representative image, after a threshold time period, etc.). Of course, in other embodiments, the object lock may instead be cancelled whenever the user exits the photo library and/or transitions to an image in the photo library that is not part of the multiple images.

Referring to block 520 of FIG. 5, in an example, it is typical for the user to switch between images on the UE by swiping left or right on the screen of the UE. However, when the user has zoomed in on an image, swiping left or right usually shifts the displayed area of the current image without actually transitioning to another image. In at least one embodiment, after the UE determines the at least one object of interest and the desired zoom level in the first image at block 510, the UE may use a physical button that does not normally perform an image transition function (such as a home button, etc.) to determine the transition between the grouped images (for example, at block 520). For instance, pressing the home button will usually return the UE to the home screen, but it can instead be used to transition between images while the UE is locked onto a specific set of objects or regions of interest and the series of images is zoomed in.
In an alternative example, after the user indicates the at least one object of interest within the first image and the desired zoom level at block 510, the UE may determine to transition between the grouped images via user selection of a soft or virtual button (for example, at block 520).

Once the UE determines at block 520 to transition to the second image, the UE, at block 525, based on the lock determination of block 515, detects the object of interest in the second image (such as the dog's eyes, one or more faces of a specific person or crowd, etc.), and the UE, at block 530, displays the second image zoomed in on the object of interest at a zoom level corresponding to the desired zoom level (for example, as in image 725). As will be appreciated, this procedure allows the UE to stay locked onto a specific object at the target zoom level while quickly transitioning from image to image, without the user having to manually zoom in on the desired image portion every time a new image is loaded. Although not explicitly shown in FIG. 5, blocks 510-515 may be repeated for the same or different images among the grouped images to adjust the desired zoom level and/or the at least one object or region of interest, so the user can fine-tune how the images are displayed.

Blocks 520-530 may be repeated several times as the user reviews different images. Finally, although not explicitly shown in FIG. 5, the user can select one (or more than one) of the images as the "best" image(s) for a particular image capture session, as reflected in the photo library 610 of FIG. 6 and the photo library 730 of FIG. 7, where a thumbnail of the grouped images carries a check mark. The other images from the grouped images can then remain on the UE or be deleted based on the user's preference. Of course, it is also possible that the user does not like any of the grouped images, in which case no "best" image is selected.
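Blocks 520-530 reduce to a small decision routine: on a transition, re-run detection for the locked object and either reuse the locked zoom level or fall back to the default. The function signature and dictionary shapes below are hypothetical stand-ins for the UE's display and recognition logic, assumed for illustration only.

```python
def show_next_image(image, detect, zoom_lock):
    """On an image transition (block 520), re-detect the locked object
    (block 525) and choose how to display the new image (block 530).

    `detect` stands in for the device's object-recognition module; it
    returns the object's center coordinates, or None if the object is
    absent from this image.
    """
    center = detect(image, zoom_lock["object"])
    if center is None:
        # Object absent: fall back to the default zoom level.
        return {"zoom": 1.0, "center": None}
    # Object found: reuse the desired zoom level from the lock.
    return {"zoom": zoom_lock["zoom"], "center": center}
```

A caller would invoke this once per transition, so the user never has to re-zoom manually on each newly loaded image.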
In an example, failure to select any representative image can result in all images captured during the image capture session being compressed or deleted.

FIGS. 8-9 illustrate example implementations of the process of FIG. 5, according to embodiments of the present disclosure.

Referring to FIG. 8, at block 800, images 1...N are captured during an image capture session, and the UE groups images 1...N. Block 800 represents one way in which block 500 of FIG. 5 may be implemented. In an example of block 800, the UE's camera may group images 1...N based on shared time characteristics (e.g., the multiple images are captured within a threshold time period of each other and/or within a defined time window). In another example of block 800, images 1...N may be grouped based on the shared time characteristics in combination with shared space characteristics (for example, images captured within a threshold distance of each other and/or within a defined geographic area). For example, the UE may capture images 1...N between 7:01 and 7:04 PM on a particular day (for example, satisfying an example shared time characteristic of being captured within 5 minutes of each other), with those images also captured within 1000 meters of each other (for example, satisfying an example shared space characteristic of being captured within 1 mile of each other).

In another example, if the UE obtains another image that was captured at 7:02 PM but in a different location (e.g., from another UE, via download from a social networking service, via a remote UE sharing the image with the UE, etc.), then this image can be excluded from the grouping of block 800 because of the lack of shared space characteristics. In another example, if an image is captured at the same location but at a different time (e.g., an hour earlier, on a different day, week, or year, etc.), then this image can be excluded from the grouping of block 800 because of the lack of shared time characteristics.
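A minimal sketch of the shared-time/shared-space grouping of block 800, using the example thresholds above (5 minutes, 1000 meters). The tuple layout and the planar distance computation are illustrative assumptions, not the disclosed implementation; real capture metadata would come from each image's timestamp and location tags.

```python
from datetime import datetime, timedelta
from math import hypot

def group_session(images, max_gap=timedelta(minutes=5), max_dist_m=1000.0):
    """Group images into one capture session when each image was taken
    within `max_gap` of the previous one and within `max_dist_m` of it
    (shared time and space characteristics, block 800).

    `images` are (timestamp, x_m, y_m) tuples sorted by timestamp, with
    coordinates in a hypothetical local metric projection.
    """
    if not images:
        return []
    session = [images[0]]
    for prev, cur in zip(images, images[1:]):
        same_time = cur[0] - prev[0] <= max_gap
        same_place = hypot(cur[1] - prev[1], cur[2] - prev[2]) <= max_dist_m
        if same_time and same_place:
            session.append(cur)
        else:
            break  # first image lacking shared characteristics ends the session
    return session
```

An image captured at the same moment but far away, or at the same place an hour later, fails one of the two conditions and is excluded, matching the two exclusion examples above.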
In another example, assume the UE is operated by a user who has a group of friends, each with a corresponding UE, whereby the friends in the group each use their corresponding UE to capture images and share the images with one another. In this case, the shared images can satisfy the shared time and space characteristics even though they were captured by different UEs. Therefore, the fact that the UE did not itself capture each of images 1...N is not necessarily a disqualifying criterion for the grouping of block 800 (although alternative embodiments may restrict the grouping condition of block 800 more specifically to self-captured images).

Referring to FIG. 8, at block 805, the UE opens the photo library application and displays image 1 via the display. At block 810, assume the UE transitions from image 1 to image 2 (e.g., in response to the user clicking an arrow to shift to the next image, in response to the user swiping right or left on the UE's touchscreen, etc.). At block 815, instead of simply moving to the next image, the user provides user input (such as a double-touch or double-tap input on the touchscreen, a pinch or finger-spread input, etc.), causing the UE to zoom in on the part of image 2 depicting the faces of users A and B. Users A and B may correspond to acquaintances of the UE's user, or one of users A and B may correspond to the UE's user himself/herself (for example, if the UE captured image 2 in a selfie mode, or if the UE received image 2 from an external source). In an example, an object recognition module on the UE (which, in this facial-object example, would be a facial recognition module) can be used to detect and recognize the faces of users A and B as the objects of interest.

At block 820, the UE determines to lock onto the faces of users A and B across images 1...N at a target (or desired) zoom level.
As mentioned above, the desired zoom level can be inferred in different ways. For example, if the user zooms in to 150% zoom at block 815, then the desired zoom level can simply be set to 150%. In another example, the fact that multiple objects are recognized as objects of interest (such as the faces of users A and B) can be used to define the desired zoom level as the highest zoom level at which the faces of users A and B remain viewable, with the associated image centered around the faces of users A and B. In another example, the relative sizes of the faces of users A and B can be used to define the zoom level as whatever zoom level is necessary to view the faces of users A and B at those specific sizes in the other pictures (for example, if image 2 was taken far from users A and B while image 3 was taken from a much closer position, the absolute zoom level for image 3 would not need to be as high in order to view the corresponding faces at the same size relative to the display).

As will be appreciated, blocks 810, 815, and 820 represent an example implementation of blocks 505, 510, and 515 of FIG. 5, respectively. Therefore, the first image described with respect to block 505 need not be the first image to be viewed (for example, image 1 viewed first at block 805, with no object lock in effect), but rather any image in which the object of interest is detected at block 510, which results in the lock determination as in block 515.

Referring to FIG. 8, at block 825, a user input is received at the UE that causes the UE to transition from image 2 to image 3 (e.g., as in block 520 of FIG. 5). At block 830, based on the object lock determined at block 820, assume the UE scans image 3 (e.g., using the object recognition module) and detects only the face of user A (e.g., as in block 525 of FIG. 5). Therefore, at block 835, image 3 is displayed zoomed in on the face of user A at the target zoom level (e.g., as in block 530 of FIG. 5).
In an example, the target zoom level may differ when fewer than all of the objects of interest are detected. In one example, the faces of users A and B can be zoomed in on while keeping both faces in view (for example, resulting in 135% zoom, etc.), but if the face of only one of users A and B is detected, the zoom can be higher (e.g., 250% zoom to put that face in full-screen mode, etc.).

Referring to FIG. 8, at block 840, a user input is received at the UE that causes the UE to transition from image 3 to image 4 (e.g., as in block 520 of FIG. 5). At block 845, based on the object lock determined at block 820, assume the UE scans image 4 (for example, using the object recognition module) and detects the faces of both users A and B (for example, as in block 525 of FIG. 5). Therefore, at block 850, image 4 is displayed with the faces of users A and B zoomed in at the target zoom level (e.g., as in block 530 of FIG. 5). As described above, the target zoom level can differ when fewer than all of the objects of interest are detected.

Referring to FIG. 8, at block 855, instead of just moving to the next image, the user provides user input (such as a double-touch or double-tap input on the touchscreen, a pinch or finger-spread input, etc.), causing the UE to zoom in on the part of image 4 depicting only the face of user A. At block 860, the UE determines to update the object lock established at block 820 to a new object lock onto only the face of user A across images 1...N (e.g., as in blocks 510-515). At block 860, the target (or desired) zoom level may also be updated, or the target zoom level used when user A's is the only face detected may be applied, as described above with respect to block 835.

Referring to FIG. 8, at block 865, a user input is received at the UE that causes the UE to transition from image 4 to image 5 (e.g., as in block 520 of FIG. 5).
At block 870, based on the object lock determined at block 860, assume the UE scans image 5 (e.g., using the object recognition module) and detects the face of user A (e.g., as in block 525 of FIG. 5). Therefore, at block 875, image 5 is displayed with the face of user A zoomed in at the target zoom level (e.g., as in block 530 of FIG. 5).

FIG. 9 is a continuation of the process of FIG. 8, according to an embodiment of the present disclosure. Referring to FIG. 9, at block 900, instead of simply moving to the next image, the user provides user input (such as a click or tap, etc.) indicating selection of user A's eyes as the object of interest (e.g., as in block 510). In other words, the user of the UE indicates that not merely the face of user A but specifically user A's eyes are of primary interest. At block 905, the UE determines to update the object lock established at block 860 to a new object lock onto the eyes of user A across images 1...N (e.g., as in block 515). At block 905, the target (or desired) zoom level for zooming in on user A's eyes may also be updated.

Referring to FIG. 9, at block 910, a user input is received at the UE that causes the UE to transition from image 5 to image 6 (e.g., as in block 520 of FIG. 5). At block 915, based on the object lock determined at block 905, assume the UE scans image 6 (for example, using the object recognition module) and does not detect user A's eyes (for example, user A looks away from the camera in image 6, user A is not even depicted in image 6, etc.). In an example, the absence of any identified object of interest for the object lock may result in an immediate and automatic transition to the next image, as shown via the transition directly from image 6 to image 7 at block 920. In instances where images lacking any object of interest are automatically skipped, image 6 need not be displayed to the user at all.
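The automatic-skip alternative just described (blocks 915-920) can be sketched as a simple scan forward through the grouped images. The `detect` callback is a hypothetical stand-in for the object recognition module, assumed to return True when the locked object appears in an image.

```python
def next_with_object(images, start, detect, locked_object):
    """Advance from index `start` until an image containing the locked
    object is found (blocks 915-920); images without it are skipped
    without being displayed."""
    for idx in range(start, len(images)):
        if detect(images[idx], locked_object):
            return idx
    return None  # no remaining image shows the object of interest
```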
In another example, image 6 may be displayed briefly (for example, so that the user perceives that the image is being skipped). In another example, instead of automatically skipping images without any identified object of interest for the object lock, such images may simply be presented at the default zoom level.

Referring to FIG. 9, at block 925, based on the object lock determined at block 905, assume the UE scans image 7 (for example, using the object recognition module) and detects the face of user A and, more specifically, the eyes (for example, as in block 525 of FIG. 5). Therefore, at block 930, image 7 is displayed with the eyes of user A zoomed in at the target zoom level (e.g., as in block 530 of FIG. 5).

Referring to FIG. 9, at block 935, assume the UE provides an alert that temporarily causes a different application to load. For example, at block 935, the UE may receive a phone call that causes a phone application to load, an email alert that causes an email application to load, a text message that causes a messaging application to load, a news alert that causes a news application to load, etc. At block 940, the UE returns to the photo library and determines to display image 8 (e.g., automatically whenever the application from block 935 exits, via manual user operation, etc.).

Referring to FIG. 9, at block 945, based on the object lock determined at block 905, assume the UE scans image 8 (e.g., using the object recognition module) and detects the face of user A and, more specifically, the eyes (e.g., as in block 525 of FIG. 5). In this case, assume that user A's eyes are open (as opposed to closed). At block 950, image 8 is displayed zoomed in on user A's eyes at the target zoom level (e.g., as in block 530 of FIG. 5), similar to block 930. Therefore, in one example, the object lock can be maintained even if the UE temporarily transitions to a different application.
In an alternative example, exiting the image viewing application holding the object lock (such as a photo library application, etc.) can reset (or cancel) the object lock.

At block 955, while the UE is displaying image 8, the UE determines a region of interest in image 8. For example, the user can manually designate the extent of image 8 that is of interest (for example, the left side of image 8, the center of image 8, etc.). At block 960, the UE determines to update the object lock established at block 905 to a new object lock onto the eyes of user A across images 1...N that applies only when user A's eyes are open (for example, as in block 515), and only when user A's open eyes are located within the specific region of interest determined at block 955. For example, assume that the region of interest is the upper-right quadrant of image 8. In this case, subsequent images in which user A's eyes are not in the upper-right quadrant and/or are not open will cause the UE to determine that no object of interest is present in those particular images. In another example, the open-eyes condition of the object lock established at block 960 may be based on express user input, on passive monitoring of user behavior (for example, the user spends more time reviewing images of user A with open eyes than with closed eyes, etc.), or may be a default condition (for example, based on the general assumption that users are not interested in pictures in which the eyes of people important to them are closed, etc.). At block 960, the target (or desired) zoom level for zooming in on user A's eyes may also be updated.

Referring to FIG. 9, at block 965, a user input is received at the UE that causes the UE to transition from image 8 to image 9 (e.g., as in block 520 of FIG. 5). At block 970, based on the object lock determined at block 960, assume the UE scans the region of interest (for example, the upper-right quadrant, etc.)
in image 9 (for example, using the object recognition module), detects user A's eyes, and further detects that user A's eyes are open (e.g., as in block 525 of FIG. 5). At block 975, image 9 is displayed zoomed in at the target zoom level on user A's open eyes within the region of interest (e.g., as in block 530 of FIG. 5).

Referring to FIG. 9, at block 980, assume the user of the UE provides an input designating image 8 as the desired image that will serve as the representative of the image capture session. At block 985, any non-designated images may be compressed and/or deleted from the UE. In an alternative example, instead of a single image, a subset containing any number of images is designated at block 980 as representative of the image capture session. In another alternative example, instead of specifying which images are representative at block 980, the user may specify which subset of images constitutes an unacceptable representation of the image capture session (e.g., the user flags bad pictures instead of good pictures). In this case, the designated unacceptable images can be compressed and/or deleted from the UE.

Those skilled in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

Further, those skilled in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both.
To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The methods, sequences, and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal (e.g., a UE). In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

While the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps, and/or actions of the method claims in accordance with the embodiments of the invention described herein need not be performed in any particular order. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
Described herein are integrated circuit (IC) devices that include fin-based field-effect transistors (FinFETs) integrated over gate-all-around (GAA) transistors. The GAA transistors may serve to provide high-performance compute logic and may be relatively low-voltage transistors, while FinFETs may be more suitable than GAA transistors for providing high-voltage transistors and, therefore, may serve to provide peripheral logic for backend memory arrays implemented over the same support structure over which the GAA transistors and the FinFETs are provided. Such an arrangement may address the fundamental voltage incompatibility by integrating a mix of FinFETs and GAA transistors in a stacked complementary FET (CFET) architecture to enable embedded 1T-1X based memories.
1. An integrated circuit (IC) device, comprising:
a support structure;
a first layer, comprising a plurality of gate-all-around (GAA) transistors;
a second layer, comprising a plurality of fin-based field-effect transistors (FinFETs); and
a third layer, comprising a memory array that includes a plurality of memory cells, where an individual cell of the plurality of memory cells includes a transistor with a channel region comprising a thin-film semiconductor material,
wherein:
the first layer is between the support structure and the second layer, and
the second layer either at least partially overlaps with the third layer or is between the first layer and the third layer.
2. The IC device according to claim 1, wherein:
the plurality of FinFETs includes a first group of FinFETs and a second group of FinFETs,
an individual FinFET of the first group includes a gate dielectric of a first thickness,
an individual FinFET of the second group includes a gate dielectric of a second thickness, and
the second thickness is greater than the first thickness.
3. The IC device according to claim 2, wherein one or more of the FinFETs of the first group are coupled to one or more of the GAA transistors.
4. The IC device according to claim 2 or 3, wherein one or more of the FinFETs of the second group are coupled to one or more of the memory cells.
5. The IC device according to any one of the preceding claims, wherein an average grain size of the thin-film semiconductor material is smaller than about 0.1 millimeter.
6. The IC device according to any one of the preceding claims, wherein channel regions of the FinFETs include one or more semiconductor materials with an average grain size greater than about 1 millimeter.
7. The IC device according to any one of the preceding claims, wherein channel regions of the GAA transistors include one or more semiconductor materials with an average grain size greater than about 1 millimeter.
8. The IC device according to any one of the preceding claims, wherein the individual cell of the plurality of memory cells further includes a capacitor to store a bit value, the capacitor coupled to the transistor.
9. The IC device according to any one of the preceding claims, wherein the GAA transistors include nanoribbon transistors.
10. The IC device according to any one of the preceding claims, further comprising a bonding interface between the first layer and the second layer.
11. An electronic device, comprising:
a carrier substrate; and
one or more of the IC devices according to any one of the preceding claims, coupled to the carrier substrate.
12. The electronic device according to claim 11, wherein the carrier substrate is a motherboard or a printed circuit board.
13. The electronic device according to claim 11 or 12, wherein the electronic device is a wearable electronic device or a handheld electronic device.
14. A method of fabricating an integrated circuit (IC) device, the method comprising:
providing a first layer of transistors over a support structure, the first layer including a plurality of gate-all-around (GAA) transistors;
performing a layer transfer to provide a second layer of transistors over the first layer, the second layer including a plurality of fin-based field-effect transistors (FinFETs); and
providing a third layer over the second layer, the third layer including a plurality of memory cells, where an individual cell of the plurality of memory cells includes a transistor with a channel region comprising a thin-film semiconductor material.
15. The method according to claim 14, wherein the support structure is a first support structure, and wherein performing the layer transfer includes:
transferring a layer of a substantially single-crystalline semiconductor material grown on a second support structure to be over the first layer over the first support structure, and
forming the FinFETs using portions of the substantially single-crystalline semiconductor material transferred to be over the first layer over the first support structure as channel regions of the FinFETs.
Background

For the past several decades, the scaling of features in integrated circuits (ICs) has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows for the incorporation of an increased number of memory or logic devices on a chip, lending to the fabrication of products with increased capacity. The drive for the ever-increasing capacity, however, is not without issue. The necessity to optimize the performance of each IC die and each IC assembly or package that includes one or more dies becomes increasingly significant.

Brief Description of the Drawings

Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

FIG. 1 is a schematic illustration of an IC device that includes fin-based field-effect transistors (FinFETs) integrated over gate-all-around (GAA) transistors, according to some embodiments of the present disclosure.

FIG. 2 is a cross-sectional side view of an example IC device with FinFETs integrated over GAA transistors, according to some embodiments of the present disclosure.

FIG. 3 is a cross-sectional side view of an example IC device with FinFETs and backend memory integrated over GAA transistors, according to some embodiments of the present disclosure.

FIGS. 4A-4C are cross-sectional side views of gate stacks that could be used with any of the transistors of IC devices with FinFETs integrated over GAA transistors, according to some embodiments of the present disclosure.

FIG.
5 is a flow diagram of a method of manufacturing an IC device with FinFETs integrated over GAA transistors, according to some embodiments of the present disclosure.

FIGS. 6A-6B are top views of a wafer and dies that may include one or more IC devices with FinFETs integrated over GAA transistors in accordance with any of the embodiments disclosed herein.

FIG. 7 is a cross-sectional side view of an IC device that may include one or more IC devices with FinFETs integrated over GAA transistors in accordance with any of the embodiments disclosed herein.

FIG. 8 is a cross-sectional side view of an IC package that may include one or more IC devices with FinFETs integrated over GAA transistors in accordance with any of the embodiments disclosed herein.

FIG. 9 is a cross-sectional side view of an IC device assembly that may include one or more IC devices with FinFETs integrated over GAA transistors in accordance with any of the embodiments disclosed herein.

FIG. 10 is a block diagram of an example computing device that may include one or more IC devices with FinFETs integrated over GAA transistors in accordance with any of the embodiments disclosed herein.

Detailed Description

The systems, methods and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for all of the desirable attributes disclosed herein. Details of one or more implementations of the subject matter described in this specification are set forth in the description below and the accompanying drawings.

For purposes of illustrating IC devices with FinFETs integrated over GAA transistors as described herein, it might be useful to first understand phenomena that may come into play in certain IC arrangements. The following foundational information may be viewed as a basis from which the present disclosure may be properly explained.
Such information is offered for purposes of explanation only and, accordingly, should not be construed in any way to limit the broad scope of the present disclosure and its potential applications.

Some memory devices may be considered "standalone" devices in that they are included in a chip that does not also include compute logic (where, as used herein, the term "compute logic devices" or simply "compute logic" or "logic devices," refers to IC components, e.g., transistors, for performing computing/processing operations). Other memory devices may be included in a chip along with compute logic and may be referred to as "embedded" memory devices. Using embedded memory to support compute logic may improve performance by bringing the memory and the compute logic closer together and eliminating interfaces that increase latency. Various embodiments of the present disclosure relate to embedded memory arrays, as well as corresponding methods and devices.

Dynamic random-access memory (DRAM), and in particular embedded DRAM (eDRAM), has been introduced in the past to address the limitation in density and standby power of other types of memory. As an example, a DRAM cell may include a capacitor for storing a bit value, or a memory state (e.g., logical "1" or "0") of the cell, and an access transistor controlling access to the cell (e.g., access to write information to the cell or access to read information from the cell). Such a memory cell may be referred to as a "1T-1C memory cell," highlighting the fact that it uses one transistor (i.e., "1T" in the term "1T-1C memory cell") and one capacitor (i.e., "1C" in the term "1T-1C memory cell"). The capacitor of a 1T-1C memory cell may be coupled to one source/drain (S/D) region of the access transistor (e.g., to the source region of the access transistor), while the other S/D region of the access transistor (e.g., the drain region) may be coupled to a bit-line (BL), and a gate terminal of the transistor may be coupled to a word-line (WL).
Since such a memory cell can be fabricated with as little as a single access transistor, it can provide higher density and lower standby power versus some other types of memory in the same process technology. Other types of memory may also involve access transistors such as the ones used in DRAM, but store bit values in other circuit components coupled to the access transistors. Therefore, such memory types are generally referred to as "1T-1X memory" to highlight the fact that an individual memory cell may use one transistor and one other circuit component (i.e., "1X" in the term "1T-1X memory"), such as a capacitor, a magnetic storage element, a resistor, or another transistor, coupled to the access transistor.

For future high-performance system-on-chip (SoC) architectures, there is an increasing desire for high bandwidth and high-density memory that is directly integrated on a single die with a processing unit (XPU), such as a central processing unit (CPU) or a graphics processing unit (GPU). To this end, there has been research to embed DRAM in logic processes or to embed logic in DRAM-like processes. However, the advanced logic technology roadmap is driven by voltage scaling and adopts GAA transistor architecture for continued scaling, which is not always the most suitable for providing high-voltage transistors that may be needed for embedded DRAM.

Embodiments of the present disclosure relate to IC devices that include FinFETs integrated in a layer over GAA transistors, both provided over a single support structure (e.g., a substrate, a die, a wafer, or a chip).
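Purely as an illustration (and not part of the disclosure), the 1T-1C access scheme described above can be sketched as a small behavioral model. The class and method names below are hypothetical, and real DRAM behavior such as charge leakage, destructive reads, and refresh is deliberately omitted:

```python
# Illustrative behavioral model of a 1T-1C memory cell: one access
# transistor gated by the word-line (WL) and one capacitor storing the
# bit value, coupled to the bit-line (BL). Hypothetical names; leakage,
# destructive reads, and refresh are omitted from this sketch.

class OneT1CCell:
    def __init__(self):
        self.capacitor = 0  # stored charge, abstracted to a logic state

    def write(self, wl_asserted: bool, bl_value: int) -> None:
        # The access transistor conducts only while the WL is asserted.
        if wl_asserted:
            self.capacitor = bl_value

    def read(self, wl_asserted: bool):
        # READ senses the stored logic state onto the BL; with the WL
        # de-asserted, the cell is inaccessible and None is returned.
        return self.capacitor if wl_asserted else None

cell = OneT1CCell()
cell.write(wl_asserted=True, bl_value=1)      # WRITE a logical "1"
assert cell.read(wl_asserted=True) == 1       # READ the stored state
assert cell.read(wl_asserted=False) is None   # WL low: no access
cell.write(wl_asserted=False, bl_value=0)     # blocked WRITE
assert cell.read(wl_asserted=True) == 1       # state unchanged
```

The model captures only the gating role of the word-line: the capacitor state can be sensed or changed exclusively while the access transistor conducts.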
An example IC device may include a support structure (e.g., a substrate, a die, a wafer, or a chip); a first layer, comprising a plurality of GAA transistors; a second layer, comprising a plurality of FinFETs; and a third layer, comprising a memory array that includes a plurality of memory cells, where an individual cell of the plurality of memory cells includes a transistor with a channel region comprising a thin-film semiconductor material, where the first layer is between the support structure and the second layer (i.e., the second layer is further away from the support structure than the first layer), and the second layer either at least partially overlaps with the third layer (i.e., the third layer may be located at approximately the same level with respect to the support structure as the second layer) or is between the first layer and the third layer (i.e., the third layer may be further away from the support structure). The GAA transistors may serve to provide high-performance compute logic, and may be relatively low-voltage transistors, while FinFETs may be more suitable than GAA transistors for providing high-voltage transistors, and, therefore, may serve to provide peripheral logic for backend memory arrays implemented over the same support structure over which the GAA transistors and the FinFETs are provided. Such an arrangement may address the fundamental voltage incompatibility by integrating a mix of FinFETs and GAA transistors in a stacked complementary FET (CFET) architecture to enable embedded 1T-1X based memories. Other technical effects will be evident from various embodiments described herein.

In the following detailed description, various aspects of the illustrative implementations may be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art.
For example, the term "connected" means a direct electrical or magnetic connection between the things that are connected, without any intermediary devices, while the term "coupled" means either a direct electrical or magnetic connection between the things that are connected, or an indirect connection through one or more passive or active intermediary devices. The term "circuit" means one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. As used herein, a "logic state" (or, alternatively, a "state" or a "bit" value) of a memory cell may refer to one of a finite number of states that the cell can have, e.g., logic states "1" and "0," each state represented by a different charge, or a range of charges, stored in a storage node of the cell, while "READ" and "WRITE" memory access or operations refer to, respectively, determining/sensing a logic state of a memory cell and programming/setting a logic state of a memory cell.

Furthermore, some descriptions may refer to a particular source or drain region of a transistor being either a source region or a drain region. However, unless specified otherwise, which region of a transistor is considered to be a source region and which region is considered to be a drain region is not important because, as is common in the field of transistors, designations of source and drain are often interchangeable. Therefore, descriptions of some illustrative embodiments of the source and drain regions provided herein are applicable to embodiments where the designation of source and drain regions may be reversed.
Unless explained otherwise, in some settings, the terms S/D region, S/D contact, and S/D terminal of a transistor may be used interchangeably, although, in general, the term "S/D contact" is used to refer to an electrically conductive structure for making a contact to a S/D region of a transistor, while the term "S/D terminal" may generally refer to either S/D region or S/D contact of a transistor.

The term "interconnect" may be used to describe any element formed of an electrically conductive material for providing electrical connectivity to one or more components associated with an IC or/and between various such components. In general, the term "interconnect" may refer to both conductive lines (or, simply, "lines," also sometimes referred to as "traces" or "trenches") and conductive vias (or, simply, "vias"). In general, in context of interconnects, the term "conductive line" may be used to describe an electrically conductive element isolated by an insulator material (e.g., a low-k dielectric material) that is provided within the plane of an IC die. Such lines are typically stacked into several levels, or several layers, of a metallization stack. On the other hand, the term "via" may be used to describe an electrically conductive element that interconnects two or more lines of different levels. To that end, a via may be provided substantially perpendicularly to the plane of an IC die and may interconnect two lines in adjacent levels or two lines in non-adjacent levels. The term "metallization stack" may be used to refer to a stack of one or more interconnects for providing connectivity to different circuit components of an IC chip.
Sometimes, lines and vias may be referred to as "metal traces" and "metal vias", respectively, to highlight the fact that these elements include electrically conductive materials such as metals.

Still further, the terms "package" and "IC package" are synonymous, as are the terms "die" and "IC die," the term "insulating" means "electrically insulating," the term "conducting" means "electrically conducting," unless otherwise specified. Although certain elements may be referred to in the singular herein, such elements may include multiple sub-elements. For example, "an electrically conductive material" may include one or more electrically conductive materials. If used, the terms "oxide," "carbide," "nitride," etc. refer to compounds containing, respectively, oxygen, carbon, nitrogen, etc., the term "high-k dielectric" refers to a material having a higher dielectric constant than silicon oxide, while the term "low-k dielectric" refers to a material having a lower dielectric constant than silicon oxide. The terms "substantially," "close," "approximately," "near," and "about," generally refer to being within +/- 20% of a target value (e.g., within +/- 10% or within +/- 5% of a target value) based on the context of a particular value as described herein or as known in the art. Similarly, terms indicating orientation of various elements, e.g., "coplanar," "perpendicular," "orthogonal," "parallel," or any other angle between the elements, generally refer to being within +/- 5-20% of a target value based on the context of a particular value as described herein or as known in the art.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The term "between," when used with reference to measurement ranges, is inclusive of the ends of the measurement ranges.
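As a non-limiting illustration, the tolerance convention above (terms such as "about" or "substantially" meaning within +/- 20%, or optionally +/- 10% or +/- 5%, of a target value) can be expressed as a one-line check; the function name is an assumption for illustration only:

```python
# Illustrative sketch of the "+/- tolerance of a target value" reading
# of terms such as "about" and "substantially". Hypothetical helper.

def is_about(value: float, target: float, tolerance: float = 0.20) -> bool:
    """True if value lies within +/- tolerance * target of target,
    mirroring the default +/- 20% reading used herein."""
    return abs(value - target) <= tolerance * abs(target)

assert is_about(1.1, 1.0)            # within the default +/- 20% band
assert not is_about(1.3, 1.0)        # outside +/- 20%
assert not is_about(1.1, 1.0, 0.05)  # outside the tighter +/- 5% band
```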
As used herein, the notation "A/B/C" means (A), (B), and/or (C).

The description may use the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. The disclosure may use perspective-based descriptions such as "above," "below," "top," "bottom," and "side"; such descriptions are used to facilitate the discussion and are not intended to restrict the application of disclosed embodiments. The accompanying drawings are not necessarily drawn to scale. Unless otherwise specified, the use of the ordinal adjectives "first," "second," and "third," etc., to describe a common object merely indicates that different instances of like objects are being referred to and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking or in any other manner.

In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense. For convenience, if a collection of drawings designated with different letters are present, e.g., FIGS. 4A-4C, such a collection may be referred to herein without the letters, e.g., as "FIG. 4."
In order to not clutter the drawings, sometimes only one instance of a given element is labeled in a drawing with a reference numeral, although other similar elements may be shown.

In the drawings, some schematic illustrations of example structures of various devices and assemblies described herein may be shown with precise right angles and straight lines, but it is to be understood that such schematic illustrations may not reflect real-life process limitations which may cause the features to not look so "ideal" when any of the structures described herein are examined using e.g., scanning electron microscopy (SEM) images or transmission electron microscope (TEM) images. In such images of real structures, possible processing defects could also be visible, e.g., not-perfectly straight edges of materials, tapered vias or other openings, inadvertent rounding of corners or variations in thicknesses of different material layers, occasional screw, edge, or combination dislocations within the crystalline region, and/or occasional dislocation defects of single atoms or clusters of atoms. There may be other defects not listed here but that are common within the field of device fabrication. Furthermore, although a certain number of a given element may be illustrated in some of the drawings (e.g., a certain number and type of memory layers, a certain number and type of transistors of memory cells, or a certain arrangement of interconnects), this is simply for ease of illustration, and more, or less, than that number may be included in the IC devices and related assemblies and packages according to various embodiments of the present disclosure. Still further, various views shown in some of the drawings are intended to show relative arrangements of various elements therein.
In other embodiments, various IC devices and related assemblies and packages, or portions thereof, may include other elements or components that are not illustrated (e.g., transistor portions, various further components that may be in electrical contact with any of the illustrated components of the IC devices and related assemblies and packages, etc.). Inspection of layout and mask data and reverse engineering of parts of a device to reconstruct the circuit using e.g., optical microscopy, TEM, or SEM, and/or inspection of a cross-section of a device to detect the shape and the location of various device elements described herein using e.g., physical failure analysis (PFA) would allow determination of presence of one or more FinFETs integrated over GAA transistors as described herein.

Various operations may be described as multiple discrete actions or operations in turn in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment. Various additional operations may be performed, and/or described operations may be omitted in additional embodiments.

Various IC devices with FinFETs integrated over GAA transistors as described herein may be implemented in, or associated with, one or more components associated with an IC or/and may be implemented between various such components. In various embodiments, components associated with an IC include, for example, transistors, diodes, power sources, resistors, capacitors, inductors, sensors, transceivers, receivers, antennas, etc. Components associated with an IC may include those that are mounted on an IC or those connected to an IC.
The IC may be either analog or digital and may be used in a number of applications, such as microprocessors, optoelectronics, logic blocks, audio amplifiers, etc., depending on the components associated with the IC. The IC may be employed as part of a chipset for executing one or more related functions in a computer.

FIG. 1 provides a schematic illustration of an example IC device (e.g., a chip) 100 in which FinFETs integrated over GAA transistors may be implemented, according to some embodiments of the present disclosure. As shown in FIG. 1, the IC device may include a support structure 110, a GAA transistor layer 120, and a FinFET layer 130. As shown in FIG. 1, the FinFET layer 130 may be integrated over the GAA transistor layer 120 so that the GAA transistor layer 120 is between the support structure 110 and the FinFET layer 130 (i.e., the FinFET layer 130 is stacked above the GAA transistor layer 120). In some embodiments, the IC device may further include a thin-film memory layer 140, integrated over the GAA transistor layer 120 so that the FinFET layer 130 either at least partially overlaps with the thin-film memory layer 140 (i.e., the thin-film memory layer 140 may be located at approximately the same level with respect to the support structure 110 as the FinFET layer 130) or is between the GAA transistor layer 120 and the thin-film memory layer 140 (i.e., the thin-film memory layer 140 may be further away from the support structure 110 than the FinFET layer 130).

In general, the support structure 110 may include any of the materials described below with reference to the substrate 2102 ( FIG. 7 ).

The GAA transistor layer 120 may be a layer in which a plurality of GAA transistors may be implemented and may be front end of line (FEOL) transistors such as the transistors 2140 ( FIG.
7 ), fabricated so that channel regions of the transistors include substantially single-crystalline semiconductor material provided (e.g., epitaxially grown) as a top layer of the support structure 110. Because carrier mobility is the highest in single-crystalline semiconductor materials, such GAA transistors may be particularly suitable for providing high-performance compute logic of the IC device 100. For example, the GAA transistors of the GAA transistor layer 120 may be used to implement one or more of I/O circuitry, power delivery circuitry, field programmable gate array logic, etc. In various embodiments, the GAA transistor layer 120 may include any combination of nanoribbon transistors, nanosheet transistors, and nanowire transistors.

The FinFET layer 130 may be a layer in which a plurality of FinFETs may be implemented. Because the FinFET layer 130 is stacked above the GAA transistor layer 120, channel regions of the FinFETs of the FinFET layer 130 may not be formed based on the substantially single-crystalline semiconductor material provided as a top layer of the support structure 110. In some embodiments, providing the FinFET layer 130 may include performing a layer transfer of a substantially single-crystalline semiconductor material grown (e.g., epitaxially grown) on another support structure to be over the GAA transistor layer 120, and then forming the FinFETs so that channel regions of the FinFETs of the FinFET layer 130 include the substantially single-crystalline semiconductor material that was transferred from another support structure. The architecture of FinFETs allows including thicker gate dielectrics in the gate stacks of the transistors compared to gate dielectrics that may be included in GAA transistors, which allows realizing relatively high-voltage transistors based on FinFETs ("high-voltage" compared to what can be realized with the GAA transistors).
Furthermore, because the transistors of the FinFET layer 130 may be built based on a substantially single-crystalline semiconductor material, carrier mobility in these transistors may be comparable to that of the transistors of the GAA transistor layer 120, making the FinFETs of the FinFET layer 130 also relatively high-performance transistors. Because the transistors of the FinFET layer 130 may be made both relatively high-voltage and high-performance, they may be particularly suitable for providing peripheral logic for one or more memory arrays implemented in the thin-film memory layer 140.

The thin-film memory layer 140 may include a plurality of 1T-1X memory cells, where the transistors of the memory cells have channel regions formed of thin-film semiconductor materials, i.e., the transistors of the thin-film memory layer 140 may be thin-film transistors (TFTs). A TFT is a special kind of field-effect transistor made by depositing a thin-film of a semiconductor material, as well as a dielectric layer and metallic contacts, over a support layer (or, simply, a "support") that may be a non-conducting and a non-semiconducting layer. In the context of the IC device 100, such a thin-film of a semiconductor material may be deposited over the GAA transistor layer 120 and/or over the FinFET layer 130. At least a portion of the active thin-film semiconductor material forms a channel region of the TFT. Thin-film semiconductor materials are typically polycrystalline, polymorphous, or amorphous semiconductor materials, which is different from single-crystalline semiconductor materials that may be epitaxially grown on semiconductor substrates. TFTs are particularly suitable for being included in back end of line (BEOL) portions of IC devices because thin-film channel materials may be deposited at relatively low temperatures, compared to the relatively high temperatures required for epitaxially growing single-crystalline semiconductor materials.
Thus, TFTs are different from conventional, non-TFT, FEOL transistors where the active semiconductor material of the channel regions is typically a part of a semiconductor substrate, e.g., a part of a silicon wafer. FinFETs and TFT-based memory integrated over GAA transistors, described herein, may be used, for example, to address the scaling challenge of logic transistor (e.g., FEOL) based eDRAM technology and enable high-density embedded memory in an advanced complementary metal-oxide-semiconductor (CMOS) process.

Implementing transistors of 1T-1X memory cells as TFTs of the thin-film memory layer 140 may have the advantages of reduced leakage and/or less expensive fabrication. On the other hand, implementing FinFETs of the FinFET layer 130 using layer transfer to provide substantially single-crystalline semiconductor channel materials over the GAA transistor layer 120 may have the advantages of faster operation of such transistors, due to carrier mobility being higher in single-crystalline semiconductor materials, compared to carrier mobility in polycrystalline, polymorphous, or amorphous semiconductor materials (i.e., in thin-film channel materials). In some embodiments, at least some of the FinFETs of the FinFET layer 130 (e.g., the relatively high-voltage FinFETs) may be coupled to one or more memory cells of the thin-film memory layer 140 and may be used to control access to data stored in the thin-film memory layer 140. On the other hand, the GAA transistors of the GAA transistor layer 120, being the high-performance compute logic transistors, may be configured to perform various operations with respect to data accessed by the FinFETs of the FinFET layer 130 from the memory cells of the thin-film memory layer 140. Such operations may, e.g., include arithmetic and logic operations, pipelining of data from the FinFET layer 130 or the thin-film memory layer 140, pipelining of data from external devices/chips, etc.
In contrast, in some embodiments, the FinFETs of the FinFET layer 130 may be configured to only control input/output (I/O) access to data stored in the thin-film memory layer 140 but not perform any operations on the data.

Whether a semiconductor channel material of a given transistor is a thin-film channel material or a single-crystalline semiconductor material may be identified by inspecting the grain size of the material. An average grain size of a semiconductor material in a channel region of a transistor being between about 0.05 and 1 millimeters (in which case the material may be considered to be polycrystalline) or smaller than about 0.05 millimeter (in which case the material may be considered to be polymorphous) may be indicative of the semiconductor material having been deposited at the relatively low temperatures (i.e., indicative of the transistor being a TFT). On the other hand, an average grain size of the semiconductor material being equal to or greater than about 1 millimeter (in which case the material may be considered to be a substantially single-crystalline material) may be indicative of the semiconductor material having been epitaxially grown (which, in general, is a process performed at substantially higher temperatures than those at which thin-film semiconductor materials may be deposited for TFTs). Presence of transistors with substantially single-crystalline semiconductor channel regions in the BEOL of an IC device (e.g., in the FinFET layer 130 of the IC device 100) may, therefore, be indicative of the layer transfer used to form such transistors.

For any of the TFTs described herein, a channel region may be composed of semiconductor material systems including, for example, N-type or P-type materials systems.
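As a non-limiting illustration of the grain-size criterion described above, the classification can be sketched as follows; the function name and the exact handling of the approximate thresholds are assumptions for illustration only:

```python
# Illustrative sketch of the grain-size criterion for distinguishing
# thin-film (TFT) channel materials from substantially single-crystalline
# ones. Thresholds (in millimeters) follow the approximate values in the
# text; the function name is a hypothetical choice.

def classify_channel_material(avg_grain_size_mm: float) -> str:
    """Classify a semiconductor channel material by average grain size."""
    if avg_grain_size_mm >= 1.0:
        # Indicative of epitaxial growth (e.g., FEOL GAA channels or
        # layer-transferred FinFET channels).
        return "substantially single-crystalline"
    if avg_grain_size_mm >= 0.05:
        return "polycrystalline (thin-film, indicative of a TFT)"
    return "polymorphous (thin-film, indicative of a TFT)"

assert classify_channel_material(2.0) == "substantially single-crystalline"
assert classify_channel_material(0.5).startswith("polycrystalline")
assert classify_channel_material(0.01).startswith("polymorphous")
```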
In some embodiments, the channel region of a TFT may include a high mobility oxide semiconductor material, such as tin oxide, antimony oxide, indium oxide, indium tin oxide, titanium oxide, zinc oxide, indium zinc oxide, indium gallium zinc oxide (IGZO), gallium oxide, titanium oxynitride, ruthenium oxide, or tungsten oxide. In general, the channel region of a TFT may include one or more of tin oxide, cobalt oxide, copper oxide, antimony oxide, ruthenium oxide, tungsten oxide, zinc oxide, gallium oxide, titanium oxide, indium oxide, titanium oxynitride, indium tin oxide, indium zinc oxide, nickel oxide, niobium oxide, copper peroxide, IGZO, indium telluride, molybdenite, molybdenum diselenide, tungsten diselenide, tungsten disulfide, N- or P-type amorphous or polycrystalline silicon, germanium, indium gallium arsenide, silicon germanium, gallium nitride, aluminum gallium nitride, indium phosphide, and black phosphorus, each of which may possibly be doped with one or more of gallium, indium, aluminum, fluorine, boron, phosphorus, arsenic, nitrogen, tantalum, tungsten, and magnesium, etc. In particular, the channel region of a TFT may be a thin-film material. Some such materials may be deposited at relatively low temperatures, which allows depositing them within the thermal budgets imposed on back end fabrication to avoid damaging the frontend components (e.g., the GAA transistors of the GAA transistor layer 120). In some embodiments, the channel region of a TFT may have a thickness between about 5 and 75 nanometers, including all values and ranges therein.

For any of the transistors that are not TFTs described herein, a channel region may be composed of semiconductor material systems including, for example, N-type or P-type materials systems.
In some embodiments, the channel region of a non-TFT may include a high mobility oxide semiconductor material, such as tin oxide, antimony oxide, indium oxide, indium tin oxide, titanium oxide, zinc oxide, indium zinc oxide, gallium oxide, titanium oxynitride, ruthenium oxide, or tungsten oxide. In some embodiments, the channel region of a non-TFT may include a combination of semiconductor materials. In some embodiments, the channel region of a non-TFT may include a monocrystalline semiconductor, such as silicon (Si) or germanium (Ge). In some embodiments, the channel region of a non-TFT may include a compound semiconductor with a first sub-lattice of at least one element from group III of the periodic table (e.g., Al, Ga, In), and a second sub-lattice of at least one element of group V of the periodic table (e.g., P, As, Sb). For some example N-type transistor embodiments (i.e., for the embodiments where the transistor is an N-type metal-oxide-semiconductor (NMOS) transistor), the channel region may advantageously include a III-V material having a high electron mobility, such as, but not limited to, InGaAs, InP, InSb, and InAs. For some such embodiments, the channel region may be a ternary III-V alloy, such as InGaAs, GaAsSb, InAsP, or InPSb. For some In_xGa_(1-x)As fin embodiments, the In content (x) may be between 0.6 and 0.9, and may advantageously be at least 0.7 (e.g., In_0.7Ga_0.3As). In some embodiments with highest mobility, the channel region of a non-TFT may be an intrinsic III-V material, i.e., a III-V semiconductor material not intentionally doped with any electrically active impurity. In alternate embodiments, a nominal impurity dopant level may be present within the channel region, for example to further fine-tune a threshold voltage Vt of the transistor, to provide HALO pocket implants, etc.
Even for impurity-doped embodiments, however, the impurity dopant level within the channel region may be relatively low, for example below 10^15 dopant atoms per cubic centimeter (cm^-3), and advantageously below 10^13 cm^-3. For some example P-type transistor embodiments (i.e., for the embodiments where the transistor is a P-type metal-oxide-semiconductor (PMOS) transistor), the channel region may advantageously be a group IV material having a high hole mobility, such as, but not limited to, Ge or a Ge-rich SiGe alloy. For some example embodiments, the channel region may have a Ge content between 0.6 and 0.9, and advantageously may be at least 0.7. In some embodiments with highest mobility, the channel region may be an intrinsic III-V (or IV for P-type devices) material and not intentionally doped with any electrically active impurity. In alternate embodiments, a nominal impurity dopant level may be present within the channel region, for example to further set a threshold voltage (Vt), or to provide HALO pocket implants, etc. Even for impurity-doped embodiments, however, the impurity dopant level within the channel region is relatively low, for example below 10^15 cm^-3, and advantageously below 10^13 cm^-3.

Thin-film semiconductor materials typically have larger bandgaps and may, therefore, be less temperature-sensitive than epitaxially grown semiconductor materials. Therefore, in some embodiments, bandgaps of the semiconductor materials of the channel regions of the TFTs of the thin-film memory layer 140 may be larger than bandgaps of the semiconductor materials of the channel regions of the non-TFT transistors of the GAA transistor layer 120 and the FinFET layer 130.

FIG. 2 is a cross-sectional side view of an example IC device 200 with FinFETs integrated over GAA transistors, according to some embodiments of the present disclosure. The IC device 200 may be an example of the IC device 100, shown in FIG. 1 . To that end, FIG.
2 illustrates some of the reference numerals used in the IC device 100 of FIG. 1 . Descriptions of the elements with such reference numerals provided with respect to FIG. 1 are applicable to the IC device 200 of FIG. 2 and, in the interest of brevity, are not repeated. Furthermore, a number of elements that are labeled in FIG. 2 , as well as in FIG. 3 and FIG. 4 , with reference numerals are illustrated in these figures with different patterns, with a legend showing the correspondence between the reference numerals and patterns being provided at the bottom of these figures. For example, the legend illustrates that FIG. 2 uses different patterns to show an insulating material 202, a channel material 222 of the GAA transistor layer 120, a channel material 232 of the FinFET layer 130, etc.

As shown in FIG. 2 , the GAA transistor layer 120 may include one or more stacks of GAA transistors provided over the support structure 110. Two such stacks are shown in FIG. 2 as stacks 204-1 and 204-2, each stack 204 including three GAA transistors 220 (labeled individually as transistors 220-1, 220-2, and 220-3). In other embodiments, however, the IC device 200 may include any number of one or more stacks 204, each stack 204 including any number of one or more GAA transistors 220, and different stacks 204 may include different numbers of GAA transistors 220. The GAA transistors 220 may include a channel material 222, which may include any of the semiconductor materials described above with respect to channel regions of transistors that are not TFTs.

As further shown in FIG. 2 , the FinFET layer 130 may include one or more fins based on which FinFETs may be formed, the FinFET layer 130 being further away from the support structure 110 than the GAA transistor layer 120. Two such fins are shown in FIG. 2 as fins 206-1 and 206-2, with FIG.
2 illustrating a FinFET 230 in each of the fins 206. In other embodiments, however, the IC device 200 may include any number of one or more fins 206, each fin 206 including any number of one or more FinFETs 230, and different fins 206 may include different numbers of FinFETs 230. The FinFETs 230 may include a channel material 232, which may include any of the semiconductor materials described above with respect to channel regions of transistors that are not TFTs.

FIG. 2 further illustrates that a gate stack may at least partially wrap around a channel region of an individual GAA transistor 220 or a channel region of an individual FinFET 230, the gate stack including at least a gate dielectric material 252 and a gate electrode material 254, as shown in FIG. 2 . Although FIG. 2 illustrates the same gate dielectric material 252 and the same gate electrode material 254 used in the GAA transistors 220 and in the FinFETs 230, in general, material compositions of the gate dielectric material 252 and the gate electrode material 254 in different ones of these transistors may be different. Various embodiments of the gate stacks that may be used with any of the GAA transistors 220, the FinFETs 230, as well as transistors of the thin-film memory layer 140 are described below with reference to FIGS. 4A-4C .

Although not specifically shown in FIG. 2 , the IC device 200 further includes various interconnects for routing signals, power, and data between various components of the IC device 200. For example, such interconnects may couple one or more of the FinFETs 230 and one or more of the GAA transistors 220. In another example, such interconnects may couple any of the FinFETs 230 and any of the GAA transistors 220 to external components. In some embodiments, such interconnects may be implemented as described with reference to different metal layers of the metallization stack 2119, shown in FIG. 7 .
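The vertical arrangement described above (support structure 110, then GAA transistor layer 120, then FinFET layer 130, with the thin-film memory layer 140 above both) can be sketched as a simple ordered list; the names and helper function below are illustrative assumptions only, not part of the disclosure:

```python
# Hypothetical sketch: vertical layer order, nearest the support structure first.
LAYER_ORDER = [
    "support structure 110",
    "GAA transistor layer 120",
    "FinFET layer 130",
    "thin-film memory layer 140",
]

def is_farther_from_support(layer_a: str, layer_b: str) -> bool:
    """True if layer_a sits farther from the support structure than layer_b,
    per the stacking order described in the text."""
    return LAYER_ORDER.index(layer_a) > LAYER_ORDER.index(layer_b)
```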
In various embodiments, various interconnects included in various embodiments of the IC device 100 may include any suitable electrically conductive material, alloy, or a stack of multiple electrically conductive materials. In some embodiments, various electrically conductive materials may include one or more metals or metal alloys, with metals such as copper, ruthenium, palladium, platinum, cobalt, nickel, hafnium, zirconium, titanium, tantalum, molybdenum, tungsten, and aluminum. In some embodiments, various electrically conductive materials may include one or more electrically conductive alloys, oxides (e.g., conductive metal oxides), carbides (e.g., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide), or nitrides (e.g., hafnium nitride, zirconium nitride, titanium nitride, tantalum nitride, and aluminum nitride) of one or more metals.

FIG. 2 further illustrates an insulating material 202 (e.g., an interlayer dielectric (ILD) material) that may surround various portions of the GAA transistors 220, the FinFETs 230, and various other components, including various interconnects, implemented in the IC device 200. In various embodiments, the insulating material 202 may include any suitable ILD materials such as silicon oxide, carbon-doped silicon oxide, silicon carbide, silicon nitride, aluminum oxide, and/or silicon oxynitride. In various embodiments, the insulating material 202 may include a low-k dielectric material. Examples of the low-k dielectric materials that may be used as the insulating material 202 include, but are not limited to, silicon dioxide, carbon-doped oxide, silicon nitride, fused silica glass (FSG), and organosilicates such as silsesquioxane, siloxane, and organosilicate glass. Other examples of low-k dielectric materials that may be used as the insulating material 202 include organic polymers such as polyimide, polynorbornenes, benzocyclobutene, perfluorocyclobutane, or polytetrafluoroethylene (PTFE).
Still other examples of low-k dielectric materials that may be used as the insulating material 202 include silicon-based polymeric dielectrics such as hydrogen silsesquioxane (HSQ) and methylsilsesquioxane (MSQ). Other examples of low-k materials that may be used in the insulating material 202 include various porous dielectric materials, such as, for example, porous silicon dioxide or porous carbon-doped silicon dioxide, where large voids or pores are created in a dielectric in order to reduce the overall dielectric constant of the layer, since voids can have a dielectric constant of nearly 1.

FIG. 3 is a cross-sectional side view of an example IC device 300 with FinFETs and backend memory integrated over GAA transistors, according to some embodiments of the present disclosure. The IC device 300 may be an example of the IC device 100, shown in FIG. 1 . To that end, FIG. 3 illustrates some of the reference numerals used in the IC device 100 of FIG. 1 . Furthermore, the IC device 300 may include the GAA transistors 220 and the FinFETs 230 as described with reference to FIG. 2 , which can be seen by FIG. 3 illustrating some of the reference numerals used in the IC device 200 of FIG. 2 . Descriptions of the elements with such reference numerals provided with respect to FIG. 1 and FIG. 2 are applicable to the IC device 300 of FIG. 3 and, in the interest of brevity, are not repeated.

As shown in FIG. 3 , in some embodiments, in addition to the GAA transistors 220 and FinFETs 230 similar to those described with reference to FIG. 2 , the IC device 300 may further include a plurality of 1T-1X memory cells 340, forming a TFT-based memory array 342 in the thin-film memory layer 140. In particular, FIG. 3 illustrates a thin-film semiconductor material 344 based on which TFTs of the 1T-1X memory cells 340 may be formed. The thin-film semiconductor material 344 may include any of the semiconductor materials described above with respect to channel regions of TFTs. In addition, FIG.
3 illustrates capacitors 346 for the case where the 1T-1X memory cells 340 are 1T-1C DRAM cells; in general, however, the memory cells 340 may include any type of storage elements besides the capacitors 346. Details of the TFTs of the 1T-1X memory cells 340 are not specifically shown in FIG. 3 because the implementation of TFT-based backend memory may differ depending on a particular design and is generally known in the art.

FIG. 3 illustrates that, in some embodiments, the FinFETs 230 of the IC device 300 may be functionally divided into two groups 330. The first group 330-1 may include relatively low-voltage FinFETs 230, which may be coupled to various ones of the GAA transistors 220 to form desired logic circuits. For example, in some embodiments, the FinFETs 230 of the first group 330-1 may help the GAA transistors 220 (or, more generally, cooperate with the GAA transistors 220 in) providing high-performance compute logic functionality. The second group 330-2 may include relatively high-voltage FinFETs 230, which may be coupled to various ones of the backend TFT-based memory cells 340 to form desired memory circuits. For example, in some embodiments, the FinFETs 230 of the second group 330-2 may control read and write of data of the memory array 342. In such embodiments, a thickness of the gate dielectric 252 of the FinFETs 230 of the first group 330-1 may be smaller than a thickness of the gate dielectric 252 of the FinFETs 230 of the second group 330-2.

In various embodiments, any of the gate stacks of the GAA transistors 220, the FinFETs 230, or the TFTs of the 1T-1X memory cells 340 may be implemented in different manners. FIGS. 4A-4C are cross-sectional side views of gate stacks 400 that could be used with any of the transistors of IC devices with FinFETs integrated over GAA transistors, according to different embodiments of the present disclosure. Any of the gate stacks 400 shown in FIGS.
4A-4C may be used to implement any of the gate stacks of the GAA transistors 220, any of the gate stacks of the FinFETs 230, or any of the gate stacks of the TFTs of the 1T-1X memory cells 340.

A gate stack 400A, shown in FIG. 4A , illustrates an embodiment where any of the gate stacks of the transistors of the IC device 100 may include a stack of a gate electrode material 254 and a gate dielectric material 252, where the gate dielectric material 252 is between the gate electrode material 254 and the corresponding channel material 222/232/344. In some such embodiments, one side of the gate dielectric material 252 may be in contact with the channel material 222/232/344 while the opposite side of the gate dielectric material 252 may be in contact with the gate electrode material 254.

In various embodiments, the gate dielectric material 252 may include one or more high-k dielectric materials and may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric material 252 may include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, tantalum silicon oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric material 252 during manufacture of the transistors to improve the quality of the gate dielectric material 252.
In some embodiments, the gate dielectric material 252 may have a thickness between about 0.5 nanometers and 3 nanometers, including all values and ranges therein, e.g., between about 1 and 3 nanometers, or between about 1 and 2 nanometers.

In some embodiments, the gate dielectric material 252 may be a multilayer gate dielectric, e.g., it may include any of the high-k dielectric materials in one layer and a layer of IGZO. In some embodiments, the gate stack (i.e., a combination of the gate dielectric material 252 and the gate electrode material 254) may be arranged so that the IGZO is disposed between the high-k dielectric and the channel material 222/232/344. In such embodiments, the IGZO may be in contact with the channel material 222/232/344, and may provide the interface between the channel material 222/232/344 and the remainder of the multilayer gate dielectric material 252. The IGZO may have a gallium to indium ratio of 1:1, a gallium to indium ratio greater than 1 (e.g., 2:1, 3:1, 4:1, 5:1, 6:1, 7:1, 8:1, 9:1, or 10:1), and/or a gallium to indium ratio less than 1 (e.g., 1:2, 1:3, 1:4, 1:5, 1:6, 1:7, 1:8, 1:9, or 1:10).

The gate dielectric material 252 may laterally surround the channel material 222/232/344, and the gate electrode material 254 may laterally surround the gate dielectric material 252 such that the gate dielectric material 252 is disposed between the gate electrode material 254 and the channel material 222/232/344.

The gate electrode material 254 may include at least one P-type work function metal or N-type work function metal, depending on whether a given transistor of the IC device 100 in which this gate electrode material 254 is implemented is a PMOS transistor or an NMOS transistor. For a PMOS transistor, metals that may be used for the gate electrode material 254 may include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides (e.g., ruthenium oxide).
For an NMOS transistor, metals that may be used for the gate electrode material 254 include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals (e.g., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide). In some embodiments, the gate electrode material 254 may include a stack of two or more metal layers, where one or more metal layers are work function metal layers and at least one metal layer is a fill metal layer. Further metal layers may be included for other purposes, such as to act as a diffusion barrier layer.

A gate stack 400B, shown in FIG. 4B , illustrates an embodiment where any of the gate stacks of the GAA transistors 220, the FinFETs 230, or the TFTs of the 1T-1X memory cells 340 may include a stack of the gate electrode material 254, a ferroelectric (FE) or an antiferroelectric (AFE) material 416, and the gate dielectric material 252. In such embodiments, the gate dielectric material 252 is still between the gate electrode material 254 and the corresponding channel material 222/232/344, as in the gate stack 400A. More specifically, the gate dielectric material 252 may be between the FE/AFE material 416 and the corresponding channel material 222/232/344, and the FE/AFE material 416 may be between the gate dielectric material 252 and the gate electrode material 254. In some such embodiments, one side of the gate dielectric material 252 may be in contact with the channel material 222/232/344 while the opposite side of the gate dielectric material 252 may be in contact with the FE/AFE material 416.
Similarly, one side of the FE/AFE material 416 may be in contact with the gate dielectric material 252 and the opposite side of the FE/AFE material 416 may be in contact with the gate electrode material 254.

As used herein, a FE or an AFE material is a material that exhibits, over some range of temperatures, spontaneous electric polarization, i.e., displacement of positive and negative charges from their original position, where the polarization can be reversed or reoriented by application of an electric field. In particular, an AFE material is a material that can assume a state in which electric dipoles from the ions and electrons in the material may form a substantially ordered (e.g., substantially crystalline) array, with adjacent dipoles being oriented in opposite (antiparallel) directions (i.e., the dipoles of each orientation may form interpenetrating sub-lattices, loosely analogous to a checkerboard pattern), while a FE material is a material that can assume a state in which all of the dipoles point in the same direction. Because the displacement of the charges in FE and AFE materials can be maintained for some time even in the absence of an electric field, such materials may be used to implement memory cells. The term "ferroelectric" is said to be adopted to convey the similarity of FE memories to ferromagnetic memories, despite the fact that there is typically no iron (Fe) present in FE materials. The term "FE transistor" may be used to refer to a transistor employing FE or AFE materials, e.g., in a gate stack as shown in FIG. 4B . Memory cells with FE transistors have the potential for adequate non-volatility, short programming time, low power consumption, high endurance, and high-speed writing. In addition, FE transistors advantageously have the potential to be manufactured using processes compatible with the standard CMOS technology.

The FE/AFE material 416 may be provided between the gate electrode material 254 and the channel material 222/232/344.
The FE/AFE material 416 may include one or more materials which exhibit sufficient FE or AFE behavior even at thin dimensions as typically used in scaled transistors such as the ones illustrated here. In some embodiments, the FE/AFE material 416 may include a material including hafnium, zirconium, and oxygen (e.g., hafnium zirconium oxide (HZO)), possibly doped with one or more dopants such as silicon, germanium, aluminum, yttrium, lanthanum, gadolinium, or niobium. In some embodiments, the FE/AFE material 416 may include a material including hafnium and oxygen (e.g., hafnium oxide), doped with one or more dopants. For example, the FE/AFE material 416 may include one or more of a material including silicon, hafnium, and oxygen (e.g., silicon-doped hafnium oxide), a material including germanium, hafnium, and oxygen (e.g., germanium-doped hafnium oxide), a material including aluminum, hafnium, and oxygen (e.g., aluminum-doped hafnium oxide), a material including yttrium, hafnium, and oxygen (e.g., yttrium-doped hafnium oxide), a material including lanthanum, hafnium, and oxygen (e.g., lanthanum-doped hafnium oxide), a material including gadolinium, hafnium, and oxygen (e.g., gadolinium-doped hafnium oxide), and a material including niobium, hafnium, and oxygen (e.g., niobium-doped hafnium oxide). However, in other embodiments, any other materials which exhibit FE or AFE behavior at thin dimensions may be used as the FE/AFE material 416 and are within the scope of the present disclosure. A layer of the FE/AFE material 416 may be a thin-film material and may have a thickness between about 0.5 nanometers and 15 nanometers, including all values and ranges therein (e.g., between about 1 and 10 nanometers, or between about 0.5 and 5 nanometers).

A gate stack 400C, shown in FIG.
4C , illustrates an embodiment where any of the gate stacks of the GAA transistors 220, the FinFETs 230, or the TFTs of the 1T-1X memory cells 340 may include a stack of the gate electrode material 254, the FE/AFE material 416, an intermediate material 418, and the gate dielectric material 252. In such embodiments, the gate dielectric material 252 is still between the gate electrode material 254 and the corresponding channel material 222/232/344, as in the gate stacks 400A and 400B. More specifically, the gate dielectric material 252 may be between the intermediate material 418 and the corresponding channel material 222/232/344, the intermediate material 418 may be between the gate dielectric material 252 and the FE/AFE material 416, and the FE/AFE material 416 may be between the intermediate material 418 and the gate electrode material 254. In some such embodiments, one side of the gate dielectric material 252 may be in contact with the channel material 222/232/344 while the opposite side of the gate dielectric material 252 may be in contact with the intermediate material 418. Similarly, one side of the intermediate material 418 may be in contact with the gate dielectric material 252 and the opposite side of the intermediate material 418 may be in contact with the FE/AFE material 416. Furthermore, one side of the FE/AFE material 416 may be in contact with the intermediate material 418 and the opposite side of the FE/AFE material 416 may be in contact with the gate electrode material 254.

IC devices with FinFETs integrated over GAA transistors, as described herein, may be fabricated using any suitable techniques, e.g., subtractive, additive, damascene, dual-damascene, etc. Some such techniques may include suitable deposition and patterning techniques.
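A minimal sketch, using names of our own choosing, of the three gate-stack variants 400A-400C described above, each listed from the layer nearest the channel outward; in every variant the gate dielectric 252 remains adjacent to the channel material, which the hypothetical helper below checks:

```python
# Hypothetical encoding of the gate-stack variants described in the text,
# ordered from the channel material outward to the gate electrode.
GATE_STACKS = {
    "400A": ["gate dielectric 252", "gate electrode 254"],
    "400B": ["gate dielectric 252", "FE/AFE material 416", "gate electrode 254"],
    "400C": ["gate dielectric 252", "intermediate material 418",
             "FE/AFE material 416", "gate electrode 254"],
}

def dielectric_touches_channel(stack_name: str) -> bool:
    """In all three variants the gate dielectric 252 is the layer nearest
    the channel material 222/232/344."""
    return GATE_STACKS[stack_name][0] == "gate dielectric 252"
```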
As used herein, "patterning" may refer to forming a pattern in one or more materials using any suitable techniques (e.g., applying a resist, patterning the resist using lithography, and then etching the one or more materials using dry etching, wet etching, or any appropriate technique).

FIG. 5 is a flow diagram of a method 500 of manufacturing an IC device with FinFETs integrated over GAA transistors (e.g., any embodiments of the IC device 100, described herein), according to some embodiments of the present disclosure. The example fabrication method shown in FIG. 5 may include other operations not specifically shown in FIG. 5 , such as various cleaning or planarization operations as known in the art. For example, in some embodiments, any of the layers of the IC device, or any of individual IC structures provided within the IC device, may be cleaned prior to, after, or during any of the processes of the fabrication method described herein, e.g., to remove oxides, surface-bound organic and metallic contaminants, as well as subsurface contamination. In some embodiments, cleaning may be carried out using, e.g., chemical solutions (such as peroxide), and/or with ultraviolet (UV) radiation combined with ozone, and/or by oxidizing the surface (e.g., using thermal oxidation) and then removing the oxide (e.g., using hydrofluoric acid (HF)). In another example, the top surfaces of the IC devices described herein may be planarized prior to, after, or during any of the processes of the fabrication method described herein, e.g., to remove overburden or excess materials. In some embodiments, planarization may be carried out using either wet or dry planarization processes, e.g., planarization by chemical mechanical planarization (CMP), which may be understood as a process that utilizes a polishing surface, an abrasive, and a slurry to remove the overburden and planarize the surface.

As shown in FIG.
5 , the fabrication method 500 may include a process 502 that includes providing a GAA transistor layer over a support structure. The GAA transistor layer provided in the process 502 may include any embodiments of the GAA transistor layer 120, described herein, and the support structure used in the process 502 may include any embodiments of the support structure 110, described herein. The method 500 may further include a process 504 that includes performing a layer transfer to provide a FinFET layer over the GAA transistor layer provided in the process 502. The FinFET layer provided in the process 504 may include any embodiments of the FinFET layer 130, described herein. In some embodiments, the layer transfer performed in the process 504 may include growing a layer of a semiconductor material, e.g., of a substantially single-crystalline semiconductor material, on a semiconductor substrate/wafer, e.g., using epitaxial growth, and then transferring the layer over the GAA transistor layer provided in the process 502. In some such embodiments, a bonding interface may be detectable between the GAA transistor layer provided in the process 502 and the semiconductor material transferred in the process 504. The method 500 may also include a process 506 in which thin-film memory may be provided. The thin-film memory provided in the process 506 may include any embodiments of the thin-film memory cells 340 or any embodiments of the thin-film memory layer 140, described herein.

Arrangements with FinFETs integrated over GAA transistors as disclosed herein may be included in any suitable electronic device. FIGS. 6-10 illustrate various examples of devices and components that may include one or more FinFETs integrated over GAA transistors as disclosed herein, e.g., that may include any embodiments of the IC devices 100, described herein.

FIGS.
6A-6B are top views of a wafer 2000 and dies 2002 that may include one or more FinFETs integrated over GAA transistors (including FinFETs and TFT-based memory integrated over GAA transistors) in accordance with any of the embodiments disclosed herein. In some embodiments, the dies 2002 may be included in an IC package, in accordance with any of the embodiments disclosed herein. For example, any of the dies 2002 may serve as any of the dies 2256 in an IC package 2200 shown in FIG. 8 . The wafer 2000 may be composed of semiconductor material and may include one or more dies 2002 having IC structures formed on a surface of the wafer 2000. Each of the dies 2002 may be a repeating unit of a semiconductor product that includes any suitable IC (e.g., ICs including one or more FinFETs integrated over GAA transistors, e.g., any embodiments of the IC devices 100, as described herein). After the fabrication of the semiconductor product is complete (e.g., after manufacture of one or more FinFETs integrated over GAA transistors as described herein), the wafer 2000 may undergo a singulation process in which the dies 2002 are separated from one another to provide discrete "chips" of the semiconductor product. In particular, devices that include one or more FinFETs integrated over GAA transistors as disclosed herein may take the form of the wafer 2000 (e.g., not singulated) or the form of the die 2002 (e.g., singulated). The die 2002 may include a plurality of transistors (e.g., GAA transistors 220, FinFETs 230, and, optionally, the TFTs of the thin-film memory layer 140) and/or supporting circuitry to route electrical signals to the transistors, as well as any other IC components. In some embodiments, the wafer 2000 or the die 2002 may implement or include a memory device (e.g., a static random-access memory (SRAM) device), a logic device (e.g., an AND, OR, NAND, or NOR gate), or any other suitable circuit element.
Multiple ones of these devices may be combined on a single die 2002. For example, a memory array formed by multiple memory devices may be formed on a same die 2002 as a processing device (e.g., the processing device 2402 of FIG. 10 ) or other logic that is configured to store information in the memory devices or execute instructions stored in the memory array.

FIG. 7 is a cross-sectional side view of an IC device 2100 that may include one or more FinFETs integrated over GAA transistors (including FinFETs and TFT-based memory integrated over GAA transistors) in accordance with any of the embodiments disclosed herein. For example, the IC device 2100 may be, or may include, the IC device 100, described above, implementing one or more memory arrays which may include one or more FinFETs integrated over GAA transistors according to any embodiments described herein. In particular, different transistors of the one or more FinFETs integrated over GAA transistors as described herein may be implemented in any of the BEOL layers of the IC device 2100, e.g., in any of the interconnect layers 2106-2110 shown in FIG. 7 . Because there are various possibilities where such FinFETs integrated over GAA transistors (including FinFETs and TFT-based memory integrated over GAA transistors) may be integrated in the IC device 2100, the FinFETs integrated over GAA transistors are not specifically shown in FIG. 7 . In some embodiments, the IC device 2100 may serve as any of the dies 2256 in the IC package 2200.

As shown in FIG. 7 , the IC device 2100 may be formed on a substrate 2102 (e.g., the wafer 2000 of FIG. 6A ) and may be included in a die (e.g., the die 2002 of FIG. 6B ). The substrate 2102 may include any material that may serve as a foundation for an IC device 2100, or, in general, as a foundation for forming one or more FinFETs integrated over GAA transistors according to any embodiments described herein.
In some embodiments, the substrate 2102 may be a semiconductor substrate composed of semiconductor material systems including, for example, N-type or P-type material systems. The substrate may include, for example, a crystalline substrate formed using a bulk silicon or a silicon-on-insulator (SOI) structure. In some embodiments, the substrate 2102 may be formed using alternative materials, which may or may not be combined with silicon, that include, but are not limited to, germanium, silicon germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, aluminum gallium arsenide, aluminum arsenide, indium aluminum arsenide, aluminum indium antimonide, indium gallium arsenide, gallium nitride, indium gallium nitride, aluminum indium nitride, or gallium antimonide, or other combinations of group III-N or group IV materials. Further materials classified as group II-VI or group III-V may also be used to form the substrate 2102 on which logic devices, e.g., the GAA transistors 220 and/or the transistors 2140 as shown in FIG. 7 , may be formed. In some embodiments, the substrate 2102 may be non-crystalline. In some embodiments, the substrate 2102 may be a printed circuit board (PCB) substrate. Although a few examples of the substrate 2102 are described here, any material or structure that may serve as a foundation upon which an IC device 2100 may be built falls within the scope of the present disclosure. The substrate 2102 may be part of a singulated die (e.g., the die 2002 of FIG. 6B ) or a wafer (e.g., the wafer 2000 of FIG. 6A ).

The IC device 2100 may include one or more device layers 2104 disposed on the substrate 2102. The device layer 2104 may include features of one or more transistors 2140 (e.g., metal-oxide-semiconductor field-effect transistors (MOSFETs)) formed on the substrate 2102.
The device layer 2104 may include, for example, one or more S/D regions 2120, a gate 2122 to control current flow in the transistors 2140 between the S/D regions 2120, and one or more S/D contacts 2124 to route electrical signals to/from the S/D regions 2120. The transistors 2140 may include additional features not depicted for the sake of clarity, such as device isolation regions, gate contacts, and the like. In some embodiments, the transistors 2140 may include the GAA transistors 220 as described herein. In other embodiments, the transistors 2140 may be provided in addition to the GAA transistors 220 as described herein.Each transistor 2140 may include a gate 2122 formed of at least two layers, a gate dielectric layer and a gate electrode layer. Generally, the gate dielectric layer of a transistor 2140 may include one layer or a stack of layers, and may include any of the materials described above with reference to the gate dielectric material 252. In some embodiments, an annealing process may be carried out on the gate dielectric of the gate 2122 to improve its quality when a high-k material is used.The gate electrode may be formed on the gate dielectric and may include at least one P-type work function metal or N-type work function metal, depending on whether the transistor 2140 is to be a PMOS or an NMOS transistor. In some implementations, the gate electrode may include a stack of two or more metal layers, where one or more metal layers are work function metal layers and at least one metal layer is a fill metal layer. Further metal layers may be included for other purposes, such as a barrier layer. 
The gate electrode of the gate 2122 may include any of the materials described above with reference to the gate electrode material 254.In some embodiments, when viewed as a cross-section of the transistor 2140 along the source-channel-drain direction, the gate electrode of the gate 2122 may include a U-shaped structure that includes a bottom or a top portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In other embodiments, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In other embodiments, the gate electrode may include a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may include one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers. In some embodiments, the gate electrode may include a V-shaped structure (e.g., when the fin of a FinFET does not have a "flat" upper surface, but instead has a rounded peak).In some embodiments, a pair of sidewall spacers may be formed on opposing sides of the gate stack to bracket the gate stack. The sidewall spacers may be formed from a material such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride. Processes for forming sidewall spacers are well known in the art and generally include deposition and etching process steps. In some embodiments, a plurality of spacer pairs may be used; for instance, two pairs, three pairs, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack.The S/D regions 2120 may be formed within the substrate 2102, e.g., adjacent to the gate of each transistor 2140. 
The S/D regions 2120 may be formed using an implantation/diffusion process or an etching/deposition process, for example. Various transistors 2140 are not limited to the type and configuration depicted in FIG. 7 and may include a wide variety of other types and configurations such as, for example, planar transistors, non-planar transistors (e.g., FinFETs, nanowire, nanosheet, or nanoribbon transistors), or a combination of both. Electrical signals, such as power and/or IO signals, may be routed to and/or from the transistors 2140 of the device layer 2104 through one or more interconnect layers disposed on the device layer 2104 (illustrated in FIG. 7 as interconnect layers 2106-2110). For example, electrically conductive features of the device layer 2104 (e.g., the gate 2122 and the S/D contacts 2124) may be electrically coupled with the interconnect structures 2128 of the interconnect layers 2106-2110. The one or more interconnect layers 2106-2110 may form an ILD stack 2119 of the IC device 2100. The interconnect structures 2128 may be arranged within the interconnect layers 2106-2110 to route electrical signals according to a wide variety of designs (in particular, the arrangement is not limited to the particular configuration of interconnect structures 2128 depicted in FIG. 7 ). Although a particular number of interconnect layers 2106-2110 is depicted in FIG. 7 , embodiments of the present disclosure include IC devices having more or fewer interconnect layers than depicted. In some embodiments, the interconnect structures 2128 may include trench structures 2128A (sometimes referred to as "lines") and/or via structures 2128B (sometimes referred to as "holes") filled with an electrically conductive material such as a metal. The trench structures 2128A may be arranged to route electrical signals in a direction of a plane that is substantially parallel with a surface of the substrate 2102 upon which the device layer 2104 is formed.
For example, the trench structures 2128A may route electrical signals in a direction in and out of the page from the perspective of FIG. 7 . The via structures 2128B may be arranged to route electrical signals in a direction of a plane that is substantially perpendicular to the surface of the substrate 2102 upon which the device layer 2104 is formed. In some embodiments, the via structures 2128B may electrically couple trench structures 2128A of different interconnect layers 2106-2110 together.The interconnect layers 2106-2110 may include a dielectric material 2126 disposed between the interconnect structures 2128, as shown in FIG. 7 . In some embodiments, the dielectric material 2126 disposed between the interconnect structures 2128 in different ones of the interconnect layers 2106-2110 may have different compositions; in other embodiments, the composition of the dielectric material 2126 between different interconnect layers 2106-2110 may be the same. The dielectric material 2126 may include any of the materials described above with reference to the dielectric material 252.A first interconnect layer 2106 (referred to as Metal 1 or "M1") may be formed directly on the device layer 2104. In some embodiments, the first interconnect layer 2106 may include trench structures 2128A and/or via structures 2128B, as shown. The trench structures 2128A of the first interconnect layer 2106 may be coupled with contacts (e.g., the S/D contacts 2124) of the device layer 2104.A second interconnect layer 2108 (referred to as Metal 2 or "M2") may be formed directly on the first interconnect layer 2106. In some embodiments, the second interconnect layer 2108 may include via structures 2128B to couple the trench structures 2128A of the second interconnect layer 2108 with the trench structures 2128A of the first interconnect layer 2106. 
Although the trench structures 2128A and the via structures 2128B are structurally delineated with a line within each interconnect layer (e.g., within the second interconnect layer 2108) for the sake of clarity, the trench structures 2128A and the via structures 2128B may be structurally and/or materially contiguous (e.g., simultaneously filled during a dual-damascene process) in some embodiments.A third interconnect layer 2110 (referred to as Metal 3 or "M3") (and additional interconnect layers, as desired) may be formed in succession on the second interconnect layer 2108 according to similar techniques and configurations described in connection with the second interconnect layer 2108 or the first interconnect layer 2106.Although not specifically shown in FIG. 7 , further metal layers may be present in the IC device 2100.The IC device 2100 may include a solder resist material 2134 (e.g., polyimide or similar material) and one or more bond pads 2136 formed above the top interconnect layers of the IC device. The bond pads 2136 may be electrically coupled with the interconnect structures 2128 and configured to route the electrical signals of the transistor(s) 2140 to other external devices. For example, solder bonds may be formed on the one or more bond pads 2136 to mechanically and/or electrically couple a chip including the IC device 2100 with another component (e.g., a circuit board). The IC device 2100 may have other alternative configurations to route the electrical signals from the interconnect layers 2106-2110 than depicted in other embodiments. For example, the bond pads 2136 may be replaced by or may further include other analogous features (e.g., posts) that route the electrical signals to external components.FIG. 
8 is a side, cross-sectional view of an example IC package 2200 that may include one or more FinFETs integrated over GAA transistors (including FinFETs and TFT-based memory integrated over GAA transistors) in accordance with any of the embodiments disclosed herein. In some embodiments, the IC package 2200 may be a system-in-package (SiP).The package substrate 2252 may be formed of a dielectric material (e.g., a ceramic, a buildup film, an epoxy film having filler particles therein, etc.), and may have conductive pathways extending through the dielectric material between the face 2272 and the face 2274, or between different locations on the face 2272, and/or between different locations on the face 2274. These conductive pathways may take the form of any of the interconnect structures 2128 discussed above with reference to FIG. 7 .The package substrate 2252 may include conductive contacts 2263 that are coupled to conductive pathways 2262 through the package substrate 2252, allowing circuitry within the dies 2256 and/or the interposer 2257 to electrically couple to various ones of the conductive contacts 2264 (or to other devices included in the package substrate 2252, not shown).The IC package 2200 may include an interposer 2257 coupled to the package substrate 2252 via conductive contacts 2261 of the interposer 2257, first-level interconnects 2265, and the conductive contacts 2263 of the package substrate 2252. The first-level interconnects 2265 illustrated in FIG. 8 are solder bumps, but any suitable first-level interconnects 2265 may be used. In some embodiments, no interposer 2257 may be included in the IC package 2200; instead, the dies 2256 may be coupled directly to the conductive contacts 2263 at the face 2272 by first-level interconnects 2265.The IC package 2200 may include one or more dies 2256 coupled to the interposer 2257 via conductive contacts 2254 of the dies 2256, first-level interconnects 2258, and conductive contacts 2260 of the interposer 2257. 
The conductive contacts 2260 may be coupled to conductive pathways (not shown) through the interposer 2257, allowing circuitry within the dies 2256 to electrically couple to various ones of the conductive contacts 2261 (or to other devices included in the interposer 2257, not shown). The first-level interconnects 2258 illustrated in FIG. 8 are solder bumps, but any suitable first-level interconnects 2258 may be used. As used herein, a "conductive contact" may refer to a portion of electrically conductive material (e.g., metal) serving as an interface between different components; conductive contacts may be recessed in, flush with, or extending away from a surface of a component, and may take any suitable form (e.g., a conductive pad or socket).In some embodiments, an underfill material 2266 may be disposed between the package substrate 2252 and the interposer 2257 around the first-level interconnects 2265, and a mold compound 2268 may be disposed around the dies 2256 and the interposer 2257 and in contact with the package substrate 2252. In some embodiments, the underfill material 2266 may be the same as the mold compound 2268. Example materials that may be used for the underfill material 2266 and the mold compound 2268 are epoxy mold materials, as suitable. Second-level interconnects 2270 may be coupled to the conductive contacts 2264. The second-level interconnects 2270 illustrated in FIG. 8 are solder balls (e.g., for a ball grid array arrangement), but any suitable second-level interconnects 2270 may be used (e.g., pins in a pin grid array arrangement or lands in a land grid array arrangement). The second-level interconnects 2270 may be used to couple the IC package 2200 to another component, such as a circuit board (e.g., a motherboard), an interposer, or another IC package, as known in the art and as discussed below with reference to FIG. 
9 .The dies 2256 may take the form of any of the embodiments of the die 2002 discussed herein (e.g., may include any of the embodiments of the IC device 100 as described herein). In embodiments in which the IC package 2200 includes multiple dies 2256, the IC package 2200 may be referred to as a multi-chip package (MCP). The dies 2256 may include circuitry to perform any desired functionality. For example, one or more of the dies 2256 may be logic dies (e.g., silicon-based dies), and one or more of the dies 2256 may be memory dies (e.g., high-bandwidth memory), including dies with the IC devices as described herein. In some embodiments, any of the dies 2256 may include one or more FinFETs integrated over GAA transistors, e.g., as discussed above; in some embodiments, at least some of the dies 2256 may not include any FinFETs integrated over GAA transistors.The IC package 2200 illustrated in FIG. 8 may be a flip chip package, although other package architectures may be used. For example, the IC package 2200 may be a ball grid array (BGA) package, such as an embedded wafer-level ball grid array (eWLB) package. In another example, the IC package 2200 may be a wafer-level chip scale package (WLCSP) or a panel fan-out (FO) package. Although two dies 2256 are illustrated in the IC package 2200 of FIG. 8 , an IC package 2200 may include any desired number of the dies 2256. An IC package 2200 may include additional passive components, such as surface-mount resistors, capacitors, and inductors disposed on the first face 2272 or the second face 2274 of the package substrate 2252, or on either face of the interposer 2257. More generally, an IC package 2200 may include any other active or passive components known in the art.FIG. 
9 is a cross-sectional side view of an IC device assembly 2300 that may include components having one or more FinFETs integrated over GAA transistors (including FinFETs and TFT-based memory integrated over GAA transistors) in accordance with any of the embodiments disclosed herein. The IC device assembly 2300 includes a number of components disposed on a circuit board 2302 (which may be, e.g., a motherboard). The IC device assembly 2300 includes components disposed on a first face 2340 of the circuit board 2302 and an opposing second face 2342 of the circuit board 2302; generally, components may be disposed on one or both faces 2340 and 2342. In particular, any suitable ones of the components of the IC device assembly 2300 may include any of one or more FinFETs integrated over GAA transistors in accordance with any of the embodiments disclosed herein; e.g., any of the IC packages discussed below with reference to the IC device assembly 2300 may take the form of any of the embodiments of the IC package 2200 discussed above with reference to FIG. 8 (e.g., may include one or more FinFETs integrated over GAA transistors provided on a die 2256).In some embodiments, the circuit board 2302 may be a PCB including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias. Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals (optionally in conjunction with other metal layers) between the components coupled to the circuit board 2302. In other embodiments, the circuit board 2302 may be a non-PCB substrate.The IC device assembly 2300 illustrated in FIG. 9 includes a package-on-interposer structure 2336 coupled to the first face 2340 of the circuit board 2302 by coupling components 2316. 
The coupling components 2316 may electrically and mechanically couple the package-on-interposer structure 2336 to the circuit board 2302, and may include solder balls (e.g., as shown in FIG. 9 ), male and female portions of a socket, an adhesive, an underfill material, and/or any other suitable electrical and/or mechanical coupling structure.The package-on-interposer structure 2336 may include an IC package 2320 coupled to an interposer 2304 by coupling components 2318. The coupling components 2318 may take any suitable form for the application, such as the forms discussed above with reference to the coupling components 2316. The IC package 2320 may be or include, for example, a die (the die 2002 of FIG. 6B ), an IC device (e.g., the IC device 100/300), or any other suitable component. In particular, the IC package 2320 may include one or more FinFETs integrated over GAA transistors (including FinFETs and TFT-based memory integrated over GAA transistors) as described herein. Although a single IC package 2320 is shown in FIG. 9 , multiple IC packages may be coupled to the interposer 2304; indeed, additional interposers may be coupled to the interposer 2304. The interposer 2304 may provide an intervening substrate used to bridge the circuit board 2302 and the IC package 2320. Generally, the interposer 2304 may spread a connection to a wider pitch or reroute a connection to a different connection. For example, the interposer 2304 may couple the IC package 2320 (e.g., a die) to a BGA of the coupling components 2316 for coupling to the circuit board 2302. In the embodiment illustrated in FIG. 9 , the IC package 2320 and the circuit board 2302 are attached to opposing sides of the interposer 2304; in other embodiments, the IC package 2320 and the circuit board 2302 may be attached to a same side of the interposer 2304. 
In some embodiments, three or more components may be interconnected by way of the interposer 2304.The interposer 2304 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In some implementations, the interposer 2304 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The interposer 2304 may include metal interconnects 2308 and vias 2310, including but not limited to through-silicon vias (TSVs) 2306. The interposer 2304 may further include embedded devices 2314, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, electrostatic discharge (ESD) protection devices, and memory devices. More complex devices such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on the interposer 2304. The package-on-interposer structure 2336 may take the form of any of the package-on-interposer structures known in the art.The IC device assembly 2300 may include an IC package 2324 coupled to the first face 2340 of the circuit board 2302 by coupling components 2322. The coupling components 2322 may take the form of any of the embodiments discussed above with reference to the coupling components 2316, and the IC package 2324 may take the form of any of the embodiments discussed above with reference to the IC package 2320.The IC device assembly 2300 illustrated in FIG. 9 includes a package-on-package structure 2334 coupled to the second face 2342 of the circuit board 2302 by coupling components 2328. 
The package-on-package structure 2334 may include an IC package 2326 and an IC package 2332 coupled together by coupling components 2330 such that the IC package 2326 is disposed between the circuit board 2302 and the IC package 2332. The coupling components 2328 and 2330 may take the form of any of the embodiments of the coupling components 2316 discussed above, and the IC packages 2326 and 2332 may take the form of any of the embodiments of the IC package 2320 discussed above. The package-on-package structure 2334 may be configured in accordance with any of the package-on-package structures known in the art.FIG. 10 is a block diagram of an example computing device 2400 that may include one or more components with one or more FinFETs integrated over GAA transistors (including FinFETs and TFT-based memory integrated over GAA transistors) in accordance with any of the embodiments disclosed herein. For example, any suitable ones of the components of the computing device 2400 may include a die (e.g., the die 2002 of FIG. 6B ) including one or more FinFETs integrated over GAA transistors in accordance with any of the embodiments disclosed herein. Any of the components of the computing device 2400 may include any embodiments of the IC device 100, the IC device 2100 of FIG. 7 , any combination of these IC devices, and/or an IC package 2200 of FIG. 8 . Any of the components of the computing device 2400 may include an IC device assembly 2300 of FIG. 9 .A number of components are illustrated in FIG. 10 as included in the computing device 2400, but any one or more of these components may be omitted or duplicated, as suitable for the application. In some embodiments, some or all of the components included in the computing device 2400 may be attached to one or more motherboards. 
In some embodiments, some or all of these components are fabricated onto a single SoC die.Additionally, in various embodiments, the computing device 2400 may not include one or more of the components illustrated in FIG. 10 , but the computing device 2400 may include interface circuitry for coupling to the one or more components. For example, the computing device 2400 may not include a display device 2406, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 2406 may be coupled. In another set of examples, the computing device 2400 may not include an audio input device 2418 or an audio output device 2408, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 2418 or audio output device 2408 may be coupled.The computing device 2400 may include a processing device 2402 (e.g., one or more processing devices). As used herein, the term "processing device" or "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The processing device 2402 may include one or more digital signal processors (DSPs), application-specific ICs (ASICs), central processing units, GPUs, cryptoprocessors (specialized processors that execute cryptographic algorithms within hardware), server processors, or any other suitable processing devices. The computing device 2400 may include a memory 2404, which may itself include one or more memory devices such as volatile memory (e.g., DRAM), nonvolatile memory (e.g., read-only memory (ROM)), flash memory, solid state memory, and/or a hard drive. In some embodiments, the memory 2404 may include memory that shares a die with the processing device 2402. 
This memory may be used as cache memory and may include embedded memory, e.g., a memory with FinFETs and TFT-based memory integrated over GAA transistors as described herein. In some embodiments, the computing device 2400 may include a communication chip 2412 (e.g., one or more communication chips). For example, the communication chip 2412 may be configured for managing wireless communications for the transfer of data to and from the computing device 2400. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a nonsolid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 2412 may implement any of a number of wireless standards or protocols, including but not limited to Institute of Electrical and Electronics Engineers (IEEE) standards including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005 Amendment), Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., advanced LTE project, ultramobile broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16 compatible Broadband Wireless Access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 2412 may operate in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network.
The communication chip 2412 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 2412 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), and derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication chip 2412 may operate in accordance with other wireless protocols in other embodiments. The computing device 2400 may include an antenna 2422 to facilitate wireless communications and/or to receive other wireless communications (such as AM or FM radio transmissions).In some embodiments, the communication chip 2412 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., the Ethernet). As noted above, the communication chip 2412 may include multiple communication chips. For instance, a first communication chip 2412 may be dedicated to shorter-range wireless communications such as Wi-Fi or Bluetooth, and a second communication chip 2412 may be dedicated to longer-range wireless communications such as global positioning system (GPS), EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, or others. In some embodiments, a first communication chip 2412 may be dedicated to wireless communications, and a second communication chip 2412 may be dedicated to wired communications.The computing device 2400 may include battery/power circuitry 2414. 
The battery/power circuitry 2414 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the computing device 2400 to an energy source separate from the computing device 2400 (e.g., AC line power).The computing device 2400 may include a display device 2406 (or corresponding interface circuitry, as discussed above). The display device 2406 may include any visual indicators, such as a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display, for example.The computing device 2400 may include an audio output device 2408 (or corresponding interface circuitry, as discussed above). The audio output device 2408 may include any device that generates an audible indicator, such as speakers, headsets, or earbuds, for example.The computing device 2400 may include an audio input device 2418 (or corresponding interface circuitry, as discussed above). The audio input device 2418 may include any device that generates a signal representative of a sound, such as microphones, microphone arrays, or digital instruments (e.g., instruments having a musical instrument digital interface (MIDI) output).The computing device 2400 may include a GPS device 2416 (or corresponding interface circuitry, as discussed above). The GPS device 2416 may be in communication with a satellite-based system and may receive a location of the computing device 2400, as known in the art.The computing device 2400 may include an other output device 2410 (or corresponding interface circuitry, as discussed above). Examples of the other output device 2410 may include an audio codec, a video codec, a printer, a wired or wireless transmitter for providing information to other devices, or an additional storage device.The computing device 2400 may include an other input device 2420 (or corresponding interface circuitry, as discussed above). 
Examples of the other input device 2420 may include an accelerometer, a gyroscope, a compass, an image capture device, a keyboard, a cursor control device such as a mouse, a stylus, a touchpad, a bar code reader, a Quick Response (QR) code reader, any sensor, or a radio frequency identification (RFID) reader. The computing device 2400 may have any desired form factor, such as a handheld or mobile computing device (e.g., a cell phone, a smart phone, a mobile internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultramobile personal computer, etc.), a desktop computing device, a server or other networked computing component, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a vehicle control unit, a digital camera, a digital video recorder, or a wearable computing device. In some embodiments, the computing device 2400 may be any other electronic device that processes data. The following paragraphs provide various examples of the embodiments disclosed herein. Example 1 provides an IC device that includes a support structure (e.g., a substrate, a die, a wafer, or a chip); a first layer, including a plurality of GAA transistors; a second layer, including a plurality of FinFETs; and a third layer, including a memory array that includes a plurality of memory cells, where an individual cell of the plurality of memory cells includes a transistor with a channel region including a thin-film semiconductor material, where the first layer is between the support structure and the second layer (i.e., the second layer is further away from the support structure than the first layer), and the second layer either at least partially overlaps with the third layer (i.e., the third layer may be located at approximately the same level with respect to the support structure as the second layer) or is between the first layer and the third layer (i.e., the third layer
may be further away from the support structure). Example 2 provides the IC device according to example 1, where the plurality of FinFETs includes a first group of FinFETs and a second group of FinFETs, an individual FinFET of the first group includes a gate dielectric of a first thickness, an individual FinFET of the second group includes a gate dielectric of a second thickness, and the second thickness is greater than the first thickness. Thus, the plurality of FinFETs may include relatively low-voltage transistors (the ones of the first group) as well as relatively high-voltage transistors (the ones of the second group). Example 3 provides the IC device according to example 2, where one or more of the FinFETs of the first group are coupled to one or more of the GAA transistors. The relatively low-voltage FinFETs may be coupled to the GAA transistors to provide an XPU circuit over the support structure. Example 4 provides the IC device according to examples 2 or 3, where one or more of the FinFETs of the second group are coupled to one or more of the memory cells.
The relatively high-voltage FinFETs may be coupled to the memory cells to provide logic circuits for controlling operation of the backend memory implemented in the third layer.

Example 5 provides the IC device according to any one of the preceding examples, where an average grain size of the thin-film semiconductor material is smaller than about 0.1 millimeter, e.g., smaller than about 0.05 millimeter, which means that the thin-film semiconductor material may be polymorphous or polycrystalline, due to the relatively low-temperature deposition used to provide such a material in the backend layer of the IC device.

Example 6 provides the IC device according to any one of the preceding examples, where channel regions of the FinFETs include one or more semiconductor materials with an average grain size greater than about 1 millimeter, which means that the semiconductor materials used to form the FinFETs are single-crystalline materials, and, therefore, also means that the semiconductor materials used to form the FinFETs must have been integrated in the IC device using layer transfer.

Example 7 provides the IC device according to any one of the preceding examples, where channel regions of the GAA transistors include one or more semiconductor materials with an average grain size greater than about 1 millimeter, which means that the semiconductor materials used to form the GAA transistors are single-crystalline.

Example 8 provides the IC device according to any one of the preceding examples, where the individual cell of the plurality of memory cells further includes a capacitor to store a bit value, the capacitor coupled to the transistor.

Example 9 provides the IC device according to any one of the preceding examples, where the GAA transistors include nanoribbon transistors.

Example 10 provides the IC device according to any one of the preceding examples, further including a bonding interface between the first layer and the second layer.

Example 11 provides an IC device that
includes a support structure (e.g., a substrate, a die, a wafer, or a chip); a first layer, including a first plurality of transistors, the first plurality of transistors including nanoribbon transistors, nanosheet transistors, or both nanoribbon and nanosheet transistors; and a second layer, including a second plurality of transistors, where channel regions of the second plurality of transistors include one or more semiconductor materials with an average grain size greater than about 1 millimeter, where the first layer is between the support structure and the second layer (i.e., the second layer is further away from the support structure than the first layer).

Example 12 provides the IC device according to example 11, where channel regions of the first plurality of transistors include one or more semiconductor materials with an average grain size greater than about 1 millimeter.

Example 13 provides the IC device according to examples 11 or 12, where the second plurality of transistors includes FinFETs.

Example 14 provides the IC device according to any one of examples 11-13, where the second plurality of transistors includes a first group of transistors and a second group of transistors, an individual transistor of the first group includes a gate dielectric of a first thickness, an individual transistor of the second group includes a gate dielectric of a second thickness, and the second thickness is greater than the first thickness. Thus, the second plurality of transistors may include relatively low-voltage transistors (the ones of the first group) as well as relatively high-voltage transistors (the ones of the second group).

Example 15 provides the IC device according to example 14, where one or more transistors of the first group are coupled to one or more transistors of the first plurality of transistors.
The relatively low-voltage transistors may be coupled to the nanoribbon/nanosheet transistors to provide an XPU circuit over the support structure.

Example 16 provides the IC device according to examples 14 or 15, where the IC device further includes a plurality of memory cells, and one or more transistors of the second group are coupled to one or more memory cells of the plurality of memory cells. The relatively high-voltage transistors may be coupled to the memory cells.

Example 17 provides the IC device according to example 16, where the memory cells include one or more of DRAM cells, SRAM cells, magnetoresistive random-access memory (MRAM) cells, or resistive random-access memory (RRAM) cells.

Example 18 provides the IC device according to any one of examples 11-17, further including a bonding interface between the first layer and the second layer.

Example 19 provides an IC package that includes an IC device according to any one of the preceding examples; and a further IC component, coupled to the IC device.

Example 20 provides the IC package according to example 19, where the further IC component includes one of a package substrate, an interposer, or a further IC die.

In various further examples, the IC device according to any one of the preceding examples may include, or be a part of, at least one of a memory device, a computing device, a wearable device, a handheld electronic device, and a wireless communications device.

Example 21 provides an electronic device that includes a carrier substrate; and one or more of the IC device according to any one of the preceding examples and the IC package according to any one of the preceding examples, coupled to the carrier substrate.

Example 22 provides the electronic device according to example 21, where the carrier substrate is a motherboard.

Example 23 provides the electronic device according to example 21, where the carrier substrate is a PCB.

Example 24 provides the electronic device according to any one of examples 21-23,
where the electronic device is a wearable electronic device (e.g., a smart watch) or handheld electronic device (e.g., a mobile phone).

Example 25 provides the electronic device according to any one of examples 21-24, where the electronic device further includes one or more communication chips and an antenna.

Example 26 provides the electronic device according to any one of examples 21-25, where the electronic device is an RF transceiver.

Example 27 provides the electronic device according to any one of examples 21-25, where the electronic device is one of a switch, a power amplifier, a low-noise amplifier, a filter, a filter bank, a duplexer, an upconverter, or a downconverter of an RF communications device, e.g., of an RF transceiver.

Example 28 provides the electronic device according to any one of examples 21-25, where the electronic device is a computing device.

Example 29 provides the electronic device according to any one of examples 21-28, where the electronic device is included in a base station of a wireless communication system.

Example 30 provides the electronic device according to any one of examples 21-28, where the electronic device is included in a user equipment device (i.e., a mobile device) of a wireless communication system.

Example 31 provides a method of fabricating an IC device.
The method includes providing a first layer of transistors over a support structure, the first layer including a plurality of GAA transistors; performing a layer transfer to provide a second layer of transistors over the first layer, the second layer including a plurality of FinFETs; and providing a third layer over the second layer, the third layer including a plurality of memory cells, where an individual cell of the plurality of memory cells includes a transistor with a channel region comprising a thin-film semiconductor material.

Example 32 provides the method according to example 31, where the support structure is a first support structure, and where performing the layer transfer includes transferring a layer of a substantially single-crystalline semiconductor material grown on a second support structure to be over the first layer over the first support structure, and forming the FinFETs using portions of the substantially single-crystalline semiconductor material transferred to be over the first layer over the first support structure as channel regions of the FinFETs.

Example 33 provides the method according to examples 31 or 32, where the plurality of FinFETs includes a first group of FinFETs and a second group of FinFETs, and where the method further includes coupling one or more of the FinFETs of the first group to one or more of the GAA transistors, and coupling one or more of the FinFETs of the second group to one or more of the memory cells.

Example 34 provides the method according to any one of examples 31-33, where the GAA transistors include nanoribbon transistors or nanosheet transistors.

Example 35 provides the method according to any one of examples 31-34, further including processes for forming the IC device according to any one of the preceding examples (e.g., for forming the IC device according to any one of examples 1-18).

Example 36 provides the method according to any one of examples 31-35, further including processes for forming the
IC package according to any one of the preceding examples (e.g., for forming the IC package according to any one of examples 19-20).

Example 37 provides the method according to any one of examples 31-36, further including processes for forming the electronic device according to any one of the preceding examples (e.g., for forming the electronic device according to any one of examples 21-30).

The above description of illustrated implementations of the disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. While specific implementations of, and examples for, the disclosure are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize. These modifications may be made to the disclosure in light of the above detailed description.
A natural and immersive data annotation system for space-time artificial intelligence in robotic and smart spaces is disclosed. An annotation device may receive 4D sensor data representing a first scene, the 4D sensor data of the first scene including points representing a human limb in the first scene. The annotation device may also receive 4D data representing a second scene, the 4D data of the second scene including a plurality of points representing a feature in the second scene. In addition, the annotation device may generate a first tree data structure representing occupancy of the human limb in the first scene based on the points, and generate a second tree data structure representing occupancy of the second scene based on the plurality of points. The annotation device may map the first tree data structure and the second tree data structure to a reference frame. The annotation device may determine whether there is a tree-to-tree data structure intersection of the feature and the human limb within the reference frame, and may annotate the feature based on the tree-to-tree data structure intersection.
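The pipeline summarized above (build an occupancy tree for the limb and for the scene, map both into a shared reference frame, test for intersection, annotate) can be pictured with a deliberately simplified sketch. Here the occupancy trees are reduced to sets of occupied voxel cells, and every name (`voxelize`, `to_reference_frame`, `annotate`) is hypothetical; the disclosure does not specify an implementation.

```python
# Hypothetical sketch of the annotation data flow described above.
# Occupancy trees are approximated as sets of occupied (x, y, z, t) voxel
# cells; all names are invented for illustration.

def voxelize(points, cell=0.05):
    """Quantize (x, y, z, t) points into occupied cells (a stand-in for a tree)."""
    return {tuple(int(c // cell) for c in p) for p in points}

def to_reference_frame(cells, offset=(0, 0, 0, 0)):
    """Map occupied cells into a shared reference coordinate system."""
    return {tuple(c + o for c, o in zip(cell, offset)) for cell in cells}

def annotate(limb_points, feature_points, label):
    """Return the label if the limb and the feature occupy a common cell."""
    limb = to_reference_frame(voxelize(limb_points))
    feature = to_reference_frame(voxelize(feature_points))
    return label if limb & feature else None
```

For instance, a limb point at (0.01, 0.01, 0.01, 0.0) and a feature point at (0.02, 0.03, 0.01, 0.0) fall into the same cell, so `annotate` would return the label.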
1. A system including an annotation device, comprising:

a memory having computer-readable instructions stored thereon; and

a processor operably coupled to the memory and configured to read and execute the computer-readable instructions to perform or control the performance of operations including:

receiving four-dimensional (4D) sensor data representing a first scene, the 4D sensor data including points representing a human limb in the first scene;

receiving 4D data representing a second scene and including a plurality of points representing a feature in the second scene;

generating a first tree data structure representing occupancy of the human limb in the first scene based on the points, and generating a second tree data structure representing occupancy of the second scene based on the plurality of points;

mapping the first tree data structure and the second tree data structure to a reference coordinate system;

determining whether a tree-to-tree data structure intersection of the feature and the human limb exists within the reference coordinate system; and

annotating the feature based on the tree-to-tree data structure intersection.

2. The system of claim 1, wherein the plurality of points comprises a second plurality of points, the points form part of a first plurality of points, and wherein the 4D sensor data comprises a coordinate system representing the first scene at a particular time, and receiving the 4D sensor data representing the first scene includes:

generating a plurality of point clouds, each of the plurality of point clouds comprising a portion of the first plurality of points;

determining a timestamp associated with the particular time; and

identifying the points representing the human limb.

3. The system of any one of claims 1-2, wherein the 4D sensor data further comprises color data corresponding to the points according to at least one of an RGB color space, an HSV color space, and a LAB color space.

4.
The system of claim 2, wherein the first plurality of points comprises a plurality of 4D points, the operations further comprising determining parameters of each 4D point of the plurality of 4D points.

5. The system of claim 4, wherein determining the parameters of each 4D point of the plurality of 4D points comprises:

determining an X coordinate, a Y coordinate, a Z coordinate, and a time coordinate of each 4D point of the plurality of 4D points relative to the first scene; and

determining a color of each 4D point of the plurality of 4D points.

6. The system of any one of claims 1, 4, and 5, further comprising a 3D sensor and a color sensor configured to generate the 4D sensor data, the operations further comprising:

determining a physical location of the 3D sensor relative to the color sensor; and

calibrating the 4D sensor data based on the physical location of the 3D sensor relative to the color sensor.

7. The system of claim 6, wherein the 3D sensor includes an accelerometer and a gyroscope, the operations further comprising using the accelerometer to determine a physical location of the 3D sensor relative to the first scene and a highest point corresponding to the first scene.

8. The system of claim 6, wherein the 4D sensor data includes a plurality of coordinate systems representing the first scene, the operations further comprising:

determining movement of the 3D sensor relative to a previous coordinate system of the plurality of coordinate systems; and

calibrating the 4D sensor data based on the movement of the 3D sensor relative to the previous coordinate system.

9. The system of claim 1, wherein generating the first tree data structure representing occupancy of the human limb in the first scene based on the points comprises:

generating a motion coordinate system representing the 4D sensor data; and

mapping the motion coordinate system to a predefined reference coordinate system, wherein the predefined reference coordinate system corresponds to the first scene.

10.
The system of claim 1, wherein the 4D data includes a plurality of coordinate systems representing the second scene, and wherein receiving the 4D data representing the second scene comprises aggregating partial coordinate systems of the plurality of coordinate systems into a single coordinate system that includes points representing the feature in each of the partial coordinate systems.

11. The system of claim 1, wherein determining whether the tree-to-tree data structure intersection of the feature and the human limb exists within the reference coordinate system comprises: determining whether the tree-to-tree data structure intersection of the feature and the human limb includes a surface description indicating that a continuous surface within the second scene is to be annotated, wherein the feature is located within the continuous surface.

12. A method, comprising:

receiving four-dimensional (4D) sensor data representing a first scene, the 4D sensor data including points representing a human limb in the first scene;

receiving 4D data representing a second scene and including a plurality of points representing a feature in the second scene;

generating a first tree data structure representing occupancy of the human limb in the first scene based on the points, and generating a second tree data structure representing occupancy of the second scene based on the plurality of points;

mapping the first tree data structure and the second tree data structure to a reference coordinate system;

determining whether a tree-to-tree data structure intersection of the feature and the human limb exists within the reference coordinate system; and

annotating the feature based on the tree-to-tree data structure intersection.

13.
The method of claim 12, wherein the plurality of points comprises a second plurality of points, the points form part of a first plurality of points, and wherein the 4D sensor data comprises a coordinate system representing the first scene at a particular time, and receiving the 4D sensor data representing the first scene includes:

generating a plurality of point clouds, each of the plurality of point clouds comprising a portion of the first plurality of points;

determining a timestamp associated with the particular time; and

identifying the points representing the human limb.

14. The method of any one of claims 12-13, wherein the 4D sensor data further comprises color data corresponding to the points according to at least one of an RGB color space, an HSV color space, and a LAB color space.

15. The method of claim 13, wherein the first plurality of points comprises a plurality of 4D points, the method further comprising determining parameters of each 4D point of the plurality of 4D points.

16. The method of claim 15, wherein determining the parameters of each 4D point of the plurality of 4D points comprises:

determining an X coordinate, a Y coordinate, a Z coordinate, and a time coordinate of each 4D point of the plurality of 4D points relative to the first scene; and

determining a color of each 4D point of the plurality of 4D points.

17. The method of any one of claims 12, 15, and 16, further comprising:

determining a physical location of a 3D sensor relative to a color sensor; and

calibrating the 4D sensor data based on the physical location of the 3D sensor relative to the color sensor.

18.
The method of claim 17, wherein the 4D sensor data includes a plurality of coordinate systems representing the first scene, the method further comprising:

determining movement of the 3D sensor relative to a previous coordinate system of the plurality of coordinate systems; and

calibrating the 4D sensor data based on the movement of the 3D sensor relative to the previous coordinate system.

19. The method of claim 12, wherein generating the first tree data structure representing occupancy of the human limb in the first scene based on the points comprises:

generating a motion coordinate system representing the 4D sensor data; and

mapping the motion coordinate system to a predefined reference coordinate system, wherein the predefined reference coordinate system corresponds to the first scene.

20. The method of claim 12, wherein the 4D data comprises a plurality of coordinate systems representing the second scene, and wherein receiving the 4D data representing the second scene comprises aggregating partial coordinate systems of the plurality of coordinate systems into a single coordinate system that includes points representing the feature in each of the partial coordinate systems.

21. A system, comprising:

means for receiving four-dimensional (4D) sensor data representing a first scene, the 4D sensor data including points representing a human limb in the first scene;

means for receiving 4D data representing a second scene and including a plurality of points representing a feature in the second scene;

means for generating a first tree data structure representing occupancy of the human limb in the first scene based on the points, and generating a second tree data structure representing occupancy of the second scene based on the plurality of points;

means for mapping the first tree data structure and the second tree data structure to a reference coordinate system;

means for determining whether a tree-to-tree data structure
intersection of the feature and the human limb exists within the reference coordinate system; and

means for annotating the feature based on the tree-to-tree data structure intersection.

22. The system of claim 21, wherein the plurality of points comprises a second plurality of points, the points form part of a first plurality of points, and wherein the 4D sensor data comprises a coordinate system representing the first scene at a particular time, and the means for receiving the 4D sensor data representing the first scene comprises:

means for generating a plurality of point clouds, each of the plurality of point clouds comprising a portion of the first plurality of points;

means for determining a timestamp associated with the particular time; and

means for identifying the points representing the human limb.

23. The system of claim 21, further comprising:

means for determining a physical location of a 3D sensor relative to a color sensor; and

means for calibrating the 4D sensor data based on the physical location of the 3D sensor relative to the color sensor.

24. The system of claim 21, wherein the means for generating the first tree data structure representing occupancy of the human limb in the first scene based on the points comprises:

means for generating a motion coordinate system representing the 4D sensor data; and

means for mapping the motion coordinate system to a predefined reference coordinate system, wherein the predefined reference coordinate system corresponds to the first scene.

25. The system of claim 21, wherein the 4D data includes a plurality of coordinate systems representing the second scene, and the means for receiving the 4D data representing the second scene includes means for aggregating partial coordinate systems of the plurality of coordinate systems into a single coordinate system, the single coordinate system including points representing the feature in each of the partial coordinate systems.
Natural and Immersive Data Annotation System for Space-Time Artificial Intelligence in Robotics and Smart Spaces

Technical Field

Aspects discussed in this disclosure relate to natural and immersive data annotation systems for spatiotemporal artificial intelligence for robotics and smart spaces.

Background

Unless otherwise indicated in this disclosure, the materials described in this disclosure are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

A computing device may perform supervised machine learning (SML) using annotated data, the annotated data including labels identifying features within the annotated data. Annotated data can be generated by grouping raw data into segments, regions, or intervals based on labels. For example, raw data may be grouped based on features (e.g., physical objects, degrees of freedom, discrete events). Users can review the raw data and identify features to determine which labels to associate with the features. An autonomous device can use an SML model to control the operation of the autonomous device. The autonomous device may identify features in the current operating environment based on the SML model.

The subject matter claimed in this disclosure is not limited to aspects that address any disadvantages, or that operate only in environments such as those described above.
Rather, this background is provided only to illustrate one example technology area in which some of the described aspects of this disclosure may be practiced.

Brief Description of the Drawings

Example aspects will be described and explained with additional features and details through the use of the accompanying drawings, in which:

FIG. 1 illustrates a block diagram of an example environment for data annotation of raw data;

FIG. 2 illustrates a volumetric representation of an example environment including a three-dimensional (3D) workspace for data annotation;

FIG. 3 illustrates an example volumetric representation of raw data that may be displayed in the 3D workspace of FIGS. 1 and 2;

FIG. 4 illustrates example surface manifolds selectable by a user within the 3D workspace of FIGS. 1 and 2;

FIG. 5 illustrates an example flow diagram of a method of annotating raw data using raw data and a volumetric representation of a 3D workspace;

FIG. 6 illustrates an example system for providing a perceptual user interface (PUI); and

FIG. 7 illustrates an example flow diagram for annotating features within raw data;

all in accordance with at least one aspect described in this disclosure.

Detailed Description

The following detailed description refers to the accompanying drawings, which show, by way of illustration, exemplary details in which aspects of the present disclosure may be practiced. The word "exemplary" is used in this application to mean "serving as an example, instance, or illustration." Any aspect or design described in this application as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Throughout the drawings, it should be noted that, unless otherwise indicated, the same reference numerals are used to depict the same or similar elements, features, and structures.

The phrases "at least one" and "one or more" can be understood to include a quantity greater than or equal to one (e.g., one, two, three, four, [...], etc.).
The phrase "at least one of" in reference to a group of elements may be used herein to mean at least one element from the group of elements. For example, the phrase "at least one of" in reference to a set of elements may be used herein to mean a selection of: one of the listed elements, a plurality of one of the listed elements, a plurality of individual listed elements, or a plurality of a multiple of individual listed elements.

The words "plural" and "multiple" in the specification and claims expressly refer to a quantity greater than one. Thus, any phrase that specifically refers to a number of elements by invoking the above words (e.g., "a plurality of [elements]", "multiple [elements]") specifically refers to more than one of said elements. For example, the phrase "a plurality" may be understood to include a quantity greater than or equal to two (e.g., two, three, four, five, [...], etc.).

The phrases "group of (...)", "set of (...)", "collection of (...)", "series of (...)", "sequence of (...)", "grouping of (...)", etc., in the specification and claims, if present, refer to a quantity equal to or greater than one, i.e., one or more. The terms "proper subset", "reduced subset", and "lesser subset" refer to a subset of a set that is not equal to the set, illustratively, a subset that contains fewer elements than the set.

The term "data" as used herein may be understood to include information in any suitable analog or digital form, e.g., as a file, a part of a file, a collection of files, a signal or stream, a part of a signal or stream, a collection of signals or streams, etc., to provide information. Further, the term "data" may also be used to mean a reference to information, e.g., in the form of a pointer.
However, the term "data" is not limited to the above examples and may take various forms and represent any information as understood in the art.

The terms "processor" or "controller" as used herein may be understood as any kind of technical entity that allows processing of data. The data may be processed according to one or more specific functions executed by the processor or controller. Further, a processor or controller as used herein may be understood as any kind of circuit, e.g., any kind of analog or digital circuit. A processor or controller may thus be or include an analog circuit, a digital circuit, a mixed-signal circuit, a logic circuit, a processor, a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a field programmable gate array (FPGA), an integrated circuit, an application specific integrated circuit (ASIC), etc., or any combination thereof. Any other kind of implementation of the respective functions, which will be described below in further detail, may also be understood as a processor, controller, or logic circuit. It should be understood that any two (or more) of the processors, controllers, or logic circuits detailed herein may be realized as a single entity with equivalent functionality or the like, and conversely that any single processor, controller, or logic circuit detailed herein may be realized as two (or more) separate entities with equivalent functionality or the like.

As used herein, "memory" is understood as a computer-readable medium (e.g., a non-transitory computer-readable medium) in which data or information can be stored for retrieval. References to "memory" included herein may thus be understood as referring to volatile or non-volatile memory, including random access memory (RAM), read-only memory (ROM), flash memory, solid-state storage, magnetic tape, hard disk drive, optical drive, 3D XPoint™, etc., or any combination thereof.
Herein, registers, shift registers, processor registers, data buffers, etc. may also be encompassed by the term memory. The term "software" refers to any type of executable instruction, including firmware.

Unless explicitly specified, the term "transmit" encompasses both direct (point-to-point) and indirect (via one or more intermediate points) transmission. Similarly, the term "receive" encompasses both direct and indirect reception. Furthermore, the terms "transmit," "receive," "communicate," and other similar terms encompass both physical transmission (e.g., the transmission of radio signals) and logical transmission (e.g., the transmission of digital data over a logical software-level connection). For example, a processor or controller may transmit or receive data in the form of radio signals over a software-level connection with another processor or controller, where the physical transmission and reception is handled by radio-layer components such as RF transceivers and antennas, and the logical transmission and reception over the software-level connection is performed by the processors or controllers. The term "communicate" encompasses one or both of transmitting and receiving, i.e., unidirectional or bidirectional communication in one or both of the incoming and outgoing directions. The term "calculate" encompasses both 'direct' calculations via a mathematical expression/equation/relationship and 'indirect' calculations via lookup or hash tables and other array indexing or searching operations.

A computing device may perform supervised machine learning (SML) (e.g., correction learning) using annotated data that includes labels identifying features within the annotated data. Examples of SML may include backpropagation in neural networks, deep neural networks, Gaussian processes, or any other suitable SML. Annotated data can be generated by labeling the raw data so as to group the raw data into segments, regions, or intervals based on the labels.
For example, raw data may be grouped based on features (e.g., physical objects, degrees of freedom, discrete events).

To generate annotated data, the user can review the raw data and identify features to determine which labels to associate with the features. Users can select labels from predefined label classifications. In some aspects, the predefined label classifications may be based on the application of the SML. The computing device may perform SML using the annotated data to identify features in an environment that are identical or similar to features labeled in the annotated data.

Computing devices can generate SML models that can be used to control autonomous vehicles, guide robotic devices, or control other types of autonomous devices. The autonomous device may identify features in the current operating environment based on the SML model. Additionally, the autonomous device can determine actions to perform with respect to these features based on the SML model. For example, an autonomous device can determine whether to stop, bypass, or accelerate past a feature in the environment based on the SML model.

During the annotation process, some data annotation techniques display the representation of the raw data as a two-dimensional (2D) representation. Additionally, these data annotation techniques may receive user input via a 2D graphical user interface (GUI) (e.g., via mouse clicks and keystrokes). Displaying data as used in this disclosure (e.g., displaying raw data, sensor data, annotated data, etc.) includes displaying a representation of the data via a display device.

When the raw data includes four-dimensional (4D) (e.g., space-time) data, these data annotation techniques may produce artifacts or other numerical ambiguities. For example, these data annotation techniques may display and annotate each coordinate system of the raw data (e.g., time slicing may not be performed).
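The predefined label classifications described above can be pictured as a simple lookup keyed by the SML application. This is a hypothetical sketch only; the classifications, category names, and functions shown (`LABEL_TAXONOMY`, `valid_labels`, `tag_feature`) are invented for illustration and do not appear in the disclosure.

```python
# Illustrative only: a predefined label taxonomy keyed by SML application.
# The categories shown are invented examples, not part of the disclosure.
LABEL_TAXONOMY = {
    "autonomous_driving": ["pedestrian", "vehicle", "traffic_sign"],
    "robotic_grasping": ["graspable", "fragile", "obstacle"],
}

def valid_labels(application):
    """Return the predefined label classification for an SML application."""
    return LABEL_TAXONOMY.get(application, [])

def tag_feature(feature_id, application, label):
    """Associate a label with a feature, rejecting labels outside the taxonomy."""
    if label not in valid_labels(application):
        raise ValueError(f"{label!r} is not a predefined label for {application!r}")
    return {"feature": feature_id, "label": label}
```

Restricting annotations to a per-application taxonomy in this way keeps the labeled data consistent with what the downstream SML model expects.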
As another example, these data annotation techniques may only display the raw data as a one-dimensional (1D), 2D, or 2.5D perspective, without performing color coding or transparency/opacity. These data annotation techniques may generate ambiguity in the feature representation within the raw data.

Some data annotation techniques may allow the user to alternate between views and navigation modes of the raw data (e.g., between annotation views, configuration views, color views, etc.) to identify features within the raw data. These data annotation techniques can hinder effective annotation of raw data including 4D data and can increase the labor, time, and cost associated with the annotation of 4D data.

Some data annotation techniques can display raw data as a stereoscopic view via a head-mounted display (e.g., a virtual reality (VR) headset or an augmented reality (AR) headset). These data annotation techniques can include controllers that provide a limited number of degrees of freedom to label features within the raw data.

Some data annotation techniques can generate skeletal representations of users to annotate raw data. These data annotation techniques can generate skeletal representations based on sensor data. However, since it is not a volumetric representation of the sensor data, the skeletal representation may be unstable (e.g., the skeletal representation may wobble or disappear depending on the lighting of the environment or the pose of the human). Additionally, these data annotation techniques may not display the raw data as a 3D representation (e.g., a volumetric representation) that the user can interact with.

These data annotation techniques may include limited labeling capabilities. For example, some controllers may only include six degrees of freedom (e.g., joystick states) for selecting and labeling a feature. Additionally, these data annotation techniques may rely on the user's controller-eye coordination.
For example, the user's controller-eye coordination may determine the efficiency of selecting features within the raw data using a joystick, mouse, or some other controller. Further, these data annotation techniques may increase the physical demands on the user (e.g., the physical load imposed by the controller) to label features.

These data annotation techniques can cause the controller to draw power, which can drain the controller's battery. Charging or replacing the battery within the controller may increase the amount of time it takes to annotate raw data. These data annotation techniques can also cause users to spend time learning system protocols and menu sequences, which can increase the amount of time spent annotating raw data.

Some aspects described in this disclosure may annotate raw data based on controllerless gestures, movements, virtual manipulations, or some combination thereof, performed by the user relative to the volumetric representation of the raw data within a 3D workspace. These aspects can implement computational geometry and machine vision to capture gestures, motion, and virtual manipulation of the raw data to annotate the raw data and generate annotated data.

Some aspects described in this disclosure may generate a volumetric digital representation of a human limb (e.g., a hand, arm, leg, finger, or any other body part) that is physically positioned within a 3D workspace. These aspects can also display volumetric digital representations of the raw data within the 3D workspace. These aspects can annotate features based on an octree-to-octree intersection of the volumetric representation of the human limb and the volumetric representation of a feature within a subspace of the 3D workspace.

An octree may include a tree-like data structure including multiple internal nodes (e.g., parent nodes, child nodes, or nodes of any other suitable generation). In some aspects, each internal node of the tree-like data structure may include eight child nodes.
In these and other aspects, each node in the octree can subdivide the 3D workspace into eight octants.

Additionally, the raw data may include multiple coordinate systems representing the occupancy of the environment at different time periods. Some aspects described in this disclosure may perform time slicing to generate a single coordinate system that represents an aggregation of the coordinate systems within the raw data. For example, a single coordinate system may display aggregated features that represent the location and occupancy of the aggregated features across all coordinate systems.

Some aspects described in this disclosure may include systems. The system may include an annotation device and one or more sensors. The annotation device may include a memory and a processor. The memory may include computer-readable instructions. The processor may be operatively coupled to the memory. Additionally, the processor may read and execute the computer-readable instructions to perform or control the performance of the operations of the annotation device.

The annotation device can receive 4D sensor data. The 4D sensor data may represent a 3D workspace (e.g., a first scene). In some aspects, the 4D sensor data may include a sequential collection of 3D datasets. In these and other aspects, the 3D datasets may form at least a portion of the 4D sensor data. The 4D sensor data may include a plurality of points representing any part of a human (e.g., a human limb) physically positioned within the 3D workspace. The annotation device may also receive raw data (e.g., 4D data representing a second scene). The raw data may include multiple points representing features in the raw data. Additionally, the annotation device may generate a first octree representing the human occupancy within the 3D workspace. In some aspects, a first octree can be generated for each point cloud captured.
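The octree construction just described — occupied nodes recursively subdividing a cube into eight octants up to a fixed depth — can be sketched as follows. This is a minimal illustrative sketch; the class, method, and parameter names are assumptions for illustration and are not taken from the disclosure.

```python
# Minimal octree sketch: each occupied node subdivides its cube into eight
# octants until a maximum depth is reached. All names are illustrative.

class OctreeNode:
    def __init__(self, center, radius, depth=0, max_depth=4):
        self.center = center          # (x, y, z) center of this cube
        self.radius = radius          # half the cube's edge length
        self.depth = depth
        self.max_depth = max_depth
        self.children = None          # eight child nodes once subdivided
        self.occupied = False

    def _child_index(self, point):
        # One bit per axis selects one of the eight octants.
        cx, cy, cz = self.center
        x, y, z = point
        return (x >= cx) | ((y >= cy) << 1) | ((z >= cz) << 2)

    def insert(self, point):
        self.occupied = True
        if self.depth == self.max_depth:
            return
        if self.children is None:
            r = self.radius / 2.0
            self.children = [
                OctreeNode(
                    (self.center[0] + (r if i & 1 else -r),
                     self.center[1] + (r if i & 2 else -r),
                     self.center[2] + (r if i & 4 else -r)),
                    r, self.depth + 1, self.max_depth)
                for i in range(8)
            ]
        self.children[self._child_index(point)].insert(point)

    def leaf_voxels(self):
        # Collect occupied leaf cubes (the discrete volume units).
        if self.children is None:
            return [(self.center, self.radius)] if self.occupied else []
        return [v for c in self.children for v in c.leaf_voxels()]


root = OctreeNode(center=(0.0, 0.0, 0.0), radius=1.0, max_depth=2)
root.insert((0.6, 0.6, 0.6))
# The single occupied leaf voxel sits in the (+,+,+) octant at depth 2.
```

One such tree could be built per captured point cloud, as the passage above suggests, with `max_depth` controlling the size of the discrete volume units.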
The annotation device may also identify the portion of the human within the 3D workspace that corresponds to the human limb. The annotation device may generate the first octree based on the points within the 4D sensor data. The annotation device may generate a second octree representing the occupancy of the features in the raw data. The annotation device may generate the second octree based on the points in the raw data.

The annotation device may map the first octree and the second octree to a reference coordinate system. In some aspects, the first octree and the second octree may be mapped to the reference coordinate system as 3D information in the sensor data domain. The reference coordinate system may include the aggregated coordinate system discussed elsewhere in this disclosure. The annotation device can also determine whether there is an octree-to-octree intersection between a feature in the raw data and the human limb within the reference coordinate system. The annotation device may annotate the feature based on the octree-to-octree intersection of the first octree and the second octree.

At least one aspect of the annotation device described in this disclosure can annotate 4D raw data while reducing the workload on the user, the annotation device, or some combination thereof. Additionally, the annotation device can associate complex markup commands with individually specified gestures to increase the user's freedom. Increasing the user's freedom can make the annotation process more effective and efficient. Additionally, the annotation device and sensors can eliminate the use of controllers, controller-eye coordination, and the learning curve of the annotation device, which can reduce the user's workload. Reducing the user's workload can reduce the amount of time spent annotating data.
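One way to realize the map-and-intersect step described above is to quantize both point sets into a shared voxel grid (a flattened stand-in for the two octrees mapped to the reference coordinate system) and intersect the occupied cells. The sketch below is illustrative only; the function names, the flat-grid simplification, and the label format are assumptions rather than the disclosure's method.

```python
# Sketch of the octree-to-octree intersection test: both point sets are
# quantized into a shared voxel grid (standing in for the two octrees
# mapped to the reference coordinate system), and features whose voxels
# overlap the limb's voxels are annotated. Illustrative, not normative.

def voxelize(points, voxel_size):
    """Map each (x, y, z) point to the index of the voxel containing it."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def annotate_by_intersection(limb_points, features, voxel_size, label):
    """Return {feature_name: label} for every feature the limb intersects.

    `features` maps a feature name to its list of (x, y, z) points.
    """
    limb_voxels = voxelize(limb_points, voxel_size)
    annotations = {}
    for name, pts in features.items():
        if voxelize(pts, voxel_size) & limb_voxels:  # shared occupied cell
            annotations[name] = label
    return annotations
```

For example, a fingertip point near a feature's surface lands in the same voxel as the feature, so the feature receives the gesture's label, while distant features are left unlabeled.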
Further, the annotation device and sensors can reduce or eliminate hardware maintenance, which can reduce the amount of downtime during the annotation process.

These and other aspects of the present disclosure will be explained with reference to the accompanying drawings. It is to be understood that the drawings are diagrammatic and schematic representations of such example aspects and are not limiting, nor are they necessarily drawn to scale. In the drawings, unless otherwise indicated, like-numbered features indicate similar structures and functions.

FIG. 1 illustrates a block diagram of an example environment 100 for data annotation of raw data 110 in accordance with at least one aspect described in this disclosure. The environment 100 may include an annotation device 102, a graphical user interface (GUI) 108, raw data 110, domain classification data 112, annotated data 114, a first sensor 116a, a second sensor 116b, and a 3D workspace 118. The first sensor 116a and the second sensor 116b are generally referred to as sensors 116 in this disclosure.

The annotation device 102 may include a human-centric representation 104 and a PUI 106. Additionally, the annotation device 102 may include a memory (not shown) and a processor (not shown). The memory may include computer-readable instructions stored thereon. The processor may be operatively coupled to the memory. The processor may read and execute the computer-readable instructions stored in the memory to perform or control the performance of the operations of the annotation device 102.

The raw data 110 may include 4D data representing features within an environment (e.g., the second scene) over a period of time. The raw data 110 may include multiple coordinate systems representing the environment during the time period. For example, the raw data 110 may include 4D data obtained from a multi-modal and multi-instance (MMI) arrangement of sensors within the operating environment of a mobile robot.
In some aspects, the 4D data may include a representation of the height of a feature (e.g., the Y coordinate), the width of the feature (e.g., the X coordinate), the depth of the feature (e.g., the Z coordinate), and the time coordinate (e.g., the T coordinate) corresponding to the current coordinate system.

The domain classification data 112 may include unstructured labels corresponding to specific applications of SML. For example, the domain classification data 112 may include labels corresponding to navigating autonomous devices within the environment.

The 3D workspace 118 may correspond to the first scene (e.g., a physical scene or tangible space) and any features that are physically located within the physical scene. In some aspects, a 3D workspace may include a volume containing a physical scene. In these and other aspects, the 3D workspace may not be depicted in the physical world, but may be depicted in a virtual representation of the physical world. For example, the 3D workspace 118 may only be depicted in a virtual representation displayed in the human-centric representation 104 via a VR headset, AR headset, or any other suitable display device.

The sensors 116 may be physically positioned relative to the 3D workspace 118. Additionally, the sensors 116 may generate 4D sensor data corresponding to the 3D workspace 118. For example, the first sensor 116a may include a 3D sensor, and the second sensor 116b may include a color sensor that generates information representing coordinates and colors of features within the 3D workspace 118. In some aspects, the information representing the colors of features within the 3D workspace 118 may include colors according to the RGB color space, the HSV color space, the LAB color space, or some combination thereof.
In some aspects, the information representing the color of a feature may indicate one or more coordinates associated with the particular color.

Additionally, the sensors 116 may include accelerometers, gyroscopes, or some combination thereof. The accelerometer, gyroscope, or combination thereof may indicate movement of the sensor 116, the physical positioning of the sensor 116 relative to the highest point corresponding to the 3D workspace, or some combination thereof.

In some aspects, the 4D sensor data may include points representing features within the 3D workspace 118. In these and other aspects, the 4D sensor data may include multiple coordinate systems representing the 3D workspace at different time periods.

The GUI 108 may include fields for the user to select labels from the domain classification data 112. Additionally, the GUI 108 may include fields for starting, stopping, pausing, or some combination thereof, of the annotation process. Further, the GUI 108 may include fields for associating a user's particular gesture with a particular label from the domain classification data 112. In some aspects, the GUI 108 may be displayed to a user via a monitor (e.g., a computer monitor, VR headset, AR headset, etc.). In some aspects, the GUI 108 may provide user instructions to the user.

The annotation device 102 may display the volumetric representation of the raw data as a human-centric representation 104. The human-centric representation 104 may include a 3D representation of the raw data for the user to interact with during the annotation process. Additionally, the display of the human-centric representation 104 may include the PUI 106. The PUI 106 may include fields for the user to select labels from the domain classification data 112. The PUI 106 may also include fields for starting, stopping, pausing, or some combination thereof, of the annotation process. Further, the PUI 106 may include a field for associating a user's particular gesture with a particular label from the domain classification data 112.
In some aspects, the PUI 106 may provide user instructions to the user.

The annotation device 102 may receive 4D sensor data representing the 3D workspace 118 from the sensors 116. In some aspects, the annotation device may determine the physical location of the first sensor 116a relative to the second sensor 116b based on the 4D sensor data. Additionally, the annotation device 102 may calibrate the 4D sensor data based on the physical locations of the sensors 116 relative to each other. In some aspects, the sensors 116 may calibrate the 4D sensor data based on the physical locations of the sensors 116 relative to each other.

The annotation device 102 can determine the movement of a sensor relative to the previous coordinate system within the 4D sensor data. For example, the annotation device 102 may determine whether the first sensor 116a moved relative to the second sensor 116b between a first coordinate system and a second coordinate system. Additionally, the annotation device 102 may calibrate the 4D sensor data based on the movement of the sensors 116 relative to each other between coordinate systems. In some aspects, the sensors 116 may calibrate the 4D sensor data based on the movement of the sensors 116 relative to each other between coordinate systems.

The annotation device 102 may determine the physical location of the sensors 116 relative to the 3D workspace. In some aspects, the annotation device 102 may determine the physical location of the sensors 116 based on sensor data generated by an accelerometer, a gyroscope, or some combination thereof of the sensors 116. Additionally, the annotation device 102 may calibrate the 4D sensor data based on the physical location of the sensors 116 relative to the 3D workspace 118.

The annotation device 102 may capture a point cloud based on the points within the 4D sensor data. In some aspects, each point cloud may include a portion of the points within the 4D sensor data.
Additionally, the annotation device 102 may determine the time corresponding to each coordinate system of the 4D sensor data. For example, the annotation device 102 may determine timestamps associated with one or more coordinate systems within the 4D sensor data. The annotation device 102 may identify points, point clouds, or some combination thereof within the 4D sensor data that represent the occupancy of the 3D workspace 118.

The annotation device 102 may receive the raw data 110 (e.g., 4D data representing the second scene). The annotation device 102 may determine parameters of one or more 4D points within the raw data 110. For example, the annotation device 102 may determine the height of a 4D point (e.g., the Y coordinate), the width of the 4D point (e.g., the X coordinate), the depth of the 4D point (e.g., the Z coordinate), the time corresponding to the 4D point (e.g., the T coordinate), the color of the 4D point, or some combination thereof.

The annotation device 102 may aggregate portions of the coordinate systems within the raw data. In some aspects, the annotation device 102 may perform time slicing by aggregating features within multiple coordinate systems into a single aggregated feature that includes points representing each of the features.

The annotation device 102 may generate a first octree representing the 4D sensor data (e.g., the 3D workspace 118). The first octree may indicate the occupancy of the human limb within the 3D workspace 118. The annotation device 102 may generate the first octree based on the points within the 4D sensor data. The first octree may include discrete volume units (e.g., volume elements or voxels) that include radii and dimensions.

The annotation device 102 may also generate a second octree representing the raw data (e.g., the second scene). The second octree may indicate the occupancy of the features within the second scene. The annotation device 102 may generate the second octree based on the points within the raw data 110.
The second octree may include discrete volume units (e.g., volume elements or voxels) that include radii and dimensions.

The annotation device 102 may map the first octree and the second octree to a reference coordinate system. In some aspects, the reference coordinate system may include a single radial dimension and a single discrete volume unit size. The annotation device 102 may map the first octree and the second octree to the reference coordinate system such that the radii and discrete volume unit sizes are unified.

The annotation device 102 may determine whether there is an octree-to-octree intersection of a feature in the raw data 110 and the human limb within the 3D workspace 118 based on the reference coordinate system. In some aspects, the annotation device 102 may determine whether the discrete volume units of the first octree and the discrete volume units of the second octree intersect the same or similar subspaces within the reference coordinate system.

In response to the annotation device 102 determining that an octree-to-octree intersection exists, the annotation device 102 may annotate the corresponding feature within the raw data 110 based on the octree-to-octree intersection. For example, the annotation device 102 may label the corresponding feature based on gestures, human limbs, or other actions of the user within the 3D workspace 118. The annotation device 102 may generate the annotated data 114 based on the discrete volume units of the octree-to-octree intersection within the reference coordinate system. The annotated data 114 may include buckets or other methods of organizing the corresponding features together within the annotated data 114, arranged, sorted, segmented, or in any other suitable order.

FIG. 2 illustrates a volumetric representation 200 of an example environment including a 3D workspace 202 for data annotation, according to at least one aspect described in the present disclosure. The 3D workspace 202 may correspond to the 3D workspace 118 of FIG. 1.
The 3D workspace 202 is illustrated in FIG. 2 for purposes of example. In some aspects, the 3D workspace 202 may not be depicted in the volumetric representation 200. In other aspects, the 3D workspace 202 may be depicted in the volumetric representation 200. The volumetric representation 200 may include virtual representations of features within the environment. The volumetric representation 200 may be generated based on the 4D sensor data.

The volumetric representation 200 may include a first subject 204 and a second subject 208 (both illustrated as humans in FIG. 2). The second subject 208 may be physically positioned outside of the 3D workspace 202. Portions 214 of the first subject 204 (e.g., the torso, legs, and parts of the head and arms) may be physically positioned outside of the 3D workspace 202. The volumetric representation 200 may also include a background surface 212. In some aspects, the background surface 212 may form a boundary of the 3D workspace 202. In other aspects, the background surface 212 may be physically positioned at some distance away from the boundaries of the 3D workspace 202.

As illustrated in FIG. 2, portions of the first subject 204 may be physically positioned within the 3D workspace 202. For example, the portions of the first subject 204 that are physically positioned within the 3D workspace 202 may include portions of an arm 206 and a head 210.

Portions of the environment outside of the 3D workspace 202 may be represented as non-discrete volume unit representations. For example, the second subject 208, the background surface 212, or some combination thereof may be represented as non-discrete volume unit representations that are indicative of the characteristics of the second subject 208, the background surface 212, or some combination thereof. Non-discrete volume unit representations may include lines, shading, or other representations that indicate the corresponding features.
In some aspects, portions of the environment outside of the 3D workspace 202 may not be included in the volumetric representation 200.

Portions of the environment 200 within the 3D workspace 202 may be illustrated as discrete volume unit representations indicating the corresponding features. For example, as illustrated in FIG. 2, the portions of the arm 206 and the head 210 within the 3D workspace 202 are illustrated as discrete volume unit representations. Discrete volume unit representations may include voxels (e.g., cubes or other volume-based shapes) that represent the corresponding features.

The 3D workspace 202 may define a portion of the environment 200 in which volumetric representations of raw data may be displayed. For ease of illustration, the volumetric representation of the raw data is not illustrated in FIG. 2. As illustrated in FIG. 2, the arm 206 may interact with portions of the volumetric representation of the raw data within the 3D workspace 202. An octree-to-octree intersection of the arm 206 and features of the raw data can be determined, as discussed elsewhere in this disclosure.

In some aspects, features of a subject physically positioned within the 3D workspace 202 (e.g., portions of the head 210) may be identified as not corresponding to a selected limb, and may be filtered out, as discussed elsewhere in this disclosure.

FIG. 3 illustrates an example volumetric representation 300 of raw data that may be displayed in the 3D workspaces 118, 202 of FIGS. 1 and 2 in accordance with at least one aspect described in the present disclosure. The volumetric representation 300 may include virtual representations of features within the raw data.

The volumetric representation 300 may include a first feature 301 and a second feature 303 (both illustrated as vehicles in FIG. 3). FIG. 3 also illustrates a detailed view 302 of a portion of the first feature 301 and a detailed view 304 of a portion of the second feature 303. The volumetric representation 300 may also include a third feature 305.
The raw data may represent the environment from the perspective of the first feature 301 (e.g., a vehicle represented as the first feature 301 may include sensors that generate 4D raw data as it traverses the environment). In some aspects, the third feature 305 may represent a sign, pedestrian, animal, tree, or any other suitable feature within the environment.

In some aspects, a user can interact with the volumetric representation 300 within the 3D workspace to annotate the raw data and label features, as discussed elsewhere in this disclosure. For example, the user may label the second feature 303 as a vehicle (specifically, the user may select the detailed view 304 of the second feature 303 as the view corresponding to a tire of the second feature 303), as discussed elsewhere in this disclosure. As another example, the user may label the detailed view 302 of the first feature 301 as corresponding to a side view mirror of the first feature 301, as discussed elsewhere in this disclosure. As yet another example, the user may label the third feature 305 as a sign, pedestrian, animal, tree, or any other suitable feature.

FIG. 4 illustrates example surface manifolds 402, 404 selectable by a user within the 3D workspaces 118, 202 of FIGS. 1 and 2 in accordance with at least one aspect described in the present disclosure. The surface manifolds 402, 404 may be generated based on user input within the 3D workspace. For example, the user input may select multiple points within the volumetric representation 300 of FIG. 3 to form a continuous surface (e.g., the surface manifolds 402, 404). Each feature of the volumetric representation 300 that lies within the surface manifolds 402, 404 may be labeled accordingly.

FIG. 5 illustrates an example flow diagram of a method 500 for annotating raw data using volumetric representations of the raw data and a 3D workspace in accordance with at least one aspect described in the present disclosure.
The method 500 may be performed by any suitable system, apparatus, or device for annotating raw data. For example, the annotation device 102, the sensors 116, the GUI 108, the PUI 106, or some combination thereof of FIG. 1 may perform or direct the performance of one or more of the operations associated with the method 500. The method 500 may include one or more of blocks 502, 504, 506, 508, 510, 512, 514, 516, 518, 520, 522, 524, 526, and 528. Although illustrated with discrete blocks, depending on the particular implementation, operations associated with one or more blocks of the method 500 may be divided into additional blocks, combined into fewer blocks, or eliminated.

At block 502, the annotation device may receive 3D and RGB sensor signals. In some aspects, the 3D and RGB sensor signals may correspond to the 4D sensor data. In some aspects, the sensor may generate the 4D sensor data to indicate the depth and color of features within the 3D workspace. In some aspects, the sensor may generate the 4D sensor data to include a point cloud (e.g., a collection of points) corresponding to the current time of the coordinate system. The annotation device can capture and represent point clouds according to Equation 1.

{X_i ∈ R^3}_{0 ≤ i < n}   (Equation 1)

In Equation 1, n represents the number of points within the corresponding point cloud, X_i represents a point in 3D space, i represents an integer indicating the current point, and R^3 represents the Euclidean space over the real numbers. In some aspects, a Euclidean space can include n-1 dimensions. The annotation device can capture and represent point streams of point clouds of multiple coordinate systems over a period of time according to Equation 2.

{X_i ∈ R^4}_{0 ≤ i < n}   (Equation 2)

In Equation 2, X_i represents the current point in the Euclidean space, i represents an integer indicating the current point, R^4 represents the Euclidean space including a time dimension over the real numbers, and n represents the number of points within the corresponding point cloud.
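In code, Equations 1 and 2 amount to storing a point cloud as an n×3 array and a point stream as an n×4 array with a time column. A minimal sketch follows; the array layout and the sample values are assumptions for illustration, not specified by the disclosure.

```python
import numpy as np

# Equation 1: a point cloud {X_i in R^3}, 0 <= i < n, as an (n, 3) array.
n = 4
cloud = np.array([[0.0, 1.0, 2.0],
                  [0.5, 1.5, 2.5],
                  [1.0, 2.0, 3.0],
                  [1.5, 2.5, 3.5]])
assert cloud.shape == (n, 3)

# Equation 2: a point stream {X_i in R^4} adds a time coordinate per point,
# so multiple coordinate systems (frames) can share one array.
times = np.array([0.0, 0.0, 1.0, 1.0])      # per-point frame timestamps
stream = np.column_stack([cloud, times])     # (n, 4): x, y, z, t
assert stream.shape == (n, 4)

# Time slicing in the sense of Equation 2: dropping t aggregates every
# frame's points into one static coordinate system.
aggregated = stream[:, :3]
assert aggregated.shape == (n, 3)
```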
The sensor can perform time slicing according to Equation 2, which can aggregate multiple coordinate systems into a single static coordinate system. In some aspects, the sensors may provide 4D sensor data indicative of the texture or appearance of features within the 3D workspace. The annotation device may determine, according to Equation 3, 4D sensor data indicative of the texture or appearance of features within the 3D workspace over a period of time.

I_[t0, t1](u, v) → {C ∈ R^h}   (Equation 3)

In Equation 3, C represents a color, R represents the Euclidean space, I_[t0, t1] represents the temporal extent of the current coordinate system, and h represents an integer indicating the number of dimensions within the Euclidean space. Block 502 may be followed by block 510.

At block 504, the annotation device or sensor may perform 3D and RGB sensor calibration. For example, the annotation device or sensor may calibrate the 4D sensor data based on the physical locations of the sensors relative to each other, the 3D workspace, or some combination thereof.

The annotation device or sensor can perform a motion transformation from the sensor coordinate system to the point cloud coordinate system according to Equation 4 and Equation 5.

K ∈ R^{3x3}   (Equation 5)

In Equation 4, P represents the point cloud coordinate system, C represents the sensor coordinate system, T represents the 4x4 rigid transformation matrix, and SE3 represents the rigid transformation. In Equation 5, K represents the projection matrix of the motion transformation.
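The rigid transform T of Equation 4 and the projection matrix K of Equation 5 can be applied as below to map a 3D point into pixel coordinates, which is the step that lets a color be paired with each point in the spirit of Equation 6. The matrix values here are illustrative placeholders, not calibration values from the disclosure.

```python
import numpy as np

# Illustrative rigid transform T (4x4, in the spirit of Equation 4) and
# pinhole projection matrix K (3x3, Equation 5). Values are placeholders.
T = np.eye(4)
T[:3, 3] = [0.0, 0.0, 1.0]           # translate 1 unit along the z-axis
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_xyz, T, K):
    """Map a 3D point into pixel coordinates (u, v) via T, then K."""
    p_cam = (T @ np.append(point_xyz, 1.0))[:3]  # into the sensor frame
    u, v, w = K @ p_cam
    return u / w, v / w

u, v = project(np.array([0.0, 0.0, 1.0]), T, K)
# A point on the optical axis projects to the principal point (320, 240),
# where the color image I(u, v) could then be sampled.
```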
A sensor or annotation device can determine the color associated with each point according to Equation 6. In Equation 6, X_i ∈ R^3 denotes the point stream, K ∈ R^{3x3} denotes the motion transform of Equation 4 and Equation 5, X_i denotes the position of the current point, C_i denotes the color of the current point, a further term denotes the color and depth data, R denotes a Euclidean space, h denotes an integer indicating the number of dimensions within the Euclidean space, and R^3 denotes a Euclidean space including a time dimension over the real numbers. Block 504 may be followed by block 510.

At block 506, the annotation device may receive inertial sensor signals. In some aspects, the sensors may include inertial measurement units (IMUs) (e.g., accelerometers, gyroscopes, or some combination thereof). The IMU may provide the linear acceleration, rotational velocity, or some combination thereof of the sensor, which is used to determine the motion transformation. The annotation device can calibrate the 4D sensor data based on the linear acceleration, rotational velocity, or some combination thereof. The annotation device or sensor may determine the current motion coordinate system as compared to the previous motion coordinate system (e.g., the initial earth motion coordinate system) according to Equation 7. The terms of Equation 7 represent the current motion coordinate system oriented relative to the apex, the previous motion coordinate system oriented relative to the apex, the rotational acceleration between the coordinate systems, the relative velocity w between the coordinate systems, and the linear acceleration between the coordinate systems. Block 506 may be followed by block 512.

At block 508, the annotation device or sensor may perform inertial sensor calibration. In some aspects, the annotation device or sensor can calibrate the 4D sensor data based on the orientation of the highest point relative to the sensor corresponding to the 3D workspace.
For example, the annotation device or sensor can calibrate the 4D sensor data relative to the Earth's horizon. The annotation device or sensor can also filter noisy inertial measurements out of the 4D sensor data. Block 508 may be followed by block 512.

At block 510, the annotation device may generate a scene XYZ-RGB point cloud. In some aspects, the annotation device can determine the physical location and corresponding color of each point in the 4D sensor data. The physical locations and corresponding colors of the points in the 4D sensor data can represent the scene. Block 510 may be followed by block 514.

At block 512, the annotation device may perform sensor pose translation and rotation. In some aspects, the annotation device may transform the motion coordinate system representing the 3D workspace into the reference coordinate system according to Equation 8. The terms of Equation 8 represent the motion coordinate system oriented relative to the highest point, Gc represents the application space bounding coordinate system (e.g., the calibration matrix), and Tk represents the combined transformation that maps the 3D points of the 3D workspace to the application space bounding coordinate system (e.g., the mapped reference coordinate system). Block 512 may be followed by block 514.

At block 514, the annotation device may perform human limb XYZ-RGB sub-cloud segmentation to identify features within the 4D sensor data corresponding to the human limb.
In some aspects, the annotation device may use a classifier according to Equation 9 to identify features corresponding to human limbs. In Equation 9, P_[t0,t1] represents the current point cloud, X_i represents the position of the current point, C_i represents the color of the current point, a further term represents the color and depth data, K represents the projection matrix of the motion transformation, and I_[t0, t1] represents the temporal extent of the current coordinate system.

In some aspects, the annotation device may use a classifier according to Equation 10 to identify features corresponding to human limbs. In Equation 10, P_[t0,t1] represents the previous point cloud, P_[t1,t2] represents the current point cloud, X_i represents the position of the current point, C_i represents the color of the current point, LA represents a left arm feature, RA represents a right arm feature, RL represents a right leg feature, LL represents a left leg feature, 0 represents an unidentified feature, and N represents the set of positive integers. In some aspects, N may represent the set of positive integers excluding 0. In other aspects, N may represent the set of positive integers including 0.

In some aspects, the annotation device can map colors from one color space to another. For example, the annotation device may map colors from the RGB color space, the HSV color space, the LAB color space, or some combination thereof, to a different color space. In some aspects, the annotation device can perform surface modeling to identify features corresponding to human limbs. Block 514 may be followed by block 516.

At block 516, the annotation device may create a human limb octree. In some aspects, the annotation device may generate a first octree (e.g., a human limb octree) based on the features corresponding to the human limbs. In these and other aspects, the first octree may include a plurality of discrete volume units (e.g., voxels).
The size of the discrete volume units may be variable based on the application or the 4D sensor data. The first octree may include root nodes (e.g., eight root nodes). The annotation device may determine whether each of the root nodes is occupied (e.g., includes points corresponding to human limbs). If a root node is occupied, the annotation device may divide the corresponding root node into multiple child nodes (e.g., eight child nodes). The annotation device may repeat this process in each generation of nodes until a predefined number of generations of nodes has been generated.

The first octree may include discrete-volume-unit representations of human limbs within the 3D workspace such that Equation 11 is satisfied. In Equation 11, P[t0,t1] represents the previous point cloud, P[t1,t2] represents the current point cloud, Xi represents the position of the current point, Ci represents the color of the current point, and p represents the registered voxel world.

The annotation device may create a root node corresponding to the first of the points according to Equation 12. In Equation 12, the leading terms represent basis vectors spanning Euclidean space, X0 represents the center of the discrete volume unit, R0 represents the radius of the discrete volume unit, and x, y, and z represent the corresponding determined coordinates.

In some aspects, the annotation device may generate child nodes recursively according to Equation 13:

V(X0, R0/2^m)   Equation 13

In Equation 13, m represents an integer greater than or equal to 1, X0 represents the point center in Euclidean space, and R0 represents the radius of the root discrete volume unit. If a point is contained within a root node but is not stored within a leaf node, the annotation device can perform discrete-volume-unit interpolation according to Equation 14. In Equation 14, Xa represents the first point in the corresponding 3D space, and Xb represents the second point in the corresponding 3D space.
In some aspects, the annotation device may use Equation 14 (e.g., the function H) to determine the insertion index of a node from one orientation to another. In some aspects, the annotation device may perform the insertion process recursively. In these and other aspects, the function H may be fixed based on whether an index-to-space mapping is created. Block 516 may be followed by block 524.

At block 518, the annotation device may generate a scene time-range point cloud. In some aspects, the annotation device can determine the physical location and corresponding color of each point in the raw data. The physical location and corresponding color of each point in the raw data can represent the scene. The annotation device may divide the raw data into time intervals (e.g., time slices). In some aspects, the annotation device may perform time slicing of the raw data to generate a number of single aggregated coordinate systems, each of the single aggregated coordinate systems representing multiple coordinate systems within the raw data. A single coordinate system may include aggregated features representing each feature within the respective coordinate systems. The aggregated features can be displayed in the volumetric representation of the raw data as if each feature in the corresponding coordinate systems occurred simultaneously. Block 518 may be followed by block 520.

At block 520, the annotation device may create a scene-data octree. In some aspects, the annotation device may generate a second octree (e.g., a scene-data octree) based on features within the raw data. In these and other aspects, the second octree may include a plurality of discrete volume units (e.g., voxels). The size of the discrete volume units may be variable based on the application or the 4D sensor data. The second octree may include root nodes (e.g., eight root nodes). The annotation device may determine whether each of the root nodes is occupied (e.g., includes a point corresponding to a feature).
If a root node is occupied, the annotation device may divide the corresponding root node into multiple child nodes (e.g., eight child nodes). The annotation device may repeat this process in each generation of nodes until a predefined number of generations of nodes has been generated. The second octree may include discrete-volume-unit representations of features within the raw data, such that Equation 11 is satisfied using the raw data rather than the 4D sensor data. The annotation device can use the raw data instead of the 4D sensor data to create the root nodes according to Equation 12. The annotation device can indicate points within the root nodes as discrete volume units. In some aspects, the annotation device may generate child nodes recursively according to Equation 13 using the raw data instead of the 4D sensor data. If a point is contained within a root node but is not stored within a leaf node, the annotation device can perform discrete-volume-unit interpolation according to Equation 14 using the raw data instead of the 4D sensor data. Block 520 may be followed by block 524.

At block 522, the annotation device may determine whether the second octree intersects a previous octree. In some aspects, the annotation device may identify a previous octree related to the second octree. The annotation device may compare the previous octree to the second octree to determine whether a feature within the second octree has already been annotated. In some aspects, if the feature is already annotated, the annotation device can prevent any further annotation. Block 522 may be followed by block 524.

At block 524, the annotation device may perform 3D subspace annotation of the first octree and the second octree based on the intersection of the first octree and the second octree. In some aspects, the annotation device can map the first octree and the second octree to a reference coordinate system.
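The occupancy-octree construction described for blocks 516 and 520, where an occupied node is split into eight children and splitting repeats for a predefined number of generations, can be sketched as follows. The nested-dictionary node representation, function names, and the choice to keep only occupied children are illustrative assumptions; the radius halving per generation follows the Equation 13 idea.

```python
# Minimal recursive occupancy octree: split an occupied cubic node into
# eight children (radius halved each generation) until the depth budget
# is exhausted. Only occupied children are retained.

def build_octree(points, center, radius, depth):
    """Return a nested dict describing the occupied nodes of an octree."""
    inside = [p for p in points
              if all(abs(p[i] - center[i]) <= radius for i in range(3))]
    node = {"center": center, "radius": radius, "occupied": bool(inside)}
    if inside and depth > 0:
        children = []
        half = radius / 2.0  # Equation 13 idea: radius halves per generation
        for dx in (-half, half):
            for dy in (-half, half):
                for dz in (-half, half):
                    c = (center[0] + dx, center[1] + dy, center[2] + dz)
                    child = build_octree(inside, c, half, depth - 1)
                    if child["occupied"]:
                        children.append(child)
        node["children"] = children
    return node

pts = [(0.1, 0.1, 0.1), (-0.6, 0.4, -0.2)]
tree = build_octree(pts, center=(0.0, 0.0, 0.0), radius=1.0, depth=2)
print(tree["occupied"], len(tree["children"]))  # True 2
```

The same routine serves both the human-limb octree (fed with the 4D sensor points) and the scene-data octree (fed with the raw-data points).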
In these and other aspects, the annotation device may determine a scalar volume created by the first octree and another scalar volume created by the second octree. The annotation device may map the first octree and the second octree to the reference coordinate system based on the scalar volumes.

In some aspects, if the discrete volume units are occupied, the annotation device may output uniformly sized discrete volume units according to Equation 15. In Equation 15, xa represents the center of a discrete volume unit in the first octree, ra represents the radius of the discrete volume unit in the first octree, xb represents the center of a discrete volume unit in the second octree, rb represents the radius of the discrete volume unit in the second octree, and the final term represents the predefined target radius of the reference coordinate system.

In some aspects, the annotation device may determine according to Equation 16 whether two discrete volume units within the reference coordinate system comprise the same or similar subspaces. In Equation 16, xa represents the center of the first discrete volume unit, xb represents the center of the second discrete volume unit, ra represents the radius of the first discrete volume unit, and rb represents the radius of the second discrete volume unit.

In some aspects, if the two discrete volume units include an octree-to-octree intersection, the annotation device can annotate the corresponding features in the raw data accordingly. Block 524 may be followed by block 526 and block 528.

At block 526, the annotation device may perform contact estimation. In some aspects, the annotation device may determine an amount by which the first octree and the second octree intersect. The annotation device may generate an ordered list of data points based on the distance from the outermost node to the innermost node where the first and second octrees intersect.

At block 528, the annotation device may execute a shape descriptor.
In some aspects, the annotation device may determine whether the user indicated that a surface manifold is to be generated based on the discrete volume units of the octree-to-octree intersection. The annotation device can determine push-pull surface operators based on the surface manifold.

Modifications, additions, or omissions may be made to method 500 without departing from the scope of the present disclosure. For example, the operations of method 500 may be performed in a different order. Additionally or alternatively, two or more operations may be performed at the same time. Furthermore, the outlined operations and actions are provided by way of example only, and some of the operations and actions may be optional, may be combined into fewer operations and actions, or may be expanded into additional operations and actions without departing from the nature of the disclosed aspects.

FIG. 6 illustrates an example system 600 for providing a PUI 606 in accordance with at least one aspect described in this disclosure. The system 600 may include an annotation system 602 and a plurality of applications 612a-n. The annotation system 602 may include a sensor 608, an IMU 610, the PUI 606, and a display 614. The display may include a VR display, an AR display, or any other type of display. The sensor 608 may include a camera, a light detection and ranging (LIDAR) sensor, or a radio detection and ranging (RADAR) sensor. The IMU 610 may include an accelerometer, a gyroscope, or any other suitable inertial sensor.

The user can interact with the PUI 606 to generate annotated data. The applications 612a-n may include different machine learning algorithms that perform SML using the annotated data.

FIG. 7 illustrates an example flow diagram for annotating features within raw data in accordance with at least one aspect described in this disclosure.
The method 700 may include: receiving 4D sensor data representing a first scene, the 4D sensor data including points 702 representing a human limb in the first scene; receiving 4D data representing a second scene and including a plurality of points 704 representing features in the second scene; mapping the first octree and the second octree to a reference coordinate system 708; determining whether there is an octree-to-octree intersection 710 of the feature and the human limb within the reference coordinate system; and annotating the feature based on the octree-to-octree intersection.

The computing device may generate an SML model based on the annotated data. To generate the annotated data, a user can review the raw data and identify features to determine which labels to associate with the features within the raw data. Users can select labels from predefined label categories. In some aspects, the predefined label classifications may be based on the application of the SML. The computing device may perform SML using the annotated data to identify features in an environment that are identical or similar to the features labeled in the annotated data.

In some aspects, a human-centric representation of the raw data can be generated that reduces the human perception workload and increases the efficiency of the annotation process. These and other aspects can extend human-computer interaction (HCI) by generating and displaying volumetric representations of raw data that allow users to interact with the representation. Additionally, these and other aspects can extend HCI by generating volumetric representations of human limbs within a 3D workspace.

In some aspects, the raw data may include 4D sensor data generated by a 3D multimodal sensor and a color camera. Annotation devices can bidirectionally combine an immersive and interactive representation of the raw data with a physical representation of the user within the 3D workspace.
In some aspects, the annotation device can combine these representations through the user's Boolean supervoxel-manipulation interaction model and the volumetric representation of the raw data. For example, an annotation device can determine the volumetric discretization of a human limb physically positioned within the 3D workspace through dense visual reconstruction and sparse voxelization.

Annotation devices can display the raw data as an immersive and interactive representation based on virtual objects to provide users with effective annotation control and feedback. For example, a user can virtually grab and manipulate features of implicit surfaces that define an orientation as a means for labeling features.

The annotation device may perform discrete spatial management via discrete volume units (e.g., volume elements or voxels) that include radii and dimensions. The annotation device can perform union, intersection, subtraction, inversion, or any other suitable operation to identify features to be labeled in the raw data. For example, annotation devices can perform point and feature touch, 3D/4D area selection, 3D/4D area enclosure, envelope push and pull, and sculpt modifiers.

The annotation device may segment, concatenate, merge, or perform some combination thereof on the 4D sensor data and the raw data via flexible mathematical transformations such as Boolean set expressions, generalized continuous projection, and swept discrete volumes. Sensors can capture user actions within the 3D workspace. Sensors can generate 4D sensor data.
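The union, intersection, subtraction, and inversion operations on discrete volume units described above can be sketched with plain Python sets of voxel indices. The voxel-index representation and the bounded "universe" of indices are illustrative assumptions; a real implementation would operate on octree nodes rather than flat sets.

```python
# Boolean set operations over voxel indices, as a stand-in for the
# supervoxel-manipulation operations (union, intersection, subtraction,
# inversion) used to select features for labelling.

limb_voxels = {(1, 1, 1), (1, 1, 2), (2, 1, 1)}
scene_voxels = {(1, 1, 2), (2, 1, 1), (3, 3, 3)}

union = limb_voxels | scene_voxels          # voxels touched by either set
intersection = limb_voxels & scene_voxels   # octree-to-octree overlap
subtraction = scene_voxels - limb_voxels    # scene minus the limb
universe = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
inversion = universe - scene_voxels         # everything not in the scene

print(sorted(intersection))  # [(1, 1, 2), (2, 1, 1)]
```

Set semantics make these operations composable, which mirrors how the Boolean set expressions above can be chained to carve out a 3D/4D selection region.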
Annotation devices can calibrate the 4D sensor data to adjust visual control points by reshaping directional implicit functions for segmentation, apply push or pull sculpt modifiers to finely bend 3D/4D segmentation marker boundaries, translate, scale, and rotate the entities of the annotation process (geometry primitives and control gadgets), or some combination thereof.

The system may include an annotation device, sensors, and a PUI for receiving user input and providing user instructions. The annotation device may include a memory and a processor. The memory may include computer-readable instructions stored thereon. The processor may be operably coupled to the memory. The processor may read and execute the computer-readable instructions to perform or control the performance of the operations of the annotation device.

Sensors can generate 4D sensor data. In some aspects, the sensors may comprise a 3D sensor, a color sensor, a 3D motion camera, a stereo camera, LIDAR, RADAR, or some combination thereof. The sensors may be configured to capture and generate 4D sensor data via computational geometry and machine vision based on 4D space occupancy, user gestures, user actions, the user's virtual manipulations, or some combination thereof. Additionally, one or more of the sensors may include an accelerometer, a gyroscope, or some combination thereof.

Annotation devices can receive the 4D sensor data. The 4D sensor data may represent a first scene. The 4D sensor data may include points representing a human limb in the first scene. In some aspects, the first scene may correspond to a 3D workspace. The 4D sensor data may include structural information of the 3D workspace to capture the physical scene. In some aspects, the points within the 4D sensor data may comprise 4D points. In some aspects, the 4D sensor data may include color data corresponding to points within the 4D sensor data.
In some aspects, the color data may be generated according to at least one of an RGB color space, an HSV color space, and a LAB color space. The 4D sensor data may include coordinate systems representing the 3D workspace over a period of time. Each coordinate system within the raw data can represent the 3D workspace at a particular point in time. The 4D sensor data may include a set of 3D points depicting a first scene containing the user and some empty space within the 3D workspace.

The annotation device can determine the physical locations of the sensors relative to each other. For example, an annotation device can determine the physical location of a 3D sensor relative to a color sensor. Annotation devices can calibrate the 4D sensor data based on the physical locations of the sensors relative to each other. In some aspects, the annotation device may calibrate the sensors according to Equation 6, the 4D sensor data, or some combination thereof. In some aspects, the sensors may perform the calibration steps described in this disclosure.

The annotation device can determine the movement of the sensors between coordinate systems relative to each other, the 3D workspace, or some combination thereof. For example, the annotation device may determine the movement of the 3D sensor relative to the color sensor between a previous coordinate system and a current coordinate system within the 4D sensor data. The annotation device may calibrate the 4D sensor data based on the movement of the sensors between coordinate systems relative to the 3D workspace, each other, or some combination thereof.

In some aspects, the annotation device can determine parameters for each 4D point in the 4D sensor data. In these and other aspects, the annotation device may determine the X coordinate, Y coordinate, Z coordinate, time coordinate, or some combination thereof, of each 4D point relative to the 3D workspace.
Additionally, the annotation device may determine a color corresponding to one or more of the 4D points in the 4D sensor data.

The annotation device may identify points within the 4D sensor data that correspond to human limbs within the 3D workspace. In some aspects, the annotation device can identify points corresponding to human limbs according to Equation 10. In some aspects, Equation 10 may include a function for mapping a point Xi in 3D space with an associated color Ci by utilizing the current point cloud P[t1,t2] and the previous point cloud P[t0,t1]. The previous point cloud and the current point cloud can operate as contextual cues that allow the annotation device to determine whether the current point belongs to the set of digital labels {LA = left-arm class, RA = right-arm class, LL = left-leg class, RL = right-leg class}. In some aspects, if the current point does not belong to a digital label, the annotation device may mark the point as "0", indicating that the current point does not belong to a digital label.

In some aspects, the annotation device can determine the physical location of the sensor relative to the highest point of the 3D workspace. For example, a sensor may implement an accelerometer to detect the physical location of the sensor relative to the highest point of the 3D workspace.

Annotation devices can capture point clouds within the 4D sensor data. Each point cloud may include a portion of the points within the 4D sensor data. In some aspects, the annotation device can capture and represent point clouds according to Equation 1. The annotation device can determine the timestamp of each point. In some aspects, the annotation device may capture and represent a point stream of point clouds over multiple coordinate systems over a period of time according to Equation 2.

The annotation device can identify points within the 4D sensor data that correspond to human limbs.
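The point-cloud and point-stream representations of Equations 1 and 2 above (n points Xi in R3 per cloud, extended to R4 by a time coordinate) can be sketched as follows. The `Point4D` field names and the frame-list input format are illustrative assumptions.

```python
# A point cloud is a list of 3D points; a point stream flattens several
# timestamped clouds (coordinate systems) into 4D points, one time
# coordinate per frame, following the Equation 1 / Equation 2 idea.

from dataclasses import dataclass

@dataclass
class Point4D:
    x: float
    y: float
    z: float
    t: float  # timestamp of the coordinate system (frame) the point belongs to

def point_stream(frames):
    """frames: list of (timestamp, [(x, y, z), ...]) -> flat list of Point4D."""
    return [Point4D(x, y, z, t) for t, cloud in frames for (x, y, z) in cloud]

frames = [(0.0, [(0.1, 0.2, 0.3)]),
          (0.1, [(0.1, 0.2, 0.3), (1.0, 1.0, 1.0)])]
stream = point_stream(frames)
print(len(stream), stream[-1].t)  # 3 0.1
```

Carrying the timestamp on every point is what later allows the time slicing and time-window alignment described for blocks 518 and beyond.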
In some aspects, the annotation device may determine, according to Equation 3, 4D sensor data indicative of the texture or appearance of features within the 3D workspace over a period of time. In some aspects, the annotation device may use a classifier according to Equation 9 to identify features corresponding to human limbs. In other aspects, the annotation device may use a classifier according to Equation 10 to identify features corresponding to human limbs.

The annotation device may receive raw data (e.g., 4D data) representing a second scene. The raw data may include points representing features in the second scene. In some aspects, the raw data may include multiple coordinate systems representing the second scene. In some aspects, the annotation device can aggregate different sets of coordinate systems into different single coordinate systems. A single coordinate system may include points representing the features in the corresponding set of coordinate systems.

The annotation device may generate a first octree representing the occupancy of the human limb in the 3D workspace. The annotation device may generate the first octree based on the points within the 4D sensor data. The annotation device can generate a motion coordinate system that represents the 4D sensor data. In some aspects, the annotation device may perform a motion transformation from the sensor coordinate system to the point-cloud coordinate system according to Equation 4 and Equation 5.

The annotation device can map the motion coordinate system to a predefined reference coordinate system. In some aspects, the annotation device may map the motion coordinate system to the predefined reference coordinate system according to Equation 8. For example, the annotation device may map 3D points of the 3D workspace to an annotation companion coordinate system (e.g., a reference coordinate system).
The annotation device may compare the current motion coordinate system to the previous motion coordinate system (e.g., the initial earth motion coordinate system) according to Equation 7.

The annotation device may generate a plurality of root nodes based on the 4D sensor data according to Equation 12. The annotation device can determine whether each node is occupied. If a node is occupied, the annotation device may divide the corresponding node into a plurality of child nodes. Each point within the root nodes and child nodes may be included in a discrete-volume-unit (e.g., voxel) representation of the human limb in the 3D workspace. The annotation device can generate child nodes according to Equation 13.

In some aspects, if a point is contained within the root node but not within a leaf node of the first octree, the annotation device may perform discrete-volume-unit interpolation according to Equation 14. The first octree may include discrete-volume-unit representations of human limbs within the 3D workspace such that Equation 11 is satisfied.

The annotation device may generate a second octree representing the occupancy of the second scene based on the plurality of points. The annotation device may generate nodes within the second octree according to Equation 12 based on the raw data. The annotation device can use Equation 12 to create a volume description as the root node, as for the first octree. In Equation 12, the basis-vector terms may represent the unit basis vectors [1,0,0], [0,1,0], and [0,0,1], respectively.

The annotation device can determine whether each node within the second octree is occupied. In response to a node being occupied, the annotation device may divide the corresponding node into a plurality of child nodes. The annotation device may generate the second octree such that each point within a node is contained within a discrete volume unit representing a feature in the second scene.
The annotation device may generate the second octree to include discrete-volume-unit representations of human limbs within the 3D workspace such that Equation 11 is satisfied.

In some aspects, the annotation device may align time between coordinate systems within the 4D sensor data, the raw data, or some combination thereof. The annotation device can align the time between the 4D sensor data and the raw data via the time horizon. Time alignment between the 4D sensor data and the raw data may allow the user to select a time window to annotate.

The annotation device can map the first octree and the second octree to the reference coordinate system. The annotation device can transform the motion coordinate system representing the 3D workspace into the reference coordinate system according to Equation 8. In addition, the annotation device may transform the motion coordinate system representing the raw data into the reference coordinate system according to Equation 8.

The annotation device may determine a first scalar volume of the first octree and a second scalar volume of the second octree. The annotation device may compare the first scalar volume to the second scalar volume. Additionally, the annotation device may map the first octree and the second octree to each other based on the comparison. In some aspects, the annotation device may resize at least one of the nodes in the first octree and at least one of the nodes in the second octree according to Equation 15, such that the radii and sizes of the discrete volume units in the reference coordinate system are uniform.

The annotation device can determine whether there is an octree-to-octree intersection of the feature and the human limb within the reference coordinate system. In some aspects, the annotation device can determine whether a node in the first octree and a node in the second octree include similar subspaces within the reference coordinate system.
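The Equation 15 resizing and Equation 16 similar-subspace test discussed above can be sketched as follows. This is a hedged sketch under stated assumptions: voxels from both octrees are resized to a common target radius, and two voxels are treated as occupying the same or a similar subspace when their center distance does not exceed the sum of their radii. Both the resizing rule and the overlap criterion are illustrative assumptions, not the disclosed equations.

```python
# Resize voxels from both octrees to a uniform target radius, then test
# each scene voxel against the limb voxels for a similar-subspace overlap.

import math

def to_uniform(voxels, target_radius):
    """Equation 15 idea: keep each voxel centre, emit the target radius."""
    return [(center, target_radius) for center, _radius in voxels]

def same_subspace(xa, ra, xb, rb):
    """Equation 16 idea: overlap when centre distance <= sum of radii."""
    return math.dist(xa, xb) <= ra + rb

def intersects(limb_voxels, scene_voxels, target_radius):
    """Octree-to-octree intersection over uniformly resized voxels.
    Returns (scene_voxel, overlaps_limb) pairs."""
    limb = to_uniform(limb_voxels, target_radius)
    scene = to_uniform(scene_voxels, target_radius)
    return [(sv, any(same_subspace(xa, ra, sv[0], sv[1]) for xa, ra in limb))
            for sv in scene]

limb = [((0.0, 0.0, 0.0), 0.4)]
scene = [((0.15, 0.0, 0.0), 0.2), ((2.0, 2.0, 2.0), 0.2)]
print(intersects(limb, scene, target_radius=0.1))
```

Scene voxels flagged `True` here are the ones the annotation device would label with the limb's annotation, per the octree-to-octree intersection step.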
The annotation device may determine whether the node in the first octree and the node in the second octree comprise similar subspaces within the reference coordinate system according to Equation 17:

⊙(Voctree) → R+   Equation 17

In Equation 17, Voctree represents the entire first octree or the entire second octree, and R+ represents a value greater than zero. The annotation device may determine the octree-to-octree intersection based on nodes in the first octree and in the second octree occupying the same or similar subspaces within the reference coordinate system. The annotation device may annotate features based on the octree-to-octree intersection.

The annotation device may determine whether the user input includes a surface description indicating that a continuous surface within the second scene is to be annotated. In some aspects, the annotation device can annotate each feature within the continuous surface accordingly.

In some aspects, the annotation device may recognize (e.g., the sensors may capture and generate 4D sensor data that indicates) different gestures of the user's limbs to label different features with different labels. In some aspects, the annotated labels may include elements of SML-based smart sensor fusion and multimodal perception models.

The volumetric representation of the PUI and the raw data can be displayed via a VR headset, an AR display, a 3D hologram, or any other suitable volume-based display. The annotation device may select the type of display medium based on the density of information in the raw data. In some aspects, the information density may include a ratio of features (e.g., meaningful content per byte).

In the following, various aspects of the present disclosure will be explained.

Example 1 can include a system that includes an annotation device.
The annotation device may include a memory having computer-readable instructions stored thereon; and a processor operably coupled to the memory and configured to read and execute the computer-readable instructions to perform or control performance of operations comprising: receiving 4D sensor data representing a first scene, the 4D sensor data including points representing a human limb in the first scene; receiving 4D data representing a second scene and including a plurality of points representing features in the second scene; generating a first octree representing the occupancy of the human limb in the first scene based on the points, and generating a second octree representing the occupancy of the second scene based on the plurality of points; mapping the first octree and the second octree to a reference coordinate system; determining whether there is an octree-to-octree intersection of the feature and the human limb within the reference coordinate system; and annotating the feature based on the octree-to-octree intersection.

Example 2 can include the system of Example 1, wherein the plurality of points includes a second plurality of points that form part of a first plurality of points, and the 4D sensor data includes a coordinate system representing the first scene at a particular time, and wherein the operations of receiving the 4D sensor data representing the first scene include: generating a plurality of point clouds, each point cloud of the plurality of point clouds including a portion of the first plurality of points; determining a timestamp associated with the particular time; and identifying the points representing the human limb.

Example 3 may include the system of Example 2, wherein the plurality of point clouds are captured and represented according to the following equation:

{Xi ∈ R3}, 0 ≤ i < n

where n represents the number of points within the corresponding point cloud, Xi represents a point in 3D space, i represents an integer indicating the
current point, and R3 represents a Euclidean space including a time dimension on the real numbers; and the timestamps are determined according to the following equation:

{Xi ∈ R4}, 0 ≤ i < n

where Xi represents a point in 3D space, i represents an integer indicating the current point, R4 represents a Euclidean space including a time dimension on the real numbers, and n represents the number of points in the corresponding point cloud.

Example 4 may include the system of any of Examples 1-3, wherein the 4D sensor data further includes color data corresponding to the points according to at least one of an RGB color space, an HSV color space, and a LAB color space.

Example 5 can include the system of any of Examples 2-4, wherein the first plurality of points includes a plurality of 4D points, the operations further comprising determining parameters of each 4D point of the plurality of 4D points.

Example 6 may include the system of Example 5, wherein the operations of determining the parameters of each of the plurality of 4D points include: determining an X coordinate, a Y coordinate, a Z coordinate, and a time coordinate of each 4D point of the plurality of 4D points relative to the first scene; and determining the color of each of the plurality of 4D points.

Example 7 can include the system of any of Examples 1-6, further comprising a sensor configured to generate the 4D sensor data.

Example 8 may include the system of Example 7, wherein the sensor includes a 3D sensor and a color sensor, the operations further comprising: determining a physical location of the 3D sensor relative to the color sensor; and calibrating the 4D sensor data based on the physical location of the 3D sensor relative to the color sensor.

Example 9 can include the system of Example 8, wherein the 4D sensor data is calibrated according to the following equation, where Xi ∈ R3 denotes the point stream, K ∈ R{3x3} denotes the motion transformation, Xi denotes a point in 3D space, R3 denotes a Euclidean space including a time dimension
over the real numbers, a further term denotes the color and depth data, SE3 denotes a rigid transformation, K represents the projection matrix of the motion transformation, R{3x3} represents a 3x3 matrix, Ci represents the color of the current point, R represents a Euclidean space, and h represents an integer indicating the number of dimensions within the Euclidean space.

Example 10 may include the system of any of Examples 8 and 9, wherein the 3D sensor includes an accelerometer and a gyroscope, the operations further comprising using the accelerometer to determine a physical location of the 3D sensor relative to the highest point corresponding to the first scene.

Example 11 may include the system of any of Examples 8-10, wherein the 4D sensor data includes a plurality of coordinate systems representing the first scene, the operations further comprising: determining movement of the 3D sensor relative to a previous coordinate system of the plurality of coordinate systems; and calibrating the 4D sensor data based on the movement of the 3D sensor relative to the previous coordinate system.

Example 12 may include the system of any of Examples 1-11, wherein generating the first octree representing the occupancy of the human limb in the first scene based on the points includes: generating a motion coordinate system representing the 4D sensor data; and mapping the motion coordinate system to a predefined reference coordinate system, wherein the predefined reference coordinate system corresponds to the first scene.

Example 13 may include the system of Example 12, wherein the motion coordinate system is mapped to the predefined reference coordinate system according to the following equation, where the first term represents the motion coordinate system oriented relative to the highest point, Gc represents the application-space boundary coordinate system, and Tk represents the combined transformation that maps 3D points of the 3D workspace to the application-space boundary coordinate system.

Example 14 may
include the system of any of Examples 1-13, wherein receiving 4D sensor data representing the first scene includes identifying points representing human limbs in the first scene according to the following equation: where Xi represents a point in 3D space, Ci represents the color of the current point, P[t0,t1] represents the previous point cloud, P[t1,t2] represents the current point cloud, LA represents the left arm, RA represents the right arm, RL represents the right leg, and LL represents the left leg. Example 15 can include the system of any of Examples 1-14, wherein the 4D data includes a plurality of coordinate systems representing the second scene, and the operation of receiving the 4D data representing the second scene includes aggregating partial coordinate systems of the plurality of coordinate systems into a single coordinate system that includes points representing features in each of the partial coordinate systems. Example 16 may include the system of any of Examples 1-15, wherein generating a second octree representing the occupancy of the second scene based on the plurality of points includes generating a plurality of nodes according to the following equation: where and represent the basis vectors spanning the Euclidean space, X0 represents the point center of the Euclidean space, R0 represents the radius of the root discrete volume unit, and R3 represents a three-dimensional Euclidean space over the real numbers; and determining whether each node is occupied and, in response to the node being occupied, dividing the corresponding node of the plurality of nodes into another plurality of nodes, wherein each point within the plurality of nodes and the other plurality of nodes is contained within a discrete volume unit, the discrete volume units representing features in the second scene. Example 17 may include the system of any of Examples 1-16, wherein generating a first octree representing occupancy of the human limb in the first scene based on the plurality of points
includes generating a plurality of nodes: where, and represent basis vectors spanning the Euclidean space, X0 represents the center of the discrete volume unit, R0 represents the radius of the root discrete volume unit, and R3 represents a three-dimensional Euclidean space over the real numbers; and determining whether each node is occupied and, in response to the node being occupied, dividing the corresponding nodes of the plurality of nodes into another plurality of nodes, wherein each point within the plurality of nodes and the other plurality of nodes is part of a voxelized representation of the human limb in the first scene. Example 18 may include the system of any of Examples 1-17, wherein the operation of mapping the first octree and the second octree to the reference coordinate system comprises: determining a first scalar volume of the first octree; determining a second scalar volume of the second octree; comparing the first scalar volume to the second scalar volume; mapping the first and second octrees to each other based on the comparison; and adjusting, according to the following equation, dimensions of at least one of the nodes in the first octree and the nodes in the second octree so that the dimensions are uniform: where xa represents the center of a discrete volume unit in the first octree, ra represents the radius of the discrete volume unit in the first octree, xb represents the center of a discrete volume unit in the second octree, rb represents the radius of the discrete volume unit in the second octree, and rm represents the predefined target radius of the reference coordinate system. Example 19 may include the system of Example 18, wherein determining whether there is an octree-to-octree intersection of the feature and the human limb within the reference coordinate system includes determining, according to the following equation, whether a node in the first octree and another node in the second octree include a similar subspace within the reference coordinate system: ⊙(Voctree)→R+, where Voctree represents the first octree or the second octree, and R+ represents an integer greater than zero, wherein the octree-to-octree intersection is based on whether the node in the first octree and the node in the second octree include similar subspaces within the reference coordinate system. Example 20 may include the system of any of Examples 1-19, wherein determining whether an octree-to-octree intersection of the feature and the human limb exists within the reference coordinate system includes determining whether the octree-to-octree intersection includes a surface description indicating that a continuous surface within the second scene is to be annotated, wherein the feature is located within the continuous surface. Example 21 can include the system of any of Examples 1-20, wherein the system further includes a perceptual user interface for receiving user input and providing user instructions. Example 22 may include a non-transitory computer-readable medium having computer-readable instructions stored thereon, the instructions executable by a processor to perform or control performance of operations comprising: receiving 4D sensor data representing a first scene, the 4D sensor data including points representing a human limb in the first scene; receiving 4D data representing a second scene and including a plurality of points representing features in the second scene; generating, based on the points, a first octree representing the occupancy of the human limb in the first scene, and generating, based on the plurality of points, a second octree representing the occupancy of the second scene; mapping the first and second octrees to a reference coordinate system; determining whether an octree-to-octree intersection of a feature and the human limb exists within the reference coordinate system; and annotating the feature based on the octree-to-octree intersection. Example 23 can include the non-transitory
computer-readable medium of Example 22, wherein the plurality of points includes a second plurality of points that form part of the first plurality of points, the 4D sensor data includes a representation of the first scene at a particular time, and the operation of receiving the 4D sensor data representing the first scene includes: generating a plurality of point clouds, each of the plurality of point clouds comprising a portion of the first plurality of points; determining a timestamp associated with the particular time; and marking points representing human limbs. Example 24 can include the non-transitory computer-readable medium of Example 23, wherein: the plurality of point clouds are captured and represented according to the following equation: {Xi∈R3}0≤i<n, where n represents the number of points within the corresponding point cloud, Xi represents a point in 3D space, i represents an integer indicating the current point, and R3 represents a three-dimensional Euclidean space over the real numbers; and the timestamps are determined according to the following equation: {Xi∈R4}0≤i<n, where Xi represents a point in 3D space, i represents an integer indicating the current point, R4 represents a Euclidean space including a time dimension over the real numbers, and n represents the number of points in the corresponding point cloud. Example 25 can include the non-transitory computer-readable medium of any of Examples 22-24, wherein the first plurality of points includes a plurality of 4D points, the operations further comprising determining parameters for each 4D point of the plurality of 4D points. Example 26 can include the non-transitory computer-readable medium of Example 25, wherein the operation of determining the parameters of each 4D point of the plurality of 4D points comprises: determining an X coordinate, a Y coordinate, a Z coordinate, and a time coordinate of each 4D point of the plurality of 4D points relative to the first scene; and determining a color for each of
the plurality of 4D points. Example 27 may include the non-transitory computer-readable medium of any of Examples 22-26, the operations further comprising: determining a physical location of the 3D sensor relative to the color sensor; and calibrating the 4D sensor data based on the physical location of the 3D sensor relative to the color sensor. Example 28 can include the non-transitory computer-readable medium of Example 27, wherein the 4D sensor data is calibrated according to the following equation: where Xi∈R3 denotes the point stream, K∈R{3x3} denotes the motion transformation, Xi denotes a point in 3D space, R3 denotes a three-dimensional Euclidean space over the real numbers, denotes the color and depth data, SE3 denotes a rigid transformation, K represents the projection matrix of the motion transformation, R{3x3} represents a 3x3 matrix, Ci represents the color of the current point, R represents a Euclidean space, and h represents an integer indicating the number of dimensions within the Euclidean space. Example 29 may include the non-transitory computer-readable medium of any of Examples 22-28, the operations further comprising: determining a physical location of the 3D sensor relative to the first scene and a highest point corresponding to the first scene. Example 30 may include the non-transitory computer-readable medium of any of Examples 22-29, wherein the 4D sensor data includes a plurality of coordinate systems representing the first scene, the operations further comprising: determining movement of the 3D sensor relative to a previous coordinate system of the plurality of coordinate systems; and calibrating the 4D sensor data based on the movement of the 3D sensor relative to the previous coordinate system. Example 31 may include the non-transitory computer-readable medium of any of Examples 22-30, wherein generating a first octree representing occupancy of the human limb in the first scene based on the points includes: generating a motion coordinate system representing the 4D sensor data; and mapping the motion coordinate system to a predefined reference coordinate system, wherein the predefined reference coordinate system corresponds to the first scene. Example 32 may include the non-transitory computer-readable medium of Example 31, wherein the motion coordinate system is mapped to the predefined reference coordinate system according to the following equation: where represents the motion coordinate system oriented relative to the highest point, Gc represents the application space boundary coordinate system, and Tk represents the combined transformation that maps 3D points of the 3D workspace to the application space boundary coordinate system. Example 33 may include the non-transitory computer-readable medium of any of Examples 22-32, wherein receiving 4D sensor data representing the first scene includes identifying points representing human limbs in the first scene according to the following equation: where Xi represents a point in 3D space, Ci represents the color of the current point, P[t0,t1] represents the previous point cloud, P[t1,t2] represents the current point cloud, LA represents the left arm, RA represents the right arm, RL represents the right leg, and LL represents the left leg. Example 34 can include the non-transitory computer-readable medium of any of Examples 22-33, wherein the 4D data includes a plurality of coordinate systems representing the second scene, and receiving the 4D data representing the second scene includes aggregating partial coordinate systems of the plurality of coordinate systems into a single coordinate system that includes points representing features in each of the partial coordinate systems. Example 35 may include the non-transitory computer-readable medium of any of Examples 22-34, wherein generating a second octree representing the occupancy of the second scene based on the plurality of points comprises generating a plurality of nodes according to the following equation: where and represent
the basis vectors spanning the Euclidean space, X0 represents the point center of the Euclidean space, R0 represents the radius of the root discrete volume unit, and R3 represents a three-dimensional Euclidean space over the real numbers; and determining whether each node is occupied and, in response to the node being occupied, dividing the corresponding node of the plurality of nodes into another plurality of nodes, wherein each point within the plurality of nodes and the other plurality of nodes is contained within a discrete volume unit, the discrete volume units representing features in the second scene. Example 36 may include the non-transitory computer-readable medium of any of Examples 22-35, wherein generating a first octree representing occupancy of the human limb in the first scene based on the plurality of points comprises generating a plurality of nodes according to the following equation: where, and represent basis vectors spanning the Euclidean space, X0 represents the center of the discrete volume unit, R0 represents the radius of the root discrete volume unit, and R3 represents a three-dimensional Euclidean space over the real numbers; and determining whether each node is occupied and, in response to the node being occupied, dividing the corresponding nodes of the plurality of nodes into another plurality of nodes, wherein each point within the plurality of nodes and the other plurality of nodes is part of a voxelized representation of the human limb in the first scene.
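The node-generation and subdivision operations recited in Examples 16-17 and 35-36 can be sketched in Python. The class and function names, the fixed recursion depth, and the axis-aligned-cube containment test are illustrative assumptions for exposition, not the patent's implementation:

```python
# Illustrative occupancy-octree sketch: a root discrete volume unit with
# center x0 and radius r0 is recursively divided into eight child units
# whenever it is occupied; occupied leaves form the voxelized representation.
class Node:
    def __init__(self, center, radius):
        self.center = center      # x0: center of this discrete volume unit
        self.radius = radius      # r0 at the root, halved at each level
        self.children = []        # the eight sub-nodes once subdivided

def contains(node, p):
    # Axis-aligned cube containment test (an assumption of this sketch).
    return all(abs(p[i] - node.center[i]) <= node.radius for i in range(3))

def build_octree(node, points, depth, max_depth=3):
    pts = [p for p in points if contains(node, p)]
    if not pts or depth == max_depth:
        return pts  # points in occupied leaves: the voxelized representation
    half = node.radius / 2.0
    leaves = []
    for dx in (-half, half):
        for dy in (-half, half):
            for dz in (-half, half):
                child = Node((node.center[0] + dx,
                              node.center[1] + dy,
                              node.center[2] + dz), half)
                node.children.append(child)
                leaves += build_octree(child, pts, depth + 1, max_depth)
    return leaves
```

A point outside the root unit is discarded at the top level, so only points occupying some discrete volume unit survive into the leaf set.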
Example 37 may include the non-transitory computer-readable medium of any of Examples 22-36, wherein the mapping of the first octree and the second octree to the reference coordinate system comprises: determining a first scalar volume of the first octree; determining a second scalar volume of the second octree; comparing the first scalar volume with the second scalar volume; mapping the first and second octrees to each other based on the comparison; and adjusting, according to the following equation, the dimensions of at least one of the nodes in the first octree and the nodes in the second octree so that the dimensions are uniform: where xa represents the center of a discrete volume unit in the first octree, ra represents the radius of the discrete volume unit in the first octree, xb represents the center of a discrete volume unit in the second octree, rb represents the radius of the discrete volume unit in the second octree, and rm represents the predefined target radius of the reference coordinate system. Example 38 can include the non-transitory computer-readable medium of Example 37, wherein determining whether there is an octree-to-octree intersection of the feature and the human limb within the reference coordinate system includes determining, according to the following equation, whether a node in the first octree and another node in the second octree include similar subspaces within the reference coordinate system: ⊙(Voctree)→R+, where Voctree represents the first octree or the second octree, and R+ represents an integer greater than zero, wherein the octree-to-octree intersection is based on whether the node in the first octree and the node in the second octree include similar subspaces within the reference coordinate system. Example 39 may include the non-transitory computer-readable medium of any of Examples 22-38, wherein determining whether an octree-to-octree intersection of the feature and the human limb exists within the reference coordinate system includes determining whether the octree-to-octree intersection includes a surface description indicating that a continuous surface within the second scene is to be annotated, wherein the feature is located within the continuous surface. Example 40 can include a method comprising: receiving 4D sensor data representing a first scene, the 4D sensor data including points representing a human limb in the first scene; receiving 4D data representing a second scene and including a plurality of points representing features in the second scene; generating, based on the points, a first octree representing the occupancy of the human limb in the first scene, and generating, based on the plurality of points, a second octree representing the occupancy of the second scene; mapping the first and second octrees to a reference coordinate system; determining whether there is an octree-to-octree intersection of the feature and the human limb within the reference coordinate system; and annotating the feature based on the octree-to-octree intersection. Example 41 may include the method of Example 40, wherein the plurality of points includes a second plurality of points that form part of the first plurality of points, the 4D sensor data includes a coordinate system representing the first scene at a particular time, and receiving the 4D sensor data representing the first scene includes: generating a plurality of point clouds, each point cloud of the plurality of point clouds including a portion of the first plurality of points; determining a timestamp associated with the particular time; and identifying points representing human limbs. Example 42 may include the method of Example 40, wherein: a plurality of point clouds are captured and represented according to the following equation: {Xi∈R3}0≤i<n, where n represents the number of points within the corresponding point cloud, Xi represents a point in 3D space, i represents an integer indicating the current point, and R3 represents a three-dimensional Euclidean space over the real numbers; and the timestamps are determined according to the following equation: {Xi∈R4}0≤i<n, where Xi represents a point in 3D space, i represents an integer indicating the current point, R4 represents a Euclidean space including a time dimension
over the real numbers, and n represents the number of points in the corresponding point cloud. Example 43 may include the method of any of Examples 40-42, wherein the first plurality of points includes a plurality of 4D points, the method further comprising determining parameters for each 4D point of the plurality of 4D points. Example 44 can include the method of Example 43, wherein determining the parameters of each 4D point of the plurality of 4D points comprises: determining an X coordinate, a Y coordinate, a Z coordinate, and a time coordinate; and determining the color of each of the plurality of 4D points. Example 45 may include the method of any of Examples 40-44, further comprising: determining a physical location of the 3D sensor relative to the color sensor; and calibrating the 4D sensor data based on the physical location of the 3D sensor relative to the color sensor. Example 46 may include the method of Example 45, wherein the 4D sensor data is calibrated according to the following equation: where Xi∈R3 denotes the point stream, K∈R{3x3} denotes the motion transformation, Xi denotes a point in 3D space, R3 denotes a three-dimensional Euclidean space over the real numbers, denotes the color and depth data, SE3 denotes a rigid transformation, K represents the projection matrix of the motion transformation, R{3x3} represents a 3x3 matrix, Ci represents the color of the current point, R represents a Euclidean space, and h represents an integer indicating the number of dimensions within the Euclidean space. Example 47 may include the method of any of Examples 40-46, further comprising: determining a physical location of the 3D sensor relative to the first scene and a highest point corresponding to the first scene. Example 48 may include the method of any of Examples 40-47, wherein the 4D sensor data includes a plurality of coordinate systems representing the first scene, the method further comprising: determining movement of the 3D sensor relative to a previous coordinate system of the plurality of coordinate systems; and calibrating the 4D sensor data based on the movement of the 3D sensor relative to the previous coordinate system. Example 49 may include the method of any of Examples 40-48, wherein generating the first octree representing the occupancy of the human limb in the first scene based on the points comprises: generating a motion coordinate system representing the 4D sensor data; and mapping the motion coordinate system to a predefined reference coordinate system, wherein the predefined reference coordinate system corresponds to the first scene. Example 50 may include the method of Example 49, wherein the motion coordinate system is mapped to the predefined reference coordinate system according to the following equation: where represents the motion coordinate system oriented relative to the highest point, Gc represents the application space boundary coordinate system, and Tk represents the combined transformation that maps 3D points of the 3D workspace to the application space boundary coordinate system. Example 51 may include the method of any of Examples 40-50, wherein receiving 4D sensor data representing the first scene includes identifying points representing human limbs in the first scene according to the following equation: where Xi represents a point in 3D space, Ci represents the color of the current point, P[t0,t1] represents the previous point cloud, P[t1,t2] represents the current point cloud, LA represents the left arm, RA represents the right arm, RL represents the right leg, and LL represents the left leg. Example 52 may include the method of any of Examples 40-51, wherein the 4D data includes a plurality of coordinate systems representing the second scene, and receiving the 4D data representing the second scene includes aggregating portions of the plurality of coordinate systems into a single coordinate system that includes points representing features in each of the partial coordinate systems. Example 53 may include the
method of any of Examples 40-52, wherein generating a second octree representing the occupancy of the second scene based on the plurality of points includes generating a plurality of nodes according to the following equation: where and represent the basis vectors spanning the Euclidean space, X0 represents the point center of the Euclidean space, R0 represents the radius of the root discrete volume unit, and R3 represents a three-dimensional Euclidean space over the real numbers; and determining whether each node is occupied and, in response to the node being occupied, dividing the corresponding node of the plurality of nodes into another plurality of nodes, wherein each point within the plurality of nodes and the other plurality of nodes is contained within a discrete volume unit, the discrete volume units representing features in the second scene. Example 54 may include the method of any of Examples 40-53, wherein generating the first octree representing the occupancy of the human limb in the first scene based on the plurality of points includes generating a plurality of nodes according to the following equation: where, and represent basis vectors spanning the Euclidean space, X0 represents the center of the discrete volume unit, R0 represents the radius of the root discrete volume unit, and R3 represents a three-dimensional Euclidean space over the real numbers; and determining whether each node is occupied and, in response to the node being occupied, dividing the corresponding nodes of the plurality of nodes into another plurality of nodes, wherein each point within the plurality of nodes and the other plurality of nodes is part of a voxelized representation of the human limb in the first scene. Example 55 may include the method of any of Examples 40-54, wherein mapping the first octree and the second octree to the reference coordinate system includes: determining a first scalar volume of the first octree; determining a second scalar volume of the second octree; comparing the first scalar volume to the second scalar volume; mapping the first and second octrees to each other based on the comparison; and adjusting, according to the following equation, the dimensions of at least one of the nodes in the first octree and the nodes in the second octree so that the dimensions are uniform: where xa represents the center of a discrete volume unit in the first octree, ra represents the radius of the discrete volume unit in the first octree, xb represents the center of a discrete volume unit in the second octree, rb represents the radius of the discrete volume unit in the second octree, and rm represents the predefined target radius of the reference coordinate system. Example 56 may include the method of Example 55, wherein determining whether there is an octree-to-octree intersection of the feature and the human limb within the reference coordinate system includes determining, according to the following equation, whether a node in the first octree and another node in the second octree include a similar subspace within the reference coordinate system: ⊙(Voctree)→R+, where Voctree represents the first octree or the second octree, and R+ represents an integer greater than zero, wherein the octree-to-octree intersection is based on whether the node in the first octree and the node in the second octree include similar subspaces within the reference coordinate system. Example 57 may include the method of any of Examples 40-56, wherein determining whether there is an octree-to-octree intersection of the feature and the human limb within the reference coordinate system includes determining whether the octree-to-octree intersection includes a surface description indicating that a continuous surface within the second scene is to be annotated, wherein the feature is located within the continuous surface. Example 58 may include a system comprising: means for receiving 4D sensor data representing a first scene, the 4D sensor data including points representing a human limb in the first scene; means for receiving 4D data representing a second scene and including a plurality of points representing features in the second scene; means for generating, based on the points, a first tree-like data structure representing occupancy of the human limb in the first scene, and for generating, based on the plurality of points, a second tree-like data structure representing occupancy of the second scene; means for mapping the first tree-like data structure and the second tree-like data structure to a reference coordinate system; means for determining whether a tree-to-tree data structure intersection of a feature and the human limb exists within the reference coordinate system; and means for annotating the feature based on the tree-to-tree data structure intersection. Example 59 can include the system of Example 58, wherein the plurality of points includes a second plurality of points that form part of the first plurality of points, and the 4D sensor data includes a coordinate system representing the first scene at a particular time, the means for receiving the 4D sensor data representing the first scene comprising: means for generating a plurality of point clouds, each point cloud of the plurality of point clouds comprising a portion of the first plurality of points; means for determining a timestamp associated with the particular time; and means for identifying points representing human limbs. Example 60 may include the system of Example 58, further comprising: means for determining a physical location of the 3D sensor relative to the color sensor; and means for calibrating the 4D sensor data based on the physical location of the 3D sensor relative to the color sensor. Example 61 may include the system of Example 58, further comprising: means for determining a physical location of the 3D sensor relative to the first scene and a highest point corresponding to the first scene. Example 62 can include the system of Example 58,
wherein the means for generating a first tree-like data structure representing occupancy of the human limb in the first scene based on the points comprises: means for generating a motion coordinate system representing the 4D sensor data; and means for mapping the motion coordinate system to a predefined reference coordinate system, wherein the predefined reference coordinate system corresponds to the first scene. Example 63 can include the system of any one of Examples 58-62, wherein the 4D data includes a plurality of coordinate systems representing the second scene, and the means for receiving the 4D data representing the second scene includes means for aggregating partial coordinate systems of the plurality of coordinate systems into a single coordinate system that includes points representing features in each of the partial coordinate systems. Although the above description and related drawings may depict electronic device components as separate elements, skilled artisans will appreciate the various possibilities for combining or integrating the discrete elements into a single element. Such possibilities may include: combining two or more circuits to form a single circuit, mounting two or more circuits on a common chip or base to form an integrated element, executing discrete software components on a common processor core, and so on. Conversely, the skilled artisan will appreciate that a single element can be split into two or more discrete elements, such as splitting a single circuit into two or more separate circuits, separating a chip or submount into the discrete components originally disposed thereon, dividing a software component into two or more parts and executing each part on a separate processor core, and so on. It should be appreciated that the implementations of the methods detailed herein are illustrative in nature and are therefore understood to be capable of being implemented in corresponding devices.
Likewise, it should be appreciated that the implementations of the apparatus detailed herein are understood to be capable of being implemented as corresponding methods. Accordingly, it should be understood that an apparatus corresponding to the methods detailed herein may include one or more components configured to perform each aspect of the associated method. All acronyms defined in the above description are additionally included in all claims included herein.
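The radius-unification and node-overlap test recited in Examples 18-19, 37-38, and 55-56 can be sketched as follows. Each node is modeled here as an axis-aligned cube with center x and radius r; the helper names and the overlap criterion are assumptions for illustration, not the patent's exact formulation:

```python
# Sketch of the octree-to-octree intersection test: unify node radii to a
# target radius r_m, then count leaf pairs that share a subspace.
def unify_radius(r_a, r_b, r_m):
    # Scale factors that bring both node radii to the target radius r_m,
    # so subspaces compared below have uniform dimensions.
    return r_m / r_a, r_m / r_b

def nodes_intersect(x_a, r_a, x_b, r_b):
    # Two axis-aligned cubes share a subspace iff they overlap on every axis.
    return all(abs(a - b) <= r_a + r_b for a, b in zip(x_a, x_b))

def octree_intersection(leaves_a, leaves_b):
    # Count overlapping leaf pairs; a positive count (an element of R+)
    # signals that the feature and the human limb occupy similar subspaces.
    return sum(nodes_intersect(xa, ra, xb, rb)
               for (xa, ra) in leaves_a for (xb, rb) in leaves_b)
```

A feature would then be annotated whenever `octree_intersection` returns a positive count for its leaves against the limb's leaves.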
A processor may support a two-dimensional (2-D) gather instruction and a 2-D cache. The processor may perform the 2-D gather instruction to access one or more sub-blocks of data from a two-dimensional (2-D) image stored in a memory coupled to the processor. The 2-D cache may store the sub-blocks of data in multiple cache lines. Further, the 2-D cache may support access of more than one cache line while preserving a two-dimensional structure of the 2-D image.
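The set/way mapping later recited for the 2-D cache (claims 7-8 below) can be sketched in Python: the first (x) coordinate of a pixel selects a set via a modulo operation, and the second (y) coordinate selects a way. The specific values of `NUM_SETS` and `NUM_WAYS`, and the 4x4 sub-block size, are assumptions for illustration:

```python
# Sketch of the modulo-based set/way mapping of a 2-D cache.
NUM_SETS = 16   # assumed total number of sets
NUM_WAYS = 16   # assumed total number of ways

def map_pixel(x, y):
    # set index from the first coordinate, way index from the second.
    return x % NUM_SETS, y % NUM_WAYS

def gather_4x4(x0, y0):
    # A 4x4 sub-block maps to 16 distinct (set, way) slots, so a single
    # gather can touch many cache lines without a read conflict, provided
    # the sub-block is no larger than NUM_SETS x NUM_WAYS.
    return {map_pixel(x0 + i, y0 + j) for i in range(4) for j in range(4)}
```

Because consecutive x coordinates land in consecutive sets and consecutive y coordinates in consecutive ways, no two pixels of the sub-block collide on the same (set, way) pair, which is the conflict-avoidance property claimed for the mapping.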
CLAIMS What is claimed is: 1. A processor comprising: a pre-fetch unit to fetch one or more instructions, a decode unit to decode the one or more instructions, wherein the one or more instructions include a two-dimensional (2-D) gather instruction, an execution unit to perform the 2-D gather instruction to access one or more sub-blocks of data from a two-dimensional (2-D) image stored in a memory coupled to the processor, and a two-dimensional (2-D) cache to store the one or more sub-blocks of data in multiple cache lines and to support access of more than one cache line in a single processing cycle while preserving a two-dimensional structure of the 2-D image. 2. The processor of claim 1, wherein the execution unit is to cause loading of the one or more sub-blocks into the 2-D cache in response to performing the 2-D gather instruction. 3. The processor of claim 2, wherein the 2-D gather instruction is to specify a pointer to the 2-D image, a plurality of coordinates to identify the one or more sub-blocks, and a number of elements in the one or more sub-blocks. 4. The processor of claim 1, wherein the 2-D gather instruction is to cause access of sixteen cache lines in one processing cycle. 5. The processor of claim 1, wherein the 2-D cache is to support mapping of the one or more sub-blocks to avoid read conflicts after loading the one or more sub-blocks. 6. The processor of claim 5, wherein the 2-D cache further comprises access logic to map the one or more sub-blocks onto a plurality of sets and a plurality of ways. 7. The processor of claim 6, wherein the access logic is to determine a set of the plurality of sets onto which a first coordinate of an image pixel of the one or more sub-blocks is mapped, wherein the access logic is to use a result of a modulo operation performed on the first coordinate of the 2-D image and a total number of sets available in the 2-D cache. 8.
The processor of claim 6, wherein the access logic to determine a way of the plurality of ways on to which a second coordinate of the image pixel within the one or more sub-blocks are mapped, wherein the access logic to use a result of a modulo operation performed on the second coordinate of the 2-D image and a total number of ways available in the 2-D cache. 9. The processor of claim 8, wherein the access logic to perform a 2-D cache lookup, wherein the cache look-up includes the access logic to, identify a memory block within the 2-D cache using the set and the way of the 2-D cache, retrieve content of a tag field within the memory block, and determine if a data stored in the memory block is evicted. 10. The processor of claim 9, the access logic further comprises a read/write logic, wherein the read/write logic to access the tag field from the memory block and determine if contents of the tag field is non-evicted. 11. The processor of claim 10, the access logic further comprises a shuffle logic to rearrange the data in the memory blocks in an order of the addresses provided by the 2-D gather instruction, wherein the memory blocks associated with the tag field that are non-evicted are chosen. 12. A method in a processor comprising: pre-fetching one or more instructions, decoding the one or more instructions, wherein the one or more instructions include a two-dimensional (2-D) gather instruction, performing the 2-D gather instruction to access one or more sub-blocks of data from a two-dimensional (2-D) image stored in a memory coupled to the processor, storing the one or more sub-blocks of data in multiple cache lines of a 2-D cache, and supporting access of at least more than one cache line in a single processing cycle while preserving a two-dimensional structure of the 2-D image. 13. The method of claim 12 comprises loading the one or more sub-blocks into the 2-D cache in response to performing the 2-D gather instruction. 14. 
The method of claim 13 comprises, specifying a pointer to the 2-D image in response to execution of the 2-D gather instruction, and identifying the one or more sub-blocks and a number of elements in the one or more sub-blocks using a plurality of coordinates. 15. The method of claim 12 comprises accessing sixteen cache lines in one processing cycle in response to performing the 2-D gather instruction. 16. The method of claim 12 comprises mapping of the one or more sub-blocks, in the 2-D cache, to avoid read conflict after loading the one or more sub-blocks. 17. The method of claim 16 comprises mapping the one or more sub-blocks on to a plurality of sets and a plurality of ways of the 2-D cache. 18. The method of claim 17 comprises determining a set of the plurality of sets on to which a first coordinate of an image pixel of the one or more sub-blocks are mapped, wherein a result of a modulo operation performed on the first coordinate of the 2-D image and a total number of sets available in the 2-D cache is used to determine the set. 19. The method of claim 17 comprises determining a way of the plurality of ways on to which a second coordinate of the image pixel within the one or more sub-blocks are mapped, wherein a result of a modulo operation performed on the second coordinate of the 2-D image and a total number of ways available in the 2-D cache is used to determine the way. 20. The method of claim 19 comprises performing a 2-D cache lookup, wherein the cache look-up includes, identifying a memory block within the 2-D cache using the set and the way of the 2-D cache, retrieving content of a tag field within the memory block, and determining if a data stored in the memory block is evicted. 21. The method of claim 20 comprises accessing the tag field from the memory block and determining if content of the tag field is non-evicted. 22. 
The method of claim 21 comprises rearranging the data in the memory blocks in an order of the addresses provided by the 2-D gather instruction, wherein the memory blocks associated with the tag field that are non-evicted are chosen. 23. A system comprising, a memory, a machine readable storage medium, a logic, a plurality of input-output devices, and a processor comprising a plurality of cores and a plurality of caches, wherein the processor to, fetch one or more instructions, decode the one or more instructions, wherein the one or more instructions include a two-dimensional (2-D) gather instruction, perform the 2-D gather instruction to access one or more sub-blocks of data from a two-dimensional (2-D) image stored in a memory coupled to the processor, a two-dimensional (2-D) cache, included in the plurality of caches, to store the one or more sub-blocks of data in multiple cache lines and to support access of at least more than one cache line in a single processing cycle while preserving a two-dimensional structure of the 2-D image. 24. The system of claim 23, wherein the processor to load the one or more sub-blocks into the 2-D cache in response to performing the 2-D gather instruction. 25. The system of claim 24, wherein the processor to, specify a pointer to the 2-D image in response to performing the 2-D gather instruction, and identify the one or more sub-blocks and a number of elements in the one or more sub-blocks using a plurality of coordinates. 26. The system of claim 23, wherein the processor to support mapping of the one or more sub-blocks, in the 2-D cache, to avoid read conflict after loading the one or more sub-blocks. 27. The system of claim 26, wherein the processor to support mapping the one or more sub-blocks on to a plurality of sets and a plurality of ways of the 2-D cache. 28. 
The system of claim 27, wherein the processor to determine a set of the plurality of sets on to which a first coordinate of an image pixel of the one or more sub-blocks are mapped, wherein a result of a modulo operation performed on the first coordinate of the 2-D image and a total number of sets available in the 2-D cache is used to determine the set. 29. The system of claim 27, wherein the processor to determine a way of the plurality of ways on to which a second coordinate of the image pixel within the one or more sub-blocks are mapped, wherein a result of a modulo operation performed on the second coordinate of the 2-D image and a total number of ways available in the 2-D cache is used to determine the way. 30. The system of claim 29, wherein the processor is to support performing a 2-D cache lookup, wherein the cache look-up includes, identifying a memory block within the 2-D cache using the set and the way of the 2-D cache, retrieving content of a tag field within the memory block, and determining if a data stored in the memory block is evicted. 31. The system of claim 30, wherein the processor to support accessing the tag field from the memory block and determining if content of the tag field is non-evicted. 32. The system of claim 31, wherein the processor to support rearranging the data in the memory blocks in an order of the addresses provided by the 2-D gather instruction, wherein the memory blocks associated with the tag field that are non-evicted are chosen.
A 2-D GATHER INSTRUCTION AND A 2-D CACHE BACKGROUND As semiconductor technology continues to scale, more and more functionality is being integrated into processors in particular. For example, such processors may be capable of performing graphics and media applications in addition to performing the conventional tasks. A majority of media processing algorithms use a "1-D or 2-D region" variation of gather. While a gather loads a row or line (1 x m), a column (m x 1), or a matrix (m x n) (for example, (2 x 2), (4 x 4), or (8 x 2)), the generic vgather translates this "block load" into 16 offsets and the information in the image (row length) structure is lost. BRIEF DESCRIPTION OF THE DRAWINGS The invention described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. FIG. 1 illustrates a processor 100, which supports a 2-D cache and a 2-D gather instruction according to one embodiment. FIG. 2 illustrates a 2-D cache, which may store the image information stored in a memory according to one embodiment. FIG. 3 illustrates the 2-D cache, which may be represented as a combination of sets and ways according to one embodiment. FIG. 4 illustrates a mapping of the image information (or elements x, y) to (set, way) in the 2-D cache according to one embodiment. FIG. 5 illustrates various fields in a data cache 180 according to one embodiment. FIG. 6 illustrates a tag array logic to determine the appropriate data for each element in the cache according to one embodiment. FIG. 
7 illustrates a data array logic to arrange data in an order, which corresponds to the addresses in the gather instruction according to one embodiment. FIG. 8 is a computer system, which may support a 2-D gather instruction and a 2-D cache according to one embodiment. DETAILED DESCRIPTION The following description describes embodiments of a 2-D cache and a 2-D gather instruction. In the following description, numerous specific details such as logic implementations, resource partitioning, or sharing, or duplication implementations, types and interrelationships of system components, and logic partitioning or integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits, and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation. References in the specification to "one embodiment", "an embodiment", "an example embodiment" indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. 
Embodiments of the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other similar signals. Further, firmware, software, routines, and instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, and other devices executing the firmware, software, routines, and instructions. In one embodiment, the instruction set may comprise a special gather instruction, which may be referred to as a 2-D gather instruction. In one embodiment, the 2-D gather instruction may retain the two dimensional image structure or the image information related to the 2-D image structure. In one embodiment, the 2-D cache may use the image information for a special cache filling policy, which may result in a higher gather performance and lower latency as compared to a generic gather instruction. A generic gather may load (or block load) up to 2 or 4 double precision floating point values from the memory address, and the generic vgather translates the "block load" into 16 offsets, so the information on image structure (i.e., row length) is lost. To overcome the above disadvantage of losing the image structure, in one embodiment, the 2-D gather instruction, which may retain the image and region parameters, is disclosed. 
In one embodiment, the 2-D gather instruction may perform a double stride gather, which may load a 2-D region such as (1 x 16), (2 x 8), (4 x 4), (8 x 2), or (16 x 1) from the 2-D image. In one embodiment, the 2-D cache is based on the idea of 2-D locality. In one embodiment, if a program loads some pixel (x, y) from an image 'A' stored in a memory, then there may be a high likelihood that the pixels around the pixel (x, y) may be used soon. Also, there may be a high likelihood that the pixels around the pixel (x, y) may be used multiple times. In one embodiment, to take advantage of the 2-D locality, a number of small rectangular windows 'W' of the large image in the memory may be maintained in the cache. In one embodiment, a 2-D cache fill policy may be used to fill the cache with the image information stored in the memory. In one embodiment, the 2-D window 'W' (i.e., image information) may be mapped on to a 2-D cache so as to avoid possible read conflicts for the 2-D region loads (for example, (1 x 16), (2 x 8), (4 x 4), (8 x 2), or (16 x 1)). In one embodiment, the image element (x, y) may be mapped on to the set and way of the cache, respectively, based on the following Equations (1) and (2) below: Set = X mod Num_of_Sets Equation (1) Way = Y mod Num_of_Ways Equation (2) wherein 'mod' represents a modulo operator, which determines a remainder of a division of one number by the other. In one embodiment, the 2-D cache lookup may include two tasks - 1) to identify the location in the cache comprising the correct data; and 2) to arrange the data in an order, which may correspond to the order of the addresses in the 2-D gather instruction. In one embodiment, the location in the cache (comprising the correct data) may be identified by comparing the address generated by the address generation unit with the tag associated with each set. 
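As a rough behavioral model (not the hardware implementation), the mapping of Equations (1) and (2) can be sketched in Python; the 64-set, 32-way geometry below is an assumed example, not a value mandated by the text:

```python
# Illustrative model of Equations (1) and (2).
# The 64-set / 32-way geometry is an assumed example.
NUM_OF_SETS = 64
NUM_OF_WAYS = 32

def map_element(x, y):
    """Map an image element (x, y) to a (set, way) pair in the 2-D cache."""
    set_idx = x % NUM_OF_SETS   # Equation (1): Set = X mod Num_of_Sets
    way_idx = y % NUM_OF_WAYS   # Equation (2): Way = Y mod Num_of_Ways
    return set_idx, way_idx
```

Because neighboring x values land in neighboring sets and neighboring y values in neighboring ways, a small rectangular region maps to distinct (set, way) pairs, which is what avoids read conflicts for the region loads listed above.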
In one embodiment, the data in the identified locations may be arranged in an order to correspond to an order of the addresses in the 2-D gather instruction. An embodiment of a processor 100, which may support a 2-D cache and a 2-D gather instruction, is illustrated in FIG. 1. In one embodiment, the processor 100 may comprise a plurality of cores such as the cores 102-1 to 102-N, a L2 cache 190, and a memory controller hub (MCH) 103. In one embodiment, the core 102-A may comprise a pre-fetch unit 110, an instruction cache 120, an instruction translational look-aside buffer (ITLB) 122, a branch prediction unit 130, a decode unit 140, a reservation station 150, an address generation unit 160, execution units 170, a load and store unit (LSU) 175, a data cache 180, a data translational look-aside buffer (DTLB) 182, a re-order buffer 185, a vertex processing block 191, and a texture processing block 193. The other cores 102-2 to 102-N may include similar blocks as that of the core 102-1. In one embodiment, the pre-fetch unit 110 may fetch instructions from the memory 101 while the other instructions, which were fetched earlier, are being executed. The instructions so fetched may be stored in the instruction cache 120. The instruction translational look-aside buffer (ITLB) 122 may be used to translate the virtual address to a physical address. The instructions are then provided to the decode unit 140, which may decode the macro instructions into multiple micro-operations. The micro-operations may then be sent to the reservation station 150, which may dispatch the micro-operations (uops) to one or more of the execution units 170, the vertex processing block 191, or the texture processing block 193. In one embodiment, the instructions may be dispatched to one of the units 170, 191, or 193 based on the type of the instruction. 
For example, if the processing relates to graphics data, the instruction may be performed by the vertex processing block 191 and the texture processing block 193, and by the execution unit 170 if it is non-graphics data. In one embodiment, the instructions may be performed in an out-of-order fashion and the re-order buffer 185 may store the results of such execution in an order to retain the original program order. In one embodiment, the 2-D gather instruction, which may be used to load the 2-D region from the 2-D image to the data cache 180, may be as given by Equation (3) below: Zmm1 = 2-D_gather_16 (pImage, rowWidth, blockX, blockY, blockW, blockH, strideX, strideY) Equation (3) wherein pImage - a pointer to the image; rowWidth - number of elements in the row; blockX - X coordinate of the left upper corner of the block; blockY - Y coordinate of the left upper corner of the block; blockW - number of elements in the row of the block; blockH - number of rows in the block; strideX - horizontal stride (optional, default = 1); and strideY - vertical stride (optional, default = 1). Structurally, the 2-D gather instruction may have some similarity with the generic vgather instruction, which may be as given in Equation (4) below: Zmm1 = vgather (pBase, offset0, ... offset15) Equation (4) wherein pBase = [pImage + (rowWidth * (blockY - 1) + blockX) * sizeofElem]; i = 0; for (y=0; y < blockH; y++) for (x=0; x < blockW; x++) { offset[i] = (x + y * rowWidth) * sizeofElem; i++; } Further, the 2-D cache structure, the 2-D cache filling policy, and the 2-D cache look-up are described in detail below with reference to Figures 2-7. FIG. 2 illustrates a cache comprising one or more small windows of image information of an image stored in the memory according to one embodiment. In one embodiment, the image 201, stored in the memory 101, may be divided into a number of windows such as windows 204 and 208. 
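As a sketch of the vgather translation in the pseudocode above, the following Python shows how a (blockW x blockH) region flattens into generic-gather offsets; zero-based block coordinates and a 4-byte element size are assumptions for illustration:

```python
def block_to_offsets(row_width, block_x, block_y, block_w, block_h,
                     sizeof_elem=4):
    """Flatten a (block_w x block_h) 2-D region into a base byte offset
    plus per-element byte offsets, mirroring the vgather translation above.

    Zero-based block coordinates and a 4-byte element are assumed.
    Note that the flat offsets no longer carry row_width explicitly --
    this is the loss of image structure the text describes.
    """
    base = (row_width * block_y + block_x) * sizeof_elem
    offsets = [(x + y * row_width) * sizeof_elem
               for y in range(block_h)
               for x in range(block_w)]
    return base, offsets
```

For a (4 x 4) block this yields 16 offsets, matching the 16-offset form of the generic vgather in Equation (4).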
In one embodiment, the windows 204 and 208 may be stored in the cache 180, respectively, as windows 254 and 258. In one embodiment, if the program or an instruction loads a pixel (x,y) from the image 201, there appears to be a high likelihood that the neighboring pixels around the pixel (x,y) may be used soon after the pixel (x,y) is processed. Also, there appears to be a high likelihood that the neighboring pixels may be used many times after processing the pixel (x,y). In one embodiment, the high likelihood of neighboring pixels being used soon after processing the pixel (x,y) and then using the neighboring pixels multiple times soon after that may be referred to as "2-D locality". FIG. 3 illustrates the 2-D cache, which may be represented as a combination of sets and ways according to one embodiment. In one embodiment, the 2-D cache 180 may include an access logic 370, a control logic 380, and multiple memory blocks arranged in the form of columns (sets) and rows (ways). In one embodiment, the access logic 370 may support cache filling and 2-D cache look-up tasks described below. In one embodiment, the control logic 380 may initiate the access logic 370 to perform cache filling and 2-D cache look-up while the 2-D gather instruction may be performed by the execution unit 170. In one embodiment, the 2-D cache 180 may be viewed as a combination of multiple memory blocks, each of which may be uniquely identified by a combination of the identifier of a set and a way. In one embodiment, the 2-D cache 180 may include N sets (set 0 to set N) and M ways (way 0 to way M). In one embodiment, each memory block within the 2-D cache is uniquely identified by the identifier of the way and the set. In one embodiment, the 2-D cache may be viewed as a sliding window that may slide over the windows (i.e., a group of pixels) in the image stored in the memory 101. In one embodiment, the 2-D cache 180 may store image information of one or more windows such as 204 and 208. 
In one embodiment, during a first time point the 2-D cache 180 may store the pixels covered by the windows 204 and 208 in the sets and ways. In another embodiment, the 2-D cache 180 may store the pixels covered by the window 204 and then slide to cover the pixels of the window 208. Likewise, the 2-D cache 180 may store pixels covered by a first set of windows and then slide to store the pixels covered by the second set of windows. In one embodiment, the pixels in the window 204 in the main memory 101 may be mapped into memory blocks in the 2-D cache 180 and each memory block may be identified by a unique combination of the set number and the way number. For example, the memory block 300 may be uniquely identified by a combination of set number (N = 0) and a way number (M = 0). Similarly, the memory block 312 may be uniquely identified by a combination of set number (N = 1) and the way number (M = 2). In one embodiment, the 2-D cache 180 may adopt a 2-D cache filling policy to fill the memory blocks within the 2-D cache 180. In one embodiment, the 2-D cache includes N sets and M ways and is two dimensional. In one embodiment, the 2-D window 'W' such as 204 and/or 208 in the memory 101 may be mapped on to the 2-D cache 180 so as to avoid possible read conflicts for the 2-D region loads (for example, (1 x 16), (2 x 8), (4 x 4), (8 x 2), or (16 x 1)). In one embodiment, the image element (x, y) may be mapped on to the set and way of the cache, respectively, based on the Equations (1) and (2) above. For example, the mapping or cache filling may be implemented as Set = address[6...11] and way = Row mod Num_of_Ways. For a 2-D cache with 32 ways, the above example of filling the cache may result in a cache filling depicted in FIG. 4. In one embodiment, the window 204 may be mapped (or cache filled) to the 2-D cache 180. 
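This 32-way fill example can be imitated with a short sketch, assuming 16-element cache lines, so that a line starting at column x of row y maps to set (x / 16) mod Num_of_Sets and way y mod Num_of_Ways; the line width and set count are assumed values:

```python
LINE_ELEMS = 16      # assumed elements per cache line
NUM_OF_SETS = 64     # assumed set count
NUM_OF_WAYS = 32     # 32 ways, per the example above

def line_to_block(x, y):
    """Map the cache line starting at pixel (x, y) to a (set, way) block."""
    return (x // LINE_ELEMS) % NUM_OF_SETS, y % NUM_OF_WAYS
```

Under these assumptions, the lines starting at (0,0), (48,1), and (32,2) land in blocks (set 0, way 0), (set 3, way 1), and (set 2, way 2), consistent with the FIG. 4 examples below.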
For example, the pixels within the coordinates [(0,0) - (15,0)] in the image 204 may be mapped to a memory block 401-00 in the 2-D cache 180 and the pixels within the coordinates [(48,1) - (63,1)] of the image 204 may be mapped to a memory block 401-31 in the 2-D cache 180. Similarly, the pixels within the coordinates [(32,2)-(47,2)], [(48,3)-(63,3)], [(0,5)-(15,5)], and [(16,5)-(31,5)] of the image 204 may be mapped to the memory blocks 401-22, 401-33, 401-05, and 401-15, respectively. In one embodiment, the 2-D image may be, directly, loaded on to the 2-D cache 180. As the 2-D image may be mapped (or loaded) into the 2-D cache 180, directly, from the memory 101, the need for an intermediate 2-D register file (RF) or a 2-D scratch pad, which may require explicit pre-load, may be avoided. In one embodiment, the mapping of the two-dimensional (2-D) image using the 2-D gather instructions allows for a maximum of 2 iterations. For example, the 2-D gather instruction may gather data from a line (1 x 16), column (16 x 1), matrices (8 x 2), (4 x 4), and (2 x 8) and the maximum iterations involved may be equal to 2, 1, 2, 2, and 2 processing cycles, respectively. FIG. 5 illustrates various fields of the 2-D cache according to one embodiment. In one embodiment, each memory block such as 401-00 to 401-NM may include various fields such as tag (510-1 to 510-M), index (520-1 to 520-M), and data (540-1 to 540-M). In one embodiment, the tags 510-1 to 510-M may be used to determine an appropriate data for each element in the 2-D cache 180 after comparing the address provided by the address generation unit 160 with the tags 510-1 to 510-M. FIG. 6 illustrates an arrangement 601, which may determine one or more memory blocks that may be available for filling the image information according to one embodiment. In one embodiment, the arrangement 601 may include the address generation unit 160, sets and ways including memory blocks 620-1 to 620-P, and a tag array logic 600. 
In one embodiment, the tag array logic 600 may be included in the access logic 370 of FIG. 3 and the tag array logic 600 may operate with the address generation unit 160 to determine the one or more memory blocks that may be available for filling the image information. In one embodiment, the tag array 600 may include multiple X-NOR gates 630-1 to 630-P and the output of the X-NOR gates 630-1 to 630-P may be provided as an input to the P-input AND gate 640. In one embodiment, the address generation unit 160 may generate an address A1 and at least some of the bits (a1, a2, a3,...ak) of the address A1 may be provided as a first input to the X-NOR logic gates 630-1 to 630-P. In one embodiment, the bits in the tag may be provided as a second input to the X-NOR logic gates 630-1 to 630-P. In one embodiment, if there is a position-wise match in the bits in the tag with the bits in the address (i.e., if the bit values provided to the X-NOR are the same), the output generated by each of the X-NOR gates 630-1 to 630-P may be logic 1. In one embodiment, if the outputs of all the X-NOR gates 630-1 to 630-P are equal to 1, the output generated by the AND gate 640 may be equal to logic 1 as well. In one embodiment, the tag array 600 may thus determine the memory block, which includes a tag that is equal to the address generated by the address generation unit 160. FIG. 7 illustrates an arrangement 700, which may be used to arrange the data stored in the memory blocks in an order corresponding to the order of the addresses accessed by performing the 2-D gather instruction according to an embodiment. In one embodiment, the arrangement 700 may include the address generation unit 160, the sets and ways including the memory blocks 620-1 to 620-P, and the access logic 370 including a read/write logic 720 and a shuffle unit 750. In one embodiment, the 2-D cache look-up may be performed based on a technique, which may be referred to as 'direct map with tag comparison per way'. 
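The X-NOR/AND comparison of FIG. 6 can be modeled bitwise; a minimal sketch, assuming a tag field of `width` compared bits, is:

```python
def tag_matches(addr_bits, tag_bits, width):
    """Bitwise model of the tag array logic in FIG. 6.

    Each X-NOR gate outputs 1 when its two input bits are equal; the
    P-input AND gate outputs 1 only when every bit position matches,
    i.e. when the tag equals the compared address field.  `width` is
    the assumed number of compared bits.
    """
    mask = (1 << width) - 1
    xnor = ~(addr_bits ^ tag_bits) & mask  # per-bit X-NOR over `width` bits
    return xnor == mask                    # AND-reduction of all X-NOR outputs
```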
In one embodiment, such a technique may include identifying the memory blocks, which may be uniquely identified by a set and a way of the 2-D cache 180, retrieving the content of the tag field, and determining whether the data stored in the memory block identified by a unique combination of set and way is evicted or replaced. In one embodiment, a memory block such as 300 or 312 or 401-00, 401-05, 401-15, 401-22, 401-31, or 401-33 may be identified as described above with reference to FIG. 6. After identifying the memory blocks such as 401-00, 401-05, 401-15, 401-22, 401-31, or 401-33, the content or the image information in the memory blocks may be provided to the read/write logic 720 and the shuffle unit 750. In one embodiment, the read/write logic 720 may access the tag portions of the memory blocks 401-00, 401-05, 401-15, 401-22, 401-31, or 401-33 and determine if the tags are still relevant (i.e., not evicted or replaced). In one embodiment, the shuffle unit 750 may rearrange the data in the non-evicted memory blocks in an order of the addresses provided by the 2-D gather instruction. In one embodiment, the access logic 370 may access more than one cache line, which may include non-evicted data. In one embodiment, the 2-D cache 180 may support access of up to 16 separate cache lines per single processing cycle, unlike prior art caches, which may allow one cache line to be accessed per processing cycle. In one embodiment, the data stored in the relevant memory blocks within these cache lines may be extracted by the access logic 370 and arranged by the shuffle unit 750 to generate the 2-D gather data. As a result, the 2-D cache 180 may access more than one way per port, for example, if multiple elements may be stored in the same physical bank but within different sets. In one embodiment, the cache filling technique and the 2-D gather technique described above may minimize bank conflicts during the 2-D region loads. 
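A behavioral sketch of this lookup-and-shuffle step follows; the dictionary-based cache and the eviction flag are modeling conveniences assumed for illustration, not the hardware structures:

```python
def gather_lookup(cache, requests):
    """Behavioral model of the 2-D cache lookup plus shuffle.

    `cache` maps (set, way) -> (tag, evicted, data); `requests` lists
    (set, way, tag) triples in the address order of the 2-D gather
    instruction.  Data from non-evicted, tag-matching blocks is returned
    in request order, mirroring the reordering done by the shuffle unit;
    misses and evicted blocks yield None.
    """
    results = []
    for set_idx, way_idx, tag in requests:
        entry = cache.get((set_idx, way_idx))
        if entry is not None:
            stored_tag, evicted, data = entry
            if stored_tag == tag and not evicted:
                results.append(data)   # hit on a non-evicted block
                continue
        results.append(None)           # miss, tag mismatch, or evicted
    return results
```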
The operation of the 2-D gather instruction and the 2-D cache is described with reference to the 2-D data cache 180, for example. However, the techniques described above may be performed in other caches such as the L2 cache 190 or any other cache or any other memory as well. FIG. 8 is a computer system, which may support a 2-D gather instruction and a 2-D cache according to one embodiment. In one embodiment, the computer system 800 may comprise a processor 802, which may include a single instruction multiple data (SIMD), reduced instruction set (RISC), or other similar general purpose central processing unit (CPU) 803, a graphics processor unit (GPU) 805, and a cache 806. The processor 802, in one embodiment, may store a sequence of instructions, to provide and process the data bits to perform multi-bit error correction, in the machine-readable storage medium 825. However, the sequence of instructions may also be stored in the memory 820 or in any other suitable storage medium. The processor 802 is shown to include the CPU 803 and the GPU 805; however, other embodiments are possible. One such embodiment may include the processor 802 comprising multiple cores, wherein each core may be capable of performing the functions of both the CPU and the GPU. In another embodiment, the CPU 803 and the GPU 805 may be fabricated on a single die. In yet another embodiment, the CPU 803 and the GPU 805 may be fabricated on separate dies. Such other embodiments may support the 2-D gather instruction and the 2-D cache as well. The processor 802 that operates the computer system 800 may be one or more processor cores coupled to logic 830. In one embodiment, the processor 802 may comprise a central processing unit 803 and a memory subsystem MSS 804. 
In one embodiment, the CPU 803 or the GPU 805 may perform the 2-D gather instruction described above and the cache 806 may support the 2-D cache structure, 2-D cache filling, and the 2-D gather techniques described above. The logic 830, for example, could be chipset logic in one embodiment. The logic 830 is coupled to the memory 820, which can be any kind of storage, including optical, magnetic, or semiconductor storage. The I/O devices 860 may allow the computer system 800 to interface with devices such as network devices or users of the computer system 800. Certain features of the invention have been described with reference to example embodiments. However, the description is not intended to be construed in a limiting sense. Various modifications of the example embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the invention pertains, are deemed to lie within the spirit and scope of the invention.
The application relates to a trench isolated capacitor. An integrated trench capacitor and a method for making the trench capacitor are disclosed. The method includes forming (1105) a trench (102) in a silicon layer (106), forming (1110) a first dielectric (110) on the exposed surface of the trench, performing (1115) an anisotropic etch of the first dielectric to expose silicon at the bottom of the trench, implanting (1120) a dopant (112) into exposed silicon at the bottom of the trench, forming (1125) a first polysilicon layer (116) over the first dielectric, forming (1130) a second dielectric (118) over the first polysilicon layer, and forming (1135) a second polysilicon layer (120) over the second dielectric to fill the trench.
1. A method of forming a trench capacitor in a semiconductor wafer, comprising: forming a trench in a silicon layer; forming a first dielectric on the exposed surface of the trench; performing an anisotropic etch of the first dielectric to expose silicon at the bottom of the trench; implanting a dopant into the exposed silicon at the bottom of the trench; forming a first polysilicon layer over the first dielectric; forming a second dielectric over the first polysilicon layer; and forming a second polysilicon layer over the second dielectric to fill the trench. 2. The method of claim 1, further comprising etching a surface of the wafer to remove the first dielectric layer, the second dielectric layer, the first polysilicon layer, and the second polysilicon layer from the surface of the semiconductor wafer. 3. The method of claim 2, further comprising depositing a passivation layer on a surface of the semiconductor wafer. 4. The method of claim 3, further comprising forming a first contact that contacts the first polysilicon layer and forming a second contact that contacts the second polysilicon layer. 5. The method of claim 4, wherein the first contact is a substrate contact formed simultaneously with the trench capacitor. 6. The method of claim 4, wherein the first contact is formed in a neck of the trench capacitor, the neck having a smaller width than a body of the capacitor. 7. The method of claim 4, further comprising forming a mask layer on the silicon layer prior to forming the trench. 8. The method of claim 1, wherein the trench capacitor is formed simultaneously with a substrate contact on the semiconductor wafer. 9. The method of claim 4, wherein providing the first dielectric comprises growing a first thermal oxide. 10. The method of claim 3, wherein providing the second dielectric includes forming an oxide/nitride/oxynitride combination on a sidewall of the trench. 11. An integrated circuit comprising: a substrate; and an integrated trench capacitor formed in the substrate, the 
integrated trench capacitor comprising:a first dielectric that serves as a liner for the first deep trench in the semiconductor,a first polysilicon layer covering the first dielectric, wherein the first polysilicon layer is shorted to the semiconductor at the bottom of the first deep trench,A second polysilicon layer filling the central portion of the first deep trench, the second polysilicon layer being separated from the first polysilicon layer by a second dielectric.a first contact that provides electrical connection to the first polysilicon layer, andA second contact provides electrical connection to the second polysilicon layer.12.The integrated circuit of claim 11 wherein the second contact comprises a substrate contact formed in a trench adjacent to the integrated trench capacitor.13.The integrated circuit of claim 11 wherein the second contact is formed in a portion of the first polysilicon layer in a narrow portion of the first deep trench.14.The integrated circuit of claim 13 wherein the first deep trench has a width of about three microns.15.The integrated circuit of claim 14, wherein the narrow portion of the first deep trench has a width of about one micron.16.The integrated circuit of claim 11, further comprising a substrate contact including a third dielectric formed in a second deep trench and a third polysilicon covering the third dielectric A layer, wherein the third polysilicon layer is shorted to the semiconductor at the bottom of the second deep trench.17.The integrated circuit of claim 17, wherein the substrate contact is adjacent to the integrated trench capacitor.18.A method of simultaneously forming a trench capacitor and a substrate contact in a semiconductor wafer, the method comprising:Forming a first trench and a second trench in the silicon layer;Forming a first dielectric on exposed surfaces of the first trench and the second trench;Performing anisotropic etching of the first dielectric to expose silicon at the bottom of the first 
trench and the second trench;Implanting dopants into the exposed silicon at the bottom of the first trench and the second trench;Forming a first polysilicon layer over the first dielectric, the first polysilicon layer serving as a liner for the first trench and filling the second trench;Forming a second dielectric over the first polysilicon layer; andA second polysilicon layer is formed over the second dielectric to fill the first trench.19.The method of claim 18, further comprising depositing a passivation layer on a surface of the semiconductor wafer.20.The method of claim 19, further comprising forming a first contact that contacts the first polysilicon layer formed in the first trench, contacting the second contact in the second trench A second contact of a polysilicon layer and a third contact contacting the second polysilicon layer in the first trench.
TRENCH ISOLATED CAPACITOR

Technical Field

The disclosed embodiments generally relate to the field of semiconductor processing and, more specifically but without limitation, to a trench isolated capacitor.

Background

Some integrated circuits demand high-density capacitors. Trench capacitors are good candidates, but they bring additional costs, such as one or more additional process masks and additional processing steps. In circuits that already contain trench structures (such as substrate contacts), adding trench capacitors without disturbing the existing trench structures adds complexity to the current process. An improved method of integrating high-density capacitors into existing processes is therefore desired.

Summary of the Invention

The disclosed embodiments provide a high-density trench capacitor that can be formed at the same time as a substrate contact formed in a deep trench structure, along with a method of fabricating the trench capacitor using an existing process for producing substrate contacts. Using the existing process means that integrating the capacitor requires no new masks and adds only minimal processing to the existing flow.

In one aspect, an embodiment of a method of forming a trench capacitor in a semiconductor wafer is disclosed. The method includes: forming a trench in a silicon layer; forming a first dielectric on an exposed surface of the trench; performing an anisotropic etch of the first dielectric to expose silicon at the bottom of the trench; implanting a dopant into the exposed silicon at the bottom of the trench; forming a first polysilicon layer over the first dielectric; forming a second dielectric over the first polysilicon layer; and forming a second polysilicon layer over the second dielectric to fill the trench.

In another aspect, an embodiment of an integrated trench capacitor is disclosed.
The integrated trench capacitor includes a first dielectric serving as a liner for a deep trench in a semiconductor; a first polysilicon layer covering the first dielectric, wherein the first polysilicon layer is shorted to the semiconductor at the bottom of the deep trench; a second polysilicon layer filling a central portion of the deep trench, the second polysilicon layer being separated from the first polysilicon layer by a second dielectric; a first contact providing electrical connection to the first polysilicon layer; and a second contact providing electrical connection to the second polysilicon layer.

Description of the Drawings

Embodiments of the present disclosure are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements. It should be noted that different references to "an" or "one" embodiment in this disclosure are not necessarily to the same embodiment, and such references may mean at least one. Further, when a specific feature, structure, or characteristic is described in conjunction with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described.

The drawings are incorporated into, and form a part of, the specification to illustrate one or more exemplary embodiments of the present disclosure. Various advantages and features of the disclosure will be understood from the following detailed description and the appended claims, taken with reference to the accompanying drawings, in which:

FIGS. 1A-9A depict a process of forming a trench capacitor in accordance with an embodiment of the present disclosure;

FIGS. 1B-9B depict a process of forming a substrate contact simultaneously with forming the capacitor of FIGS. 1A-9A;

FIG. 10 depicts a top view of a capacitor in accordance with an embodiment of the present disclosure;

FIGS. 11A-11E depict a simplified flow diagram for forming an integrated capacitor in accordance with an embodiment of the present disclosure; and

FIG. 12 depicts a trench capacitor known in the art.

Detailed Description

Specific embodiments of the present invention will now be described in detail with reference to the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that the invention may be practiced without these details. In other instances, well-known features have not been described in detail in order to avoid unnecessarily complicating the description.

FIG. 12 discloses a trench capacitor 1200 known in the art. As can be seen in this figure, a trench capacitor may be formed in a deep trench etched into a doped region 1202 of the semiconductor. Once the trench is formed, an oxide layer or other dielectric 1204 is formed over the exposed surface of the trench, and the remaining portion of the trench is then filled with doped polysilicon 1206 to form capacitor 1200. While region 1202 is connected to ground through the substrate (not explicitly shown), a contact 1208 is formed to the inner polysilicon layer. Forming capacitor 1200 requires adding a mask to the process, along with the steps mentioned above. When the layout requires substrate contacts alongside the capacitors, it can be difficult to provide both features at the same time.

The disclosed capacitor is formed at the same time as a deep substrate contact, so that integrating the capacitor requires no additional mask and adds only two processing steps to the existing process for forming the substrate contacts.
Therefore, the processes of forming these two devices are explained together. FIGS. 1-9 show various points in the process of forming these two deep trench structures. At each point, the drawing with suffix "A" depicts the formation of the trench capacitor, while the drawing with suffix "B" depicts the formation of the substrate contact. As can be seen in FIGS. 1A-1B, a trench 102 for the capacitor and a trench 104 for the substrate contact are formed in a silicon layer 106, which may be the substrate of a wafer, an epitaxial layer grown on the substrate, or a combination of both. To form these trenches, a mask layer 108 is formed on the silicon layer 106 and patterned to have openings over the desired locations of the trenches. In at least one embodiment, the mask layer is a metal oxide hard mask, such as titanium oxide, tungsten oxide, or zirconium oxide applied in a spin-coating process. In another embodiment, the mask layer may be a photoresist. A hard mask may be particularly appropriate when the desired depth of the trench capacitor is large.

Etching is performed to create trenches 102 and 104, and mask layer 108 may then be removed. In one embodiment, the depth of the trenches is about 20 microns. In other embodiments, the depth of the trenches may be in the range of 5 microns to 70 microns. Although not necessarily visible in these drawings, the diameter or width of trench 102 is greater than the diameter or width of trench 104. It should be appreciated that the drawings of the present application are not necessarily drawn to scale. In one embodiment, trench 102 has a width of about 3 microns and trench 104 has a width of about 1 micron. In other embodiments, the width of trench 102 may be in the range of 2 microns to 10 microns, and the width of trench 104 may be in the range of 1 micron to 5 microns.
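The width difference just described determines whether a later conformal deposition fills a trench or merely lines it, since an ideally conformal film grows inward from both sidewalls. A minimal arithmetic sketch of this rule, using the 0.6-micron polysilicon thickness and the 1-micron and 3-micron trench widths given in the embodiments (the function name is ours, not the patent's):

```python
# Sketch: when does an ideally conformal film pinch off a trench?
# Values below are the illustrative figures from the embodiments
# (0.6 um polysilicon; 3 um capacitor trench; 1 um contact trench).

def fills_trench(film_thickness_um: float, trench_width_um: float) -> bool:
    """A conformal film grows from both sidewalls, so it closes the
    trench once twice its thickness reaches the trench width."""
    return 2.0 * film_thickness_um >= trench_width_um

poly_t = 0.6  # um, first polysilicon layer 116

print(fills_trench(poly_t, 1.0))  # contact trench 104: True (filled)
print(fills_trench(poly_t, 3.0))  # capacitor trench 102: False (lined only)
```

This is the mechanism by which one deposition step simultaneously produces a filled substrate contact and a lined capacitor bottom plate.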
As will be seen below, this difference in width results in large differences between the layers formed in the two trenches.

In FIGS. 2A and 2B, a thin thermal oxide is grown on the exposed silicon, followed by sub-atmospheric chemical vapor deposition of silicon oxide, to form a liner 110. A dry etch (e.g., using reactive ion etching (RIE)) is performed on trenches 102, 104 to etch through the liner 110 at the bottom of each trench, exposing the silicon layer 106. It should be understood that references herein to the "bottom" of trenches 102, 104 refer to the orientation of the capacitor and substrate contact as seen in the figures, i.e., the closed end of the trench.

In FIGS. 3A and 3B, dopants are implanted into the silicon at the bottom of trenches 102, 104 to provide contacts 112, 114 to the substrate. In at least one embodiment, the substrate is P-type and boron is implanted, although other P-type dopants may also be used. In another embodiment, an N-type dopant, such as phosphorus, may be implanted for an N-type substrate. In one embodiment, the doping levels of contacts 112, 114 are approximately 1×10¹⁹/cm³ to 5×10¹⁹/cm³. Up to this point, the process has produced essentially the same result in both structures, but as the process advances from here, the different widths of the two trenches lead to different results.

In FIGS. 4A and 4B, a thin, doped polysilicon layer 116 is deposited on the wafer. In one embodiment, polysilicon layer 116 has a thickness of about 0.6 microns. This thickness of polysilicon completely fills trench 104, which has a width of, for example, 1.0 micron, but only lines the sides of trench 102, which has a width of, for example, 3 microns. In one embodiment, the doping level of polysilicon layer 116 is approximately 1×10¹⁹/cm³ to 5×10¹⁹/cm³.

In FIGS. 5A and 5B, a dielectric layer 118 is deposited on top of polysilicon layer 116. In trench 102, dielectric layer 118 forms a second liner, so that polysilicon layer 116 is sandwiched between the two dielectric layers except where polysilicon layer 116 contacts the silicon layer 106. Because trench 104 is already filled, dielectric layer 118 is deposited only on the surface above the substrate contact. In one embodiment, dielectric layer 118 is a deposited oxide. In another embodiment, dielectric layer 118 is an oxide/nitride/oxynitride (ONO) stack.

In FIGS. 6A and 6B, a polysilicon layer 120 is deposited. Polysilicon layer 120 completely fills trench 102 and forms a layer on the surface above trench 104. It should be appreciated that the thickness of polysilicon layer 120 may vary depending on the width of trench 102. In one embodiment, polysilicon layer 120 has a doping level of about 1×10¹⁹/cm³ to 5×10¹⁹/cm³.

In FIGS. 7A and 7B, the surface of the wafer is etched back to remove polysilicon layer 116, dielectric layer 118, and polysilicon layer 120 from the surface of the wafer, leaving the structures shown in these two figures. In FIGS. 8A and 8B, a passivation layer 122 is deposited over the wafer, and in FIGS. 9A and 9B, a contact 124 is formed to the polysilicon layer 120 of capacitor 126 and a contact 128 is formed to the substrate contact 130. As far as these two structures are concerned, the process is now complete; of course, further processing may be performed on the wafer to form structures not shown in these figures.

As can be seen for capacitor 126, two capacitor plates have been formed, with polysilicon layer 116 forming the bottom plate and polysilicon layer 120 forming the top plate.
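For a rough sense of scale, the plate area of a single trench capacitor can be estimated from the geometry given above (a trench about 3 microns wide and 20 microns deep). The sketch below is only an order-of-magnitude illustration: the 20 nm oxide at relative permittivity 3.9 and the square cross-section are our assumptions, since the dielectric thickness values did not survive in this copy of the text.

```python
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def trench_cap_estimate(width_um: float, depth_um: float,
                        t_diel_nm: float, eps_r: float) -> float:
    """Parallel-plate estimate over the trench sidewall + bottom area.

    Treats the trench as a square column and ignores fringing and the
    neck region, so the result is an order-of-magnitude figure only.
    """
    w = width_um * 1e-6
    d = depth_um * 1e-6
    t = t_diel_nm * 1e-9
    area = 4 * w * d + w * w  # four sidewalls plus the bottom
    return eps_r * EPS0 * area / t

# Trench geometry from the text: ~3 um wide, ~20 um deep.
# 20 nm oxide at eps_r = 3.9 is an ASSUMPTION, not a value from the patent.
c = trench_cap_estimate(3.0, 20.0, 20.0, 3.9)
print(f"~{c * 1e15:.0f} fF per trench")  # a few hundred femtofarads
```

The deep sidewalls dominate the area term, which is why deep trench capacitors achieve far higher capacitance per unit of wafer surface than planar capacitors of the same footprint.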
One skilled in the art will recognize that, because polysilicon layer 116 is in contact with silicon layer 106, polysilicon layer 116 will be grounded and polysilicon layer 120 will carry the higher voltage. So far, the drawings have shown only the external contact to polysilicon layer 120. There are at least two different ways to provide a contact to polysilicon layer 116. In a first embodiment, the substrate contact 130 is formed next to the capacitor 126, so that the substrate contact provides contact to the bottom plate of the capacitor 126 through the silicon layer 106. A second embodiment is shown in FIG. 10, which depicts a top-down view of capacitor 126. In this embodiment, the body of capacitor 126 has a width M and the neck of capacitor 126 has a width N. In at least one embodiment, M equals 3 microns and N equals 0.5 microns. Thus, when polysilicon layer 116 is deposited, the wider body of capacitor 126 receives only a thin layer of doped polysilicon, while the narrower neck is completely filled with doped polysilicon. When contacts are formed, contact 124 is formed to contact polysilicon layer 120 and a smaller contact 132 is formed to contact polysilicon layer 116.

FIGS. 11A-11E depict a simplified flow diagram of a method of forming a trench capacitor in a semiconductor. In FIG. 11A, the method begins by forming (1105) a trench in the silicon layer. After the trench is created, the method forms (1110) a first dielectric on the exposed surface of the trench and then performs (1115) an anisotropic etch of the first dielectric to expose silicon at the bottom of the trench. A dopant is implanted (1120) into the exposed silicon at the bottom of the trench, and a first polysilicon layer is then formed (1125) over the first dielectric. A second dielectric layer is formed (1130) over the first polysilicon layer, and a second polysilicon layer is formed (1135) over the second dielectric.
The second polysilicon layer fills the trench. In FIG. 11B, the method continues by etching (1140) the surface of the wafer to remove the dielectric and polysilicon layers from the wafer surface. In FIG. 11C, the method continues by depositing (1145) a passivation layer over the wafer surface, and in FIG. 11D, the method further continues by forming (1150) a first contact that contacts the first polysilicon layer and forming (1155) a second contact that contacts the second polysilicon layer. In FIG. 11E, the method forms (1160) a mask layer on the silicon layer prior to forming the trench.

Applicants have disclosed capacitors that can be integrated with existing processes that form substrate contacts in trenches. In at least one embodiment, the disclosed capacitors require no new masks and add only two steps to the existing process, i.e., forming the dielectric layer 118 and the polysilicon layer 120.

Although various embodiments have been shown and described in detail, the claims are not limited to any particular embodiment or example. None of the above-described embodiments should be read as implying that any particular component, element, step, or function is essential such that it must be included in the scope of the claims. Reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more." All structural and functional equivalents to the elements of the above-described embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Accordingly, those of ordinary skill in the art will recognize that the exemplary embodiments described herein can be practiced with various modifications and variations within the spirit and scope of the appended claims.
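As a bookkeeping sketch of the point made above — that the capacitor flow shares the substrate-contact flow and adds only the second dielectric and second polysilicon steps — the two flows can be written as ordered lists. The step names are descriptive paraphrases, not the patent's claim language:

```python
# Shared flow for the substrate contact (narrow trench, filled by
# the first polysilicon deposition).
substrate_contact_flow = [
    "form trench",
    "form first dielectric liner",
    "anisotropic etch of liner at trench bottom",
    "implant dopant at trench bottom",
    "deposit first polysilicon (fills narrow trench)",
]

# The capacitor (wide trench) reuses every step above and appends two.
capacitor_flow = substrate_contact_flow + [
    "deposit second dielectric (layer 118)",
    "deposit second polysilicon fill (layer 120)",
]

added = [s for s in capacitor_flow if s not in substrate_contact_flow]
print(len(added))  # 2 additional steps, matching the disclosure
```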
Integration schemes are presented that decouple the placement of deep source/drain (S/D) implants with respect to a selective epitaxial growth (SEG) raised S/D region, and likewise decouple silicide placement relative to a raised S/D feature. These integration schemes may be combined in multiple ways to permit independent control of the placement of these features to optimize device performance. The methodology utilizes multiple spacers to decrease current-crowding effects caused by proximity effects between LDD and deep S/D regions in reduced-architecture devices.
WHAT IS CLAIMED IS:

1. A method comprising:
forming a first sidewall spacer (13) adjacent to a conductive gate (15) overlying a substrate (10);
forming an epitaxial layer (14) overlying the substrate (10) adjacent to the first sidewall spacer (13);
forming a second sidewall spacer (17) adjacent to the conductive gate (15) and overlying a portion of the epitaxial layer (14); and
forming a deep source/drain region (18), wherein the deep source/drain region (18) is offset from the conductive gate (15) by an amount defined by the second sidewall spacer (17).

2. The method of claim 1, further comprising removing the first sidewall spacer (13) prior to forming the second sidewall spacer (17).

3. The method of claim 2, wherein the width of the second sidewall spacer (17) is greater than the width of the first sidewall spacer (13).

4. The method of claim 2, wherein the width of the second sidewall spacer (17) is less than the width of the first sidewall spacer (13).

5. The method of claim 1, wherein forming the epitaxial layer further comprises forming the epitaxial layer (16) overlying the surface of the conductive gate.

6. A method of manufacturing a semiconductor device comprising:
forming a first sidewall spacer (6) adjacent to a conductive gate (25) overlying a substrate (20);
forming a second sidewall spacer (23) overlying the first sidewall spacer (6) and adjacent to the conductive gate (25);
forming an epitaxial layer (24) overlying a region of the substrate (20) adjacent to the second sidewall spacer (23);
forming a deep source/drain region (28), wherein the deep source/drain region (28) is offset from the conductive gate (25) by an amount defined by the first sidewall spacer (6) and the second sidewall spacer (23);
forming a third sidewall spacer (21) adjacent to the second sidewall spacer (23); and
forming a silicide (30) offset from the conductive gate (25) by an amount defined by the third sidewall spacer (21).

7. The method of claim 6, further comprising forming a thermal layer (22) overlying the second sidewall spacer.

8. The method of claim 6, wherein the step of forming the epitaxial layer (24) further comprises forming the epitaxial layer (31) overlying the surface of the conductive gate.

9. The method of claim 6, wherein the first sidewall spacer (6) comprises a material selectively etchable with respect to an immediately adjacent sidewall spacer material (23).

10. The method of claim 6, wherein the third sidewall spacer comprises a material selectively etchable with respect to an immediately adjacent sidewall spacer material (23).
METHOD OF FORMING A SEMICONDUCTOR DEVICE HAVING AN EPITAXIAL LAYER AND DEVICE THEREOF

Technical Field

The present disclosure relates generally to semiconductor devices and manufacturing processes, and more particularly to transistor formation associated with semiconductor devices.

Background Art

Typical semiconductor integration schemes utilize a single spacer, generally a nitride spacer, for defining the location of device characteristics and features, such as the deep source/drain (S/D) implants, the silicides, and the raised S/D regions, at a single, specific distance relative to the polysilicon gate edge. The use of a single spacer throughout the formation processes limits the design engineer when attempting to adjust device characteristics and features such as current crowding, series resistance, overlap capacitance, and junction depth to optimally tune the performance of a device. For example, if the width of a spacer is changed, all of the characteristics and features of a device can change as a result. Therefore, a method that permits independent control of more characteristics and features of a device, such as deep S/D locations, raised S/D regions, and silicide placement, would be useful.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art, by referencing the accompanying drawings. It will be appreciated that elements illustrated in the figures are not necessarily drawn to scale.

FIGS. 1 through 9 illustrate, in cross section, semiconductor device manufacturing process steps according to various embodiments of the present disclosure;

FIGS. 10 and 11 illustrate cross-sectional views of portions of semiconductor devices manufactured according to at least one embodiment of the present disclosure; and

FIG. 12 is a flow diagram of a method for decoupling raised source/drain, deep source/drain implant, and silicide locations according to at least one embodiment of the present disclosure.

The use of the same reference symbols in different drawings indicates similar or identical items.

DETAILED DESCRIPTION OF THE DRAWINGS

Integration schemes are presented for decoupling the placement of deep source/drain (S/D) implants with respect to a selective epitaxial growth (SEG) raised S/D region, as well as decoupling silicide placement relative to a raised S/D feature. These integration schemes may be combined in multiple ways to permit independent control of the placement of these features for optimal device performance.

In an embodiment, the present disclosure provides a method for using spacers to independently control the location of a deep source/drain (S/D) implantation and raised S/D silicide locations with respect to the edge of a transistor gate. The method comprises forming a first sidewall spacer adjacent to a conductive gate, then forming an epitaxial layer overlying the substrate adjacent to the first sidewall spacer. Following this selective epitaxial growth (SEG) process, the first sidewall spacer is removed. A second sidewall spacer is then formed adjacent to the conductive gate and overlying a portion of the epitaxial layer. Deep S/D regions are then formed. These deep S/D regions are offset from the conductive gate structure by an amount defined by the second sidewall spacer. This method is presented with reference to FIGS. 1 through 5.

In another embodiment, a method of manufacturing a semiconductor device utilizing a plurality of spacers to control the location of a deep source/drain (S/D) implantation and raised S/D silicide locations with respect to the edge of a transistor gate is presented.
This method comprises forming a first sidewall spacer adjacent to a conductive gate overlying a substrate, then forming a second sidewall spacer overlying the first sidewall spacer and adjacent to the conductive gate. Following the formation of the second sidewall spacer, an epitaxial layer is formed overlying a region of the substrate adjacent to the second sidewall spacer. Deep S/D regions are then formed, and these deep S/D regions are offset from the conductive gate by an amount defined by the first and second sidewall spacers (and thermal layer, if utilized). Next, a third sidewall spacer is formed adjacent to the second sidewall spacer, and a silicide is formed which is offset from the conductive gate by an amount defined by the third sidewall spacer. This method is presented with reference to FIGS. 6 through 9.

FIG. 1 illustrates a portion of a location 100 of a semiconductor device during a manufacturing process according to an embodiment of the present disclosure. At the manufacturing stage presented in FIG. 1, a substrate 10, a conductive gate 15, and a first set of sidewall spacers 13 have been formed. Semiconductor substrate 10 can be a mono-crystalline silicon substrate. Alternatively, substrate 10 can also be a gallium arsenide substrate, a silicon-on-insulator substrate, a silicon-on-sapphire substrate, a semiconductor-on-insulator (SOI) substrate, or the like. Conductive gate 15 is typically poly-crystalline or amorphous silicon having a typical critical dimension (CD) length (L) ranging from 300 to 1000 Angstroms (30 to 100 nm), and a typical height ranging from 1000 to 1500 Angstroms.

Sidewall spacers 13 will typically comprise an oxide- or nitride-containing material. The width of the sidewall spacers 13 typically ranges from 400 to 800 Angstroms, depending upon the desired location of the raised S/D, as seen in FIG. 2.
Although sidewall spacers 13 are shown as being symmetric, sidewall spacers 13 may be asymmetric, depending upon device requirements. Various deposition and masking techniques are known which may be utilized with the teachings of the present disclosure to form sidewall spacers 13 into a desired configuration as regards symmetry and the location of raised S/D regions.

FIG. 2 illustrates location 100 following the formation of an epitaxial layer 14 overlying the substrate and adjacent to sidewall spacers 13, typically after a pre-cleaning process, and formation of an epitaxial layer 16 overlying the surface of the conductive gate 15. It should be noted that the epitaxial layer 16 is optional; that is, in embodiments where no epitaxial layer is desired overlying the surface of the gate structure, a mask, e.g., an ARC or BARC, may be utilized to prevent the formation of an epitaxial layer overlying the gate structure. The thickness of the epitaxial layer 14 typically ranges from 100 to 300 Angstroms, depending upon device requirements and/or a desired thickness. The epitaxial layer 14 will serve as a raised S/D region for the location 100. After the SEG process to form epitaxial layer 14, location 100 undergoes an etch process to remove sidewall spacers 13, as seen in FIG. 3.

FIG. 3 illustrates location 100 following removal of sidewall spacers 13. Etch chemistries and techniques suitable to selectively etch the sidewall spacers depend upon the material composition of the spacers 13. For example, in embodiments where spacers 13 are a nitride, a wet etch utilizing hot phosphoric acid may be utilized. If spacers 13 are an oxide, an anisotropic dry etch process utilizing SF6 may be used.

FIG. 4 shows location 100 following a deposition and etch process to form second sidewall spacers 17 overlying a portion of the epitaxial layer 14. Sidewall spacers 17 will serve to define the deep S/D edge in a subsequent implantation process.
As before, the material composition of sidewall spacers 17 may comprise an oxide material, a polysilicon, or a nitride material such as silicon nitride. The symmetric spacers shown in FIG. 4 are not the only possible outcome of the methodology of the present disclosure. For example, the dotted lines shown within sidewall spacers 17 represent the range of other possible spacer widths, which can vary or be asymmetric. In an embodiment, the width of the second sidewall spacer 17 can be greater than the width of the first sidewall spacer (item 13, FIG. 1). In another embodiment, the width of the second sidewall spacer 17 can be less than the width of the first sidewall spacer. As before, device requirements will drive the configuration of sidewall spacers 17 with regard to symmetry and width. Following formation of sidewall spacers 17, an implantation process to form deep source/drain regions is conducted.

FIG. 5 illustrates a cross-sectional view of location 100 undergoing an implantation process 19 to form deep S/D regions 18 within the substrate 10. As seen in FIG. 5, the deep S/D regions 18 are offset from the gate structure 15 by an amount defined by the outer edges of the second sidewall spacers 17. Thus, by varying the width and/or symmetry of sidewall spacers 17, a process engineer can readily offset the locations of the S/D regions 18 from the edges of the gate structure 15 and the edge of the epitaxial layer 14 to meet particular design criteria or device technology requirements. The independent placement of the S/D 18 edge and the epitaxial layer 14 can be used to control series resistance and dopant gradient and depth. De-convolving these device controls adds more flexibility in tuning the transistor for a desired performance level.

FIGS. 6 through 9 illustrate cross-sectional views of a location 200 undergoing semiconductor device manufacturing process steps according to an embodiment of the present disclosure. In FIG. 6, a portion of a location 200 is shown after undergoing photolithography, deposition, and etch processes to form a conductive gate 25 overlying a substrate 20, a first gate sidewall spacer 6, and a second sidewall spacer 23 adjacent to the conductive gate 25. Semiconductor substrate 20 can be a mono-crystalline silicon substrate. Alternatively, substrate 20 can also be a gallium arsenide substrate, a silicon-on-insulator substrate, a silicon-on-sapphire substrate, a semiconductor-on-insulator (SOI) substrate, or the like. Conductive gate 25 is preferably poly-crystalline or amorphous silicon having a typical critical dimension length ranging from 300 to 1000 Angstroms, and a typical height ranging from 1000 to 1500 Angstroms.

Sidewall spacers 23 are formed immediately adjacent to the first sidewall spacer 6 and comprise a nitride or other suitable material, such as an oxide. Spacers 23 may range in width from 300 to 800 Angstroms. Sidewall spacer 6 may comprise an oxide material or a nitride material, and ranges in width from 100 to 250 Angstroms.

FIG. 7 illustrates a cross-sectional view of location 200 following a pre-clean and the formation of an epitaxial layer 24 overlying a region of the substrate 20 adjacent to the second sidewall spacer 23, and formation of an optional epitaxial layer 26 overlying the top surface of the conductive gate 25. The thickness of the epitaxial layer 24 typically ranges from 100 to 300 Angstroms, depending upon device requirements and/or a desired thickness. The epitaxial layer 24 will serve as a raised S/D region for the device formed at location 200. After the SEG process to form epitaxial layer 24, location 200 undergoes an implantation process 29 to form deep S/D regions 28. The deep S/D regions 28 are offset from the conductive gate 25 by an amount defined by the first sidewall spacer 6 and the second sidewall spacer 23.
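The offset bookkeeping in this second scheme can be sketched as simple sums of spacer widths, reading each offset as cumulative from the gate edge outward (a geometric reading; the claims attribute each offset to the outermost spacer). The specific widths below are assumed picks within the ranges stated in the text (first spacer 100-250 Å, second spacer 300-800 Å, third spacer 200-400 Å):

```python
# Sketch: offsets from the gate edge in the multi-spacer scheme.
# Widths in Angstroms; the chosen values are illustrative assumptions
# within the ranges given in the disclosure.

def deep_sd_offset(spacer1_A: float, spacer2_A: float) -> float:
    """Deep S/D edge: set by the first + second sidewall spacers."""
    return spacer1_A + spacer2_A

def silicide_offset(spacer1_A: float, spacer2_A: float,
                    spacer3_A: float) -> float:
    """Silicide edge: the third spacer adds further standoff."""
    return spacer1_A + spacer2_A + spacer3_A

s1, s2, s3 = 150.0, 500.0, 300.0  # assumed widths, Angstroms
print(deep_sd_offset(s1, s2))       # 650.0 A from the gate edge
print(silicide_offset(s1, s2, s3))  # 950.0 A from the gate edge
```

Because each spacer width can be chosen independently, the deep S/D edge and the silicide edge move independently of each other and of the raised S/D location, which is the decoupling the disclosure is after.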
Following the implantation process 29, a thermal oxidation process to form a thermal oxide layer 22 overlying the second sidewall spacer 23 may be performed. Alternately, the implantation process 29 may follow the thermal oxidation process to form oxide layer 22, as illustrated in FIG. 7.

After implantation 29, a third sidewall spacer 21 is formed adjacent to the second sidewall spacer 23. The third sidewall spacer 21 may comprise a nitride material or an oxide material. Spacers 21 range in width from 200 to 400 Angstroms, depending upon the amount of offset desired between the edge of a subsequently formed silicide layer (FIG. 9) and the edges of conductive gate structure 25.

FIG. 9 illustrates location 200 after formation of a silicide layer 30 and a silicide cap 31. Silicide layer 30 is offset from the gate structure 25 by an amount defined by the third sidewall spacer 21. The advantage of offsetting the silicide layer 30 from the gate structure 25 is that a process engineer can employ these offsets separately to meet a device technology requirement or particular design criterion. Independent placement of the S/D 28 edge and the silicide layer 30 permits the process engineer to control series resistance and dopant gradient and depth, thus increasing flexibility in tuning the transistor for a desired performance level. Again, it should be noted that although spacers 23 and 21 are shown as being symmetric in FIGS. 6 through 9, numerous other combinations of varying widths are possible. Hence, utilizing the teachings of the present disclosure, it is possible to control the offset of the implant regions of the deep S/D 28 and the offset of the silicide layer 30 independently. Location 200 is now ready to undergo further manufacturing steps toward device completion.

Application of the methods as taught herein offers the advantage of permitting variable integration schemes that are suitable for the production of both NFET and PFET devices.
For example, utilizing the disclosed methods, one may place the silicide for an NFET device closer to the gate edge than would be the case for a PFET device, for which placement of the silicide further from the gate edge is desirable, due to the dopant gradient and concentration in each regime. Generally, the NFET has a higher dopant concentration in the S/D and extension regions. The n-type dopant, e.g., arsenic, does not diffuse to the degree that the p-type dopant, e.g., boron, does; thus the n-type junctions are more abrupt. This abrupt junction means that the silicide may be moved closer to the poly gate on the NFET and still maintain a good silicide-to-silicon contact resistance. However, if the silicide is moved too close to the poly gate on a PFET region, the silicide may be placed into a region that has lower dopant concentration due to diffusion, and the silicide-to-silicon resistance will increase, degrading device performance. The present disclosure thus enables greater flexibility for a process engineer to tailor the location of the silicide in order to optimize device performance.

FIGS. 10 and 11 illustrate cross-sectional views of a portion 400 and a portion 500, respectively, of a semiconductor device manufactured according to embodiments of the present disclosure. FIGS. 10 and 11 are simplified diagrams which do not show all of the features of portion 400 and portion 500 in order to keep the illustration from being cluttered.

In FIG. 10, other features illustrated include interconnects 441 and 442 connected to vias/contacts 443 and 444 within an interconnect dielectric region 440. A passivation layer 450 has been formed overlying portion 400. The conductive gate structure 425 may include a gate stack comprising a dielectric layer (not shown), in addition to the epitaxial layer 426, which may be a silicide. In FIG.
10, deep source/drain regions 428 in the substrate 410, along with silicided epitaxial layer 430 and non-silicided epitaxial layer 424, are shown integrated into a transistor. In FIG. 10, one of the plurality of spacers utilized to form the offset features, e.g., deep S/D 428 or silicide layer 430, has been removed during subsequent fabrication of the device at location 400.

In FIG. 11, other illustrated features include interconnects 541 and 542 connected to vias/contacts 543 and 544 within an interconnect dielectric region 540. As with the manufacturing process of FIG. 10, a passivation layer 550 has been formed overlying portion 500, and the conductive gate structure 525 may include a gate stack comprising a dielectric layer (not shown), in addition to the epitaxial layer 526. In FIG. 11, deep source/drain regions 528 in the substrate 510, along with silicided epitaxial layer 530 and non-silicided epitaxial layer 524, are shown integrated into a transistor. The principal differences between the illustrations of FIG. 10 and FIG. 11 are that the second spacer 423 served to define the offset of the edge of the deep S/D region 428 in portion 400, while the third spacer 521 served to define the offset of the edge of the deep S/D region 528 in portion 500. The flexibility provided by the present disclosure permits independent control in the placement of deep S/D regions 428 and 528 and silicide regions 430 and 530 with respect to the gate structures 425 and 525.

FIG. 12 is a flow diagram of a method for forming a semiconductor device according to the present disclosure. At step 701, a determination is made as to the desired amount of offset for a deep source/drain implant from a gate structure. At step 702, a determination of a desired offset for a silicide layer from the deep S/D region is made. At step 703, a determination of a desired offset placement of the raised S/D region with respect to the poly gate and the deep implant and silicide regions is made.
These determinations are part of an integration scheme to consider a plurality of sidewall spacers with spacer widths and implantation intervals integrated into a process line to produce a desired outcome. At step 704, information is provided to a manufacturing facility to obtain devices based on the results of these determinations.

It will be appreciated that the above disclosure can be implemented using a variety of techniques. For example, it will be appreciated that any number of substrate preclean steps can occur before the formation of any epitaxial layer. For example, United States Patent Application having serial number 10/791,346, which is hereby incorporated in its entirety by reference, discloses several substrate preclean techniques appropriate for cleaning a substrate prior to forming an epitaxial layer.

In one example, contaminants on the surface of a substrate are subjected to a cleaning process comprising applying a plasma to a surface of the active regions to produce a reduction reaction with the contaminants in an upper portion of the surface of the active regions. In an embodiment, the plasma comprises H2. While the plasma is being applied to the upper portion of the exposed active regions, the resultant products or vapor byproducts of the reduction reaction are removed by the normal vacuum process within the chamber. Therefore, contaminants contained in the vapor byproducts are vented away, leaving the upper portion of the surface of the active regions suitably clean for the ensuing epitaxial process. In one embodiment, the plasma process parameters comprise a gas flow of 450 sccm H2 and 300 sccm argon, at a chamber temperature of 400 degrees Celsius, with a high frequency (HF) power setting of 700 W, and a low frequency (LF) power setting of between approximately 50 to 100 W. Chamber pressure is 1 Torr, and the spacing between the surface of the active region and the faceplate of the tool (not shown) should be 300 mils.
In other embodiments, plasma process parameters comprise a gas flow ranging from between 100-800 sccm H2 and from between 100 and 600 sccm argon. Chamber temperatures can range between 300 to 450 degrees Celsius, with HF power settings from between 400-900 W, and LF power settings varying from between 0-150 W. Chamber pressures can range from between 1 mTorr-5 Torr, with spacing between the surface of the active region and the faceplate of the tool varying from between 200 to 400 mils. Exposure times for the various embodiments utilizing plasma range from between approximately 10 seconds up to approximately 120 seconds.

Various tool types are suitable for this cleaning, for example, CVD (Chemical Vapor Deposition) equipment, HDP (High Density Plasma) tools, etch chambers, or the like. Differences in chamber design, power settings, and species, e.g., H2 with or without helium or nitrogen, will result in different thicknesses of the layer after anneal. Typically the layer after anneal will be between 20 and 50 Angstroms thick. This plasma cleaning process also results in passivation of Si-H bonds in the layer after anneal. No wet cleaning dip with hydrofluoric (HF) acid prior to SEG is necessary.

In addition to no longer requiring an HF dip prior to SEG, the reduced temperature of this H2 plasma cleaning treatment results in a reduction of the SEG process thermal budget of more than 100 degrees Celsius. Typical pre-SEG cleaning processes are conducted at approximately 900 degrees Celsius or greater. In an embodiment of the present disclosure, the cleaning process occurs at less than approximately 800 degrees Celsius. In another embodiment, the cleaning process occurs at approximately 500 degrees Celsius or less.
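The H2-plasma preclean windows above can be captured as a simple recipe-range check. This is a sketch only: the parameter ranges are transcribed from the description, but the dictionary keys and helper function are my own naming, not part of the disclosure.

```python
# H2-plasma preclean parameter windows as described above.
# Keys and helper are illustrative; ranges come from the text.
H2_PLASMA_WINDOWS = {
    "h2_sccm": (100, 800),       # H2 gas flow
    "ar_sccm": (100, 600),       # argon gas flow
    "temp_c": (300, 450),        # chamber temperature
    "hf_w": (400, 900),          # high-frequency power
    "lf_w": (0, 150),            # low-frequency power
    "spacing_mils": (200, 400),  # active-region-to-faceplate spacing
    "time_s": (10, 120),         # plasma exposure time
}

def out_of_window(recipe, windows):
    """Return the parameters that fall outside their allowed window."""
    return [k for k, (lo, hi) in windows.items()
            if not (lo <= recipe.get(k, lo) <= hi)]

# The single-point embodiment quoted in the text sits inside every window.
nominal = {"h2_sccm": 450, "ar_sccm": 300, "temp_c": 400,
           "hf_w": 700, "lf_w": 75, "spacing_mils": 300, "time_s": 60}
print(out_of_window(nominal, H2_PLASMA_WINDOWS))  # [] -> recipe is in range
```

A recipe with, say, a 500-degree chamber temperature would be flagged as out of window by the same check.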
In addition, the cleaning processes of the present disclosure could be conducted at approximately 700 degrees Celsius or less, or even at approximately 600 degrees Celsius or less.

In another embodiment, a location including a gate structure and active regions is subjected to a cleaning process utilizing a low-power dry etch to selectively remove an upper atomic layer of material from the active regions. The thickness of the upper atomic layer of material to be removed ranges from between 20 to about 50 Angstroms. In one embodiment, the dry etch process is an anisotropic dry etch utilizing a carbon-free gas as an etchant gas. In another embodiment, the anisotropic dry etch utilizes an oxygen- and carbon-free gas as an etchant gas. The etchant gas can comprise HBr, NF3, SF6, gaseous fluorine interhalogenics such as ClF3, or any gas containing fluorine, suitable to dissociate F-radicals, which does not contain oxygen and carbon. Prior to undergoing the anisotropic dry etch process, location 200 is subjected to a standard wet etch chemistry process utilizing a dilute HF solution (100:1) at room temperature, e.g., 20 to 26 degrees Celsius, for a time period ranging from 50 to 200 seconds. Following the HF clean, a low-power dry etch utilizing a temperature of approximately 400 degrees Celsius, RF power of approximately 375 W, pressure of approximately 150 mTorr, and a gas flow rate ranging from 50 to 100 sccm, is conducted. In other embodiments, the low-power dry etch utilizes a temperature ranging from between 300-500 degrees Celsius, with RF power ranging from between 200-700 W, a pressure ranging between 0-1 Torr, and a gas flow rate ranging from between 10-300 sccm, for a time ranging between 10 to 60 seconds.

This low-power dry etch removes carbon and oxygen contamination, and provides a very clean surface for SEG. The low temperature HF clean followed by the low-power dry etch does not require a high temperature bake.
This results in a reduction of thermal budget for SEG of more than 100 degrees Celsius.

In another embodiment, a cleaning process is used that forms an oxidation layer of between 20 to 50 Angstroms on an upper surface of the active regions using a plasma to produce the oxidation layer on doped active regions. In an embodiment, the plasma is an O2 plasma. In another embodiment, the plasma is an O3 plasma.

In one embodiment, O2 plasma production utilizes O2 gas at a flow rate of 400 sccm, a pressure of 5 Torr, an HF of 300 W, an LF of 100 W, and a temperature of 400 degrees Celsius, with the time ranging from between about 10 to about 120 seconds. The spacing between the surface of the active regions and the faceplate of the vapor deposition apparatus (not shown) should be 400 mils. In other embodiments, the plasma production utilizes O2 gas at a flow rate of between 100 and 1000 sccm, a pressure ranging from between 2-10 Torr, an HF ranging between 200-500 W, an LF ranging between 50-200 W, and a temperature ranging between 300-450 degrees Celsius, for a time ranging from between approximately 10 to approximately 120 seconds. In an embodiment, the spacing between the surface of the active regions and the faceplate of the vapor deposition apparatus ranges from between 200 and 600 mils. The tool type used to generate the plasma could be CVD equipment, HDP tools, or etch chambers. In an embodiment where the plasma is O3, plasma production utilizes O3 gas at a flow rate of 300 sccm, a pressure of 5 Torr, an HF of 300 W, an LF of 100 W, and a temperature of 400 degrees Celsius for a time period ranging from between 10 to 120 seconds. The spacing between the surface of the active regions and the faceplate of the vapor deposition apparatus (not shown) should be 400 mils.
In other embodiments, plasma production utilizes O3 gas at a flow rate of between 50 and 600 sccm, a pressure ranging from between 2-10 Torr, an HF ranging between 200-500 W, an LF ranging between 50-200 W, and a temperature ranging from between 300-450 degrees Celsius for a time period ranging from between about 10 to about 120 seconds. In an embodiment, the spacing between the surface of the active regions and the faceplate of the vapor deposition apparatus ranges from between 200 and 600 mils. As was the case with the O2 plasma, the tool type used to generate the plasma could be HDP tools, CVD equipment, or etch chambers.

Forming the oxidation layer facilitates trapping or fixing contamination in the oxide layer overlying the upper layer of the doped active regions for subsequent removal using a wet chemistry process. The wet etch chemistry process utilizes a dilute HF acid solution of 100:1 at room temperature, e.g., 20 to 26 degrees Celsius, for a time ranging from 50 to 200 seconds. Differences in chamber design, power settings, and species employed, e.g., O2 or O3, result in differing thicknesses of the oxidation layer, hence the wide range in times for the HF dip. The use of an O2 or O3 plasma to create a contamination-trapping oxidation layer for removal by a room temperature HF dip results in a reduction of the thermal input for location 300.

One possible pre-clean for use prior to formation of an SEG layer includes a reduced temperature H2 bake that is performed following formation of any desired spacers, which can comprise one or more nitride or oxide layers, and prior to SEG formation. This pre-clean comprises a first pre-rinse with deionized water, followed by an oxide etch utilizing an aqueous solution of deionized water and hydrofluoric acid (HF, or hydrogen fluoride in water) of approximately 30:1 (volumetric ratio) at 21 degrees Celsius, for a time period ranging from between 50-60 seconds.
The weight percentage of HF recommended for the HF aqueous solution is 49% in a balance of deionized water (H2O). Bulk HF aqueous solution can be purchased from various chemical suppliers in the HF weight percent range of 10% to 49%. In semiconductor fabrication facilities, this HF aqueous solution is typically diluted in the range 10:1 to 200:1. A 10:1 HF is 1 part aqueous HF (at 49% weight percent) and 10 parts H2O. It will be appreciated that the etch rate of the HF aqueous solution is substantially linear with respect to both the concentration of the HF aqueous solution and the etch time. Therefore, various combinations of HF concentrations and etch times can be used to accomplish the oxide etch. Additionally, the temperature may vary.

After the HF etch, an overflow rinse utilizing deionized water is performed for a period ranging from approximately 120 to 600 seconds, with a typical rinse being about 400 seconds. The cleaning process of portion 100 results in etching away of the surface contamination/debris located on substrate 10 resulting from offset spacer formation and/or dopant implantation. The upper semiconductor surface, i.e., silicon surface, of substrate 10 is also slightly etched, for example, from one to several monolayers of silicon, during the HF etch.

It should be noted that the amount of material removed during the HF etch is dependent upon the type of material being removed. For example, when native oxide is present, the HF etch will remove approximately 20 to 30 Angstroms of oxide. If a deposited oxide layer is present in addition to a native oxide, an over-etch of approximately 30% is generally desirable. For example, if removal of 100 Angstroms of a chemical vapor deposition (CVD) oxide is desired, the HF etch could be employed to remove approximately 120 to 130 Angstroms of oxide.
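The linear etch model and the 30% over-etch rule above lend themselves to a short worked sketch. The rate constant below is hypothetical (chosen only to illustrate the linearity); the 30% over-etch factor and the 100-to-130 Angstrom example are from the text.

```python
# Sketch of the linear HF etch model implied above: oxide removal scales
# linearly with HF concentration and with etch time. The rate constant k
# is a hypothetical placeholder, not a value from the disclosure.

def oxide_removed(conc_ratio, time_s, k=1.0):
    """Angstroms removed; conc_ratio is parts HF per total parts of solution."""
    return k * conc_ratio * time_s

def overetch_target(nominal_angstroms, overetch=0.30):
    """Apply the ~30% over-etch recommended when deposited oxide sits on native oxide."""
    return nominal_angstroms * (1 + overetch)

# Linearity: doubling concentration at fixed time doubles the removal.
assert oxide_removed(2 / 30, 60) == 2 * oxide_removed(1 / 30, 60)

print(overetch_target(100))  # 130.0 -> consistent with the 120-130 A example
```

Because the model is linear, a 200:1 dilution run twice as long removes the same oxide as a 100:1 dilution at the nominal time, which is the trade-off the text describes.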
This latter example would be applicable in applications where a liner oxide of approximately 100 Angstroms thickness is employed between a conductive gate 25 and a nitride spacer.

In the next steps of the cleaning process, a second pre-rinse with deionized water of approximately 30 seconds duration precedes the performance of a Standard Clean-1 (SC-1), a quick dry rinse (QDR), and a Standard Clean-2 (SC-2). The SC-1 and SC-2 components are followed by a second QDR, an HF:H2O etch, a third rinse, and an isopropyl alcohol (IPA) dry. The SC-1 and SC-2 components are implemented such that they etch from approximately one monolayer of silicon to approximately 10 to 100 Angstroms of silicon.

In an embodiment, the SC-1 utilizes an aqueous solution of ammonium hydroxide:hydrogen peroxide:deionized water at a ratio of approximately 1:1-4:6-40, at a temperature of approximately 60 degrees Celsius for approximately 72 minutes, to etch approximately 100 Angstroms of silicon. Synonyms for ammonium hydroxide (NH4OH) include ammonia solution (typically containing between 12% and 44% ammonia before dilution), dilute ammonia, or concentrated ammonia. A first quick dry rinse is conducted for approximately 3 minutes. In an embodiment, the SC-2 utilizes a solution of hydrochloric acid:hydrogen peroxide:deionized water at an initial ratio of approximately 1:1:50 at a temperature of approximately 60 degrees Celsius for about 5 minutes. A second quick dry rinse is then conducted. Synonyms for hydrochloric acid (HCl) are hydrogen chloride, anhydrous hydrogen chloride, aqueous hydrogen chloride, chlorohydric acid, spirit of salts, and muriatic acid.

In a particular embodiment, the SC-1 utilizes a solution of ammonium hydroxide:hydrogen peroxide:deionized water at a ratio of approximately 1:4:20 at a temperature of approximately 60 degrees Celsius for approximately 72 minutes.
The SC-1 is the step in the clean sequence that etches the silicon. This occurs because the H2O2 (the oxidizer) becomes depleted in the solution with increasing time and increasing temperature. The methods of the present disclosure allow the initial concentration of hydrogen peroxide to be depleted to facilitate etching of the upper-most semiconductor portion. Depletion of the H2O2 is greatly enhanced when the solution temperature rises above 80 degrees Celsius, which can lead to an etch that is difficult to control if not carefully monitored. The temperature range of the SC-1 is expected to be approximately 55 to 85 degrees Celsius, with the etch occurring in a shorter period of time at higher temperatures than at lower temperatures. It is expected that the SC-1 etching will be better controlled at temperatures in the range of 55-80 degrees Celsius, and better still at temperatures in the range of 55-75 degrees Celsius. Generally, it is expected that the substrate will be exposed to the SC-1 etch process for longer than 60 minutes. When the oxidizer stops protecting the silicon surface, the ammonium hydroxide (NH4OH) starts to etch the silicon. Thus, a small amount of silicon can be etched in a controlled manner. The SC-1 can be performed in a re-usable bath where the solution is re-circulated and heated to maintain the desired temperature.

The mechanism of silicon and SiO2 etching by an NH4OH/H2O2 solution occurs when the solution is allowed to be depleted of H2O2. An alkaline solution, such as NH4OH in our example, will attack silicon by water molecules, according to the reaction:

Si + 2H2O + 2OH- -> Si(OH)2(O-)2 + 2H2 (gas)

A passivation layer formed by the H2O2 prevents this attack by the NH4OH. The H2O2 decomposes over time to form O2 and H2O. When the concentration of H2O2 is below 3x10^-3 M, silicon will begin to etch, because of the absence of the inhibition layer. As indicated in the above reactions, heat is given off as the H2O2 is depleted.
If a bath is used that is not recharged with fresh solution, all H2O2 will be depleted, thereby no longer releasing heat. Therefore, the temperature can be monitored on the low end to indicate when the solution should be refreshed, while the temperature on the high end is monitored to prevent unusually rapid decomposition of the H2O2, which can lead to a process that is difficult to control.

The first quick dry rinse is conducted for approximately 3 minutes. The subsequent SC-2 utilizes a solution of hydrochloric acid:hydrogen peroxide:deionized water at a ratio of approximately 1:1:50 at a temperature of approximately 60 degrees Celsius for about 5 minutes. A quick dry rinse with deionized water, followed by an IPA dry process, is performed following the SC-2. The IPA dry process uses a heated IPA vapor at approximately 82 degrees Celsius. The IPA vapor is generated in a separate chamber with 100% N2 bubbled through 100% IPA (heated to 82 degrees Celsius). The IPA condenses on the wafer, and the solution drips off the bottom of the wafer. The IPA vapor concentration is slowly diluted to 100% N2 before the wafers are removed from the rinsing/drying tank.

Subsequent to the SC-1 and SC-2 processes, the substrate will be further recessed (etched) as a result of the cleaning process. Next, an HF:H2O etch can be conducted at an aqueous solution ratio of 200:1 for about 65 seconds, which typically results in approximately 30 Angstroms of oxide removal. The HF:H2O etch is followed by a rinse with deionized water of approximately 10 minutes duration. The deionized water rinse is followed by an IPA dry as described in the preceding paragraph. At this time, the source/drain regions of the substrate are ready for ion implantation or selective epitaxial growth.

In a particular embodiment, the SC-1 process comprises a pre-rinse with deionized water of approximately 30 seconds duration.
The pre-rinse is followed by an SC-1 solution at a ratio of approximately 1:1-4:6-40, which includes the subranges of 0.25:1:5, 0.5:1:5, 1:1:5, 1:1:6, 1:4:20, and 1:1:40, ammonium hydroxide:hydrogen peroxide:deionized water, at a temperature of approximately 60 degrees Celsius for approximately 5 minutes. A quick dry rinse (QDR) is then performed for approximately 3 minutes.

Following the SC-1 cleaning process, an SC-2 cleaning process is performed. In an embodiment, the SC-2 cleaning process includes utilizing an aqueous solution of hydrochloric acid:hydrogen peroxide:deionized water at a ratio of approximately 1:1:50 at a temperature of approximately 60 degrees Celsius for approximately 5 minutes. A QDR is then performed, and portion 200 is ready for the third cleaning. The weight percent composition of the hydrochloric acid:hydrogen peroxide:deionized water is 29% (weight percent) hydrochloric acid and 30% (weight percent) hydrogen peroxide in a balance of deionized water.

After the SC-1 and SC-2, a third cleaning process comprising an approximately 30 second pre-rinse, an oxide etch, an overflow rinse, and an IPA dry is performed. The oxide etch is accomplished utilizing a solution of deionized water and hydrofluoric acid at a ratio of approximately 200:1 for a time period ranging from between 450-650 seconds. Following the HF etch, an overflow rinse is performed for approximately 10 minutes. A final isopropyl alcohol (IPA) dry is then performed. Approximately 120-140 Angstroms of the surface of substrate 20 is removed in this process. Portion 200 is then ready to undergo selective epitaxial growth.

The above-described cleaning process has been found to facilitate formation of an epitaxial layer on a semiconductor surface, specifically silicon.
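The pre-SEG wet-clean sequence described above can be summarized as an ordered recipe. The times and chemistries are transcribed from the text; the data structure itself is purely an illustrative encoding, not part of the disclosure.

```python
# Pre-SEG clean sequence as described above (illustrative encoding only;
# chemistries and durations transcribed from the text).
CLEAN_SEQUENCE = [
    ("pre-rinse", "deionized water", "~30 s"),
    ("SC-1", "NH4OH:H2O2:H2O ~1:1-4:6-40 at ~60 C", "~5 min"),
    ("QDR", "quick dry rinse", "~3 min"),
    ("SC-2", "HCl:H2O2:H2O ~1:1:50 at ~60 C", "~5 min"),
    ("QDR", "quick dry rinse", None),
    ("pre-rinse", "deionized water", "~30 s"),
    ("oxide etch", "HF:H2O ~200:1", "450-650 s"),
    ("overflow rinse", "deionized water", "~10 min"),
    ("IPA dry", "heated IPA vapor at ~82 C", None),
]

for step, chemistry, duration in CLEAN_SEQUENCE:
    suffix = f" ({duration})" if duration else ""
    print(f"{step:>14}: {chemistry}{suffix}")
```

Encoding the recipe this way makes the ordering constraint explicit: SC-1 precedes SC-2, and the HF oxide etch and IPA dry close the sequence before epitaxial growth.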
Because various etch processes can etch N- and P-type regions at different rates, it can be useful to amorphize an upper-most surface of the source/drain regions prior to the above-described clean to reduce any preferential etch differences between substrate regions of differing dopant types. For example, the above-described clean process can etch the N-type silicon preferentially, as compared to the P-type silicon, resulting in a quality difference of the SEG between the N and P regions after SEG processing. Etch rate differences between N- and P-type regions can allow for contaminants to remain in the lesser-etched region. For example, an etch process that does not etch P-type regions at the same rate as N-type regions can result in P-regions maintaining embedded carbon that is incorporated from previous process steps. Without appropriate etching of silicon in the P-type regions during the clean, the carbon will remain, and the SEG will grow inconsistently. A high bake temperature of 900 degrees Celsius can be used to overcome this growth issue on P areas; however, as stated previously, high bake temperatures can be detrimental to the device in that they cause diffusion and deactivation of the dopants. Amorphizing the source/drain regions can reduce etch differences associated with the above-described cleaning process, as well as other processes that are used to etch doped substrate regions, thereby improving the quality of both the N and P regions.

It has been observed that the selective etching may be P-type over N-type, or N-type over P-type, depending on the solution temperature, flow rate of the aqueous ammonia, concentration of the aqueous ammonia, agitation, or illumination of light.
By amorphizing the silicon in this manner to a pre-defined depth, it has been observed that unbiased etching to the depth of the amorphized silicon can be achieved. In one embodiment, N- and P-type extensions formed in the source/drain regions are amorphized by being implanted with Xe, at a dose of 2E14 and an energy of 10 keV, to create an amorphous depth of 100 Angstroms.

In accordance with another embodiment, a spacer structure having an undercut can be used to reduce or inhibit facet formation during a selective epitaxial growth process. Such a process can allow for greater lateral uniformity of junction or silicide features during implantation or silicidation processes, and can be accomplished by using a spacer formed with a bi-layer of materials, e.g., a thin liner, such as portion 29 of FIG. 1, of one material underlying another layer of material from which the 'main' spacer is formed. The thin liner and other material layer are selected such that the two materials are selectively etchable with respect to each other, for example, a thin oxide liner and a nitride layer. By etching the underlying portion of the spacer, an undercut can be formed that reduces facets during epitaxial formation.

In another embodiment, a method of germanium-content engineering can be used during a selective epitaxial growth (SEG) process to form raised source/drain regions, such that the germanium content is engineered to facilitate subsequent cobalt silicidation, or for nickel silicide processes. For example, United States Patent Application having serial number 10/969,774 (Attorney Docket Number 1458-H1955), which is hereby incorporated in its entirety by reference, discloses such a technique.

The SEG formation process commences with a germanium content on the order of between approximately 3-5% to ensure good growth conditions at high growth rates for both N- and PMOS.
The germanium content is reduced during growth of the upper portion of the SEG layer (raised source/drain region) to provide a good substrate for subsequent cobalt silicidation. Thus the raised source/drain region comprises a first portion nearest the semiconductor substrate having a Ge content greater than a second portion of the raised source/drain furthest from the substrate. Due to the reduction in germanium during growth of the upper portion of the SEG layer, the germanium-to-silicon ratio in the first portion closest to the substrate will be different from (typically greater than) the germanium-to-silicon ratio in the uppermost portion of the SEG layer.

This method permits increased throughput for SEG at reduced thermal budget, as well as the ability to continue using cobalt rather than nickel for the silicide layer. Using a graded SEG as described allows for a self-limiting or self-stopping cobalt silicidation process, due to the higher conversion temperatures required to create cobalt silicide in the presence of germanium. Utilizing these methods during the manufacture of CMOS devices results in reduced junction leakage. A reduction in junction leakage results in improved device performance.

In one embodiment of a dopant profile, a portion of SEG 38 closest to an underlying substrate will typically have a Ge-to-Si ratio in the range from 15% to 35%, while the portion of SEG 38 furthest from the substrate 10 typically has a reduced Ge-to-Si ratio in the range from 0% to 2.5%. Note that listed percentages are atomic percentages unless otherwise stated.

Following the formation of the Ge gradient in the source/drain regions, a silicidation process is carried out to form a silicide layer overlying a portion of the raised source/drain region. In an embodiment, the silicide layer comprises cobalt disilicide (CoSi2). In an embodiment, the cobalt disilicide is formed at a temperature ranging from 600 to 800 degrees Celsius.
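The graded Ge profile described above can be sketched numerically. Note the assumptions: the disclosure gives only the endpoint ranges (15-35% near the substrate, 0-2.5% at the top), so the linear interpolation and the specific endpoint values below are my own illustration, not the disclosed profile shape.

```python
# Graded Ge-to-Si atomic ratio through the SEG layer thickness.
# Linear grading is an assumption; the disclosure specifies only endpoint ranges.

def ge_ratio(depth_fraction, bottom=0.25, top=0.01):
    """depth_fraction: 0.0 at the substrate interface, 1.0 at the top surface.
    Defaults sit inside the 15-35% (bottom) and 0-2.5% (top) ranges above."""
    return bottom + (top - bottom) * depth_fraction

profile = [round(ge_ratio(f), 3) for f in (0.0, 0.5, 1.0)]
print(profile)  # [0.25, 0.13, 0.01]
```

The low-Ge top surface is what makes the subsequent cobalt silicidation self-limiting: the reaction stalls once it reaches the Ge-rich lower portion, which requires higher conversion temperatures.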
Formation of the cobalt disilicide further comprises depositing a cobalt metal, performing a first anneal at a temperature ranging from 450 to 550 degrees Celsius, performing a wet strip with a sulfuric peroxide mixture followed by a wet strip with an ammonium peroxide mixture, and performing a second anneal at a temperature ranging from 600 to 800 degrees Celsius.

In another embodiment, the silicide layer comprises nickel silicide (NiSi). In an embodiment, the nickel silicide is formed at a temperature ranging from 350 to 500 degrees Celsius. The nickel silicide is formed by depositing a nickel metal and performing an anneal at a temperature ranging from 350 to 500 degrees Celsius. Following the anneal, a wet strip with a sulfuric peroxide mixture is performed, followed by a wet strip with an ammonium peroxide mixture. It should be noted that more than one anneal may be utilized to form the nickel silicide, e.g., a two-step anneal process such as a first anneal at a temperature ranging from 300 to 400 degrees Celsius following nickel metal deposition, performing the wet strips, then performing a second anneal at a temperature ranging from approximately 400 to 500 degrees Celsius.

The method and apparatus herein provide for a flexible implementation. Although described using certain specific examples, it will be apparent to those skilled in the art that the examples are illustrative, and that many variations exist. For example, the disclosure is discussed herein primarily with regard to independent control of the placement of a silicide and the amount of offset of a source/drain region from a gate structure for a CMOS device; however, the invention can be employed with other device technologies to create deep source/drain offsets and determine silicide location during device manufacture. Additionally, various types of deposition and etch devices are currently available which could be suitable for use in employing the method as taught herein.
Note also that although an embodiment of the present invention has been shown and described in detail herein, along with certain variants thereof, many other varied embodiments that incorporate the teachings of the invention may be easily constructed by those skilled in the art. Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. Accordingly, the present invention is not intended to be limited to the specific forms set forth herein; on the contrary, it is intended to cover such alternatives, modifications, and equivalents as can be reasonably included within the spirit and scope of the invention.
The present disclosure is directed to systems and methods of conductively coupling a plurality of relatively physically small IP core dies to a relatively physically larger base die using an electrical mesh network that is formed in whole or in part in, on, across, or about all or a portion of the base die. Electrical mesh networks beneficially permit the positioning of the IP cores in close proximity to support circuitry carried by the base die. The minimal separation between the IP core circuitry and the support circuitry advantageously improves communication bandwidth while reducing power consumption. Each of the IP cores may include functionally dedicated circuitry such as processor core circuitry, field programmable logic, memory, or graphics processing circuitry. The use of IP core dies beneficially and advantageously permits the use of a wide variety of IP cores, each having a common or similar interface to the electrical mesh network.
1. A semiconductor package comprising:
a first semiconductor die having an upper surface and a lower surface, the first semiconductor die including an input/output circuit;
a first electrical grid disposed on the upper surface of the first semiconductor die and electrically coupled to circuitry included in the first semiconductor die, the first electrical grid including:
a first plurality of conductors, wherein each of the first plurality of conductors is disposed on the upper surface of the first semiconductor die and spaced apart from the remaining first plurality of conductors; and
a second plurality of conductors, wherein: each of the second plurality of conductors is disposed on the upper surface of the first semiconductor die and spaced apart from the remaining second plurality of conductors; and each of the second plurality of conductors intersects and is electrically coupled to at least one of the first plurality of conductors; and
a plurality of second semiconductor dies, each of the plurality of second semiconductor dies comprising a processor core circuit, each of the second semiconductor dies being electrically coupled to a node formed by the intersection of one of the first plurality of conductors with one of the second plurality of conductors.

2. The semiconductor package of claim 1 wherein each of said first plurality of conductors is disposed perpendicular to at least one of said second plurality of conductors.

3. The semiconductor package of claim 1 wherein each of said first plurality of conductors is disposed perpendicular to each of said second plurality of conductors.

4. The semiconductor package of claim 1 wherein each of said first plurality of conductors intersects and is electrically coupled to each of said second plurality of conductors.

5. The semiconductor package of claim 1 wherein said first semiconductor die comprises a plurality of through-silicon vias (TSVs), said TSVs electrically coupling at least one of said first electrical grid and said input/output circuit to a contact pad on the lower surface of the first semiconductor die.

6. The semiconductor package of claim 1 wherein said first semiconductor die further comprises at least one transistor disposed proximate said upper surface, said at least one transistor being electrically coupled to said first electrical grid.

7. The semiconductor package of claim 1: wherein each of the second semiconductor dies includes an upper surface and a lower surface; and wherein each of at least some of the second semiconductor dies includes at least one transistor disposed proximate the lower surface of the respective second semiconductor die.

8. The semiconductor package of claim 1 wherein each of said first plurality of conductors comprises a plurality of conductors patterned on said upper surface of said first semiconductor die.

9. The semiconductor package of claim 1 wherein each of said second plurality of conductors comprises a plurality of conductors patterned on said upper surface of said first semiconductor die.

10. The semiconductor package of claim 1 wherein said circuitry included in said first semiconductor die comprises at least one of: a voltage regulator circuit, a controller circuit, and a memory circuit.

11. The semiconductor package of any of claims 1 to 10, wherein the circuitry included in the first semiconductor die comprises a voltage regulator circuit, the voltage regulator circuit being electrically coupled to the processor core circuit included in at least one of the plurality of second semiconductor dies.

12. A method comprising:
forming a first plurality of conductors on an upper surface of a first semiconductor die; and
forming a second plurality of conductors on the upper surface of the first semiconductor die, wherein:
each of the first plurality of conductors is disposed on the upper surface and spaced apart from the remaining first plurality of conductors;
each of the second plurality of conductors is spaced apart from the remaining second plurality of conductors; and
each of the first plurality of conductors intersects and is electrically coupled to at least one of the second plurality of conductors to form a first electrical grid, the first electrical grid being conductively coupled to circuitry included in the first semiconductor die; and
electrically coupling each of a plurality of second semiconductor dies to a node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors.

13. The method of claim 12 wherein forming the second plurality of conductors on the upper surface of the first semiconductor die further comprises: forming the second plurality of conductors on the upper surface of the first semiconductor die such that each of the second plurality of conductors is disposed perpendicular to at least one of the first plurality of conductors.

14. The method of claim 12 wherein forming the second plurality of conductors on the upper surface of the first semiconductor die further comprises: forming the second plurality of conductors on the upper surface of the first semiconductor die such that each of the second plurality of conductors is disposed perpendicular to each of the first plurality of conductors.

15. The method of claim 12 wherein forming the second plurality of conductors on the upper surface of the first semiconductor die further comprises: forming the second plurality of conductors on the upper surface of the first semiconductor die such that each of the second plurality of conductors intersects and is electrically coupled to each of the first plurality of conductors.

16. The method of claim 12 further comprising: forming a plurality of through-silicon vias (TSVs) in the first semiconductor die, the TSVs electrically coupling at least one of the first electrical grid and the input/output circuit to contact pads on the lower surface of the first semiconductor die.

17. The method of claim 12 further comprising: forming at least one transistor proximate the upper surface of the first semiconductor die; and conductively coupling the at least one transistor to the first electrical grid.

18. The method of claim 12 further comprising: forming at least one transistor proximate a lower surface of at least some of the plurality of second semiconductor dies; and electrically coupling each of the at least one transistor proximate the lower surface of at least some of the plurality of second semiconductor dies to the first electrical grid.

19. The method of claim 12 wherein forming the first plurality of conductors on the upper surface of the first semiconductor die further comprises: patterning each of the first plurality of conductors on the upper surface of the first semiconductor die.

20. The method of claim 12 wherein forming the second plurality of conductors on the upper surface of the first semiconductor die further comprises: patterning each of the second plurality of conductors on the upper surface of the first semiconductor die.

21. The method of claim 12 further comprising: forming, in the first semiconductor die, at least one of: an input/output (I/O) circuit, a voltage regulator circuit, a controller circuit, and a memory circuit.

22. The method of claim 12 further comprising: forming an input/output circuit in the first semiconductor die; and conductively coupling the input/output circuit in the first semiconductor die, via the first electrical grid, to the processor core circuit included in at least one of the plurality of second semiconductor dies.

23. A system comprising:
means for forming a first plurality of conductors on an upper surface of a first semiconductor die;
means for forming a second plurality of conductors on the upper surface of the first semiconductor die, wherein:
each of the first plurality of conductors is disposed on the upper surface and spaced apart from the remaining first plurality of conductors;
each of the second plurality of conductors is spaced apart from the remaining second plurality of conductors; and
each of the first plurality of conductors intersects and is electrically coupled to at least one of the second plurality of conductors to form a first electrical grid, the first electrical grid being conductively coupled to circuitry included in the first semiconductor die; and
means for electrically coupling each of a plurality of second semiconductor dies to a node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors.

24. The system of claim 23 wherein the means for forming the second plurality of conductors on the upper surface of the first semiconductor die further comprises: means for forming the second plurality of conductors on the upper surface of the first semiconductor die such that each of the second plurality of conductors is disposed perpendicular to at least one of the first plurality of conductors.

25. The system of claim 23 wherein the means for forming the second plurality of conductors on the upper surface of the first semiconductor die further comprises: means for forming the second plurality of conductors on the upper surface of the first semiconductor die such that each of the second plurality of conductors is disposed perpendicular to each of the first plurality of conductors.
Distributed semiconductor die and package architecture

Technical Field

The present disclosure relates to semiconductor packages and die structures.

Background

Next-generation computing devices, programmable logic devices (FPGAs), graphics processing units, and data centers tend toward systems with greater computing power, operational flexibility, and improved power efficiency. The combination of requirements posed by next-generation data centers and computing devices presents a significant challenge to current general-purpose servers. Demands for reduced system complexity, business agility, and scalability have increased the need for virtualized data center infrastructure, which will place additional demands on next-generation data servers. To meet these various requirements, next-generation servers can be designed to target a specific workload matrix. However, such task- or service-oriented design compromises the long-term flexibility of these next-generation servers even as it improves power efficiency. As a result, servers used in next-generation data centers must provide cost-effective solutions that address current and future computing needs, provide a flexible platform that meets the needs of evolving operations, and deliver improved power efficiency over legacy servers.

The challenges posed by the growing ubiquity of Internet of Things (IoT) devices are very similar to those presented by next-generation data centers. With billions of connected devices, cloud-based infrastructure must quickly evaluate high-bandwidth data streams and determine which data should be processed and which data can be safely discarded.

Next-generation platforms share several distinct requirements: increased bandwidth; increased flexibility to add functionality; improved power efficiency (or reduced power consumption); and reduced footprint.
To date, designers have been able to address these diverse needs by packaging additional components on standard printed circuit boards. A limitation inherent in such single-board solutions is that the multiple demands placed on next-generation devices are not satisfactorily addressed. Such limitations include: chip-to-chip bandwidth limitations based on interconnect density; power requirements for long-distance traces between chips; and the increased physical size of the printed circuit boards that house the chips. Monolithic integration of system components provides a potential solution, but such integration does not readily permit integration of system components that each evolve at different rates. For example, logic chips built using newer technologies may not be easy to integrate, or amenable to monolithic fabrication, with memory chips built using older technologies.

As a result, conventional solutions are unable to meet the future demands of higher bandwidth, greater power efficiency, increased functionality, and increased operational flexibility, all in a physically smaller package and die architecture.

DRAWINGS

The features and advantages of the various embodiments of the claimed subject matter will become apparent from the following description and drawings.

FIG. 1 is a schematic diagram of an illustrative semiconductor package and die architecture in accordance with at least one embodiment described herein, including an electrical mesh network electrically coupled to each of a plurality of semiconductor intellectual property cores ("IP cores") and electrically coupled to a base die that includes a plurality of support circuits;

FIG. 2 is a partial cross-sectional elevation view of an illustrative semiconductor package and die architecture in accordance with at least one embodiment described herein, the architecture including an electrical mesh network communicatively coupling a plurality of IP cores to a base die;

FIG. 3A is a plan view of an illustrative semiconductor package and die architecture in accordance with at least one embodiment described herein, including an electrical mesh network having a first plurality of conductors and a second plurality of conductors disposed perpendicular to the first plurality of conductors;

FIG. 3B is a cross-sectional elevation view of the illustrative semiconductor package and die structure shown along section line 3B-3B of FIG. 3A, in accordance with at least one embodiment described herein;

FIG. 4 is a schematic diagram of an illustrative processor-based device in accordance with at least one embodiment described herein, including one or more semiconductor packages and die structures having an electrical grid network that conductively couples a plurality of IP cores to a base die as described in FIGS. 1-3;

FIG. 5 is a plan view of an illustrative semiconductor package and die architecture in accordance with at least one embodiment described herein, the architecture including an electrical mesh network in a "ring" configuration, wherein the individual conductors of the first plurality of conductors are positioned end to end to form a closed loop;

FIG. 6 is a plan view of an illustrative semiconductor package and die architecture in accordance with at least one embodiment described herein, including an electrical grid network in a "toroidal" configuration, wherein each of the conductors included in the first plurality of conductors and each of the conductors included in the second plurality of conductors "wraps around" between a portion of the IP cores;

FIG. 7 is a plan view of an illustrative semiconductor package and die architecture in accordance with at least one embodiment described herein, including an electrical mesh network configured as a "star" network, wherein each of the first plurality of conductors electrically couples a respective one of the peripheral IP cores to the central IP core;

FIG. 8 is a floor plan of a base die and of IP cores electrically coupled to respective nodes of a plurality of nodes included in an electrical mesh network disposed on an upper surface of the base die, in accordance with at least one embodiment described herein;

FIG. 9 is a high-level logic flow diagram of an illustrative method for electrically coupling a plurality of IP cores to a base die using an electrical mesh network disposed proximate an upper surface of the base die, in accordance with at least one embodiment described herein;

FIG. 10 is a high-level flow chart of an illustrative method of coupling an electrical mesh network disposed on at least a portion of an upper surface of a base die to one or more conductive structures on a lower surface of the base die, in accordance with at least one embodiment described herein;

FIG. 11 is a high-level flow chart of an illustrative method of forming one or more active components and/or support circuits in a region or portion of a base die proximate an upper surface of the base die, in accordance with at least one embodiment described herein;

FIG. 12 is a high-level flow chart of an illustrative method of forming one or more active components and/or circuits in a region or portion of an IP core proximate a lower surface of the IP core, in accordance with at least one embodiment described herein.

While the following detailed description proceeds with reference to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.

Detailed Description

The systems and methods described herein include an electrical mesh network that couples multiple semiconductor intellectual property cores (hereinafter, singularly an "IP core" or collectively "IP cores") to a single base die that includes circuitry commonly used to support the operation of the IP cores ("support circuits"). For example, the base die can include data storage circuitry, voltage regulation circuitry, and/or input/output (I/O) circuitry that is electrically coupled, via an electrical grid network, to a plurality of IP cores disposed across an upper surface of the base die. This arrangement beneficially and advantageously permits a mix of IP cores that addresses specific needs or functionality while still retaining the choice of a "standard" or "universal" base die configuration.
Example IP cores can include, but are not limited to, semiconductor dies having processor core circuits, graphics processing circuits, field programmable gate array circuits, neural network circuits, quantum computing circuits, and the like.

The electrical mesh network electrically couples the IP cores to the base die, beneficially reducing the physical separation between components and thereby improving bandwidth while reducing transmission power losses. Moreover, this architecture provides the flexibility to accommodate the faster evolution of IP core technology: a newly developed IP core can simply be attached to the base die, which may evolve at a much slower rate. Thus, evolutionary changes in IP core technology are easily combined with the base die without a complete semiconductor package redesign, as would otherwise be required if the IP core circuitry and the support circuitry present in the base die were formed monolithically. For example, patterning an orthogonal electrical mesh network on the upper surface of the base die forms a plurality of "nodes" where the individual conductors forming the electrical mesh network intersect; an IP core can be electrically coupled to some or all of the nodes included in the plurality of nodes. In addition, the failure rate of a semiconductor die increases with the number of components, circuits, and systems incorporated in the die (i.e., the failure rate typically increases with the size and/or complexity of the semiconductor die). Reducing the component count on the IP core beneficially reduces both the physical size of the die and the failure rate.

Where conventional solutions position dies on a two-dimensional circuit board, the systems and methods described herein stack dies in three-dimensional space, thereby reducing footprint, improving communication speed, and reducing power consumption.
More specifically, the systems and methods disclosed herein place each IP core circuit on a relatively small semiconductor die. A plurality of IP cores can be physically, electrically, and communicatively coupled to a relatively large base die that provides common support circuits for use by the plurality of conductively coupled IP core circuits. Example support circuits may include, but are not limited to, voltage regulation circuits, input/output circuits, data storage circuits, and the like.

A semiconductor package and die architecture is provided. The semiconductor package and die (or dies) can include: a base die having an upper surface and a lower surface, the base die including input/output circuitry; an electrical mesh network disposed proximate the upper surface of the base die and electrically coupled to the input/output circuitry included in the base die, the electrical mesh network including a first plurality of conductors, wherein each of the first plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining first plurality of conductors, and a second plurality of conductors, wherein each of the second plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining second plurality of conductors, and each of the second plurality of conductors intersects and is electrically coupled to at least one of the first plurality of conductors; and a plurality of IP cores, each of the plurality of IP cores including a processor core circuit, each of the IP cores being electrically coupled to a node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors.

A semiconductor die and packaging method are provided.
The method can include: forming a first plurality of conductors proximate an upper surface of the base die; forming a second plurality of conductors proximate the upper surface of the base die, wherein each of the first plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining first plurality of conductors, each of the second plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining second plurality of conductors, and each of the first plurality of conductors intersects and is electrically coupled to at least one of the second plurality of conductors to form an electrical mesh network, the electrical mesh network being electrically coupled to circuitry included in the base die; and electrically coupling each of a plurality of IP cores to a corresponding node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors.

An electronic device is provided.
The electronic device can include a printed circuit board and a semiconductor package electrically coupled to the printed circuit board, the semiconductor package including: a base die having an upper surface and a lower surface, the base die including an input/output circuit; an electrical mesh network disposed proximate the upper surface of the base die and electrically coupled to circuitry contained in the base die, the electrical mesh network comprising a first plurality of conductors, wherein each of the first plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining first plurality of conductors, and a second plurality of conductors, wherein each of the second plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining second plurality of conductors, and each of the second plurality of conductors intersects and is electrically coupled to at least one of the first plurality of conductors; and a plurality of IP cores, each of the plurality of IP cores including a processor core circuit and each being conductively coupled to a node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors.

A semiconductor package system is provided.
The semiconductor package system can include: a component for forming a first plurality of conductors proximate an upper surface of the base die; a component for forming a second plurality of conductors proximate the upper surface of the base die, wherein each of the first plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining first plurality of conductors, each of the second plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining second plurality of conductors, and each of the first plurality of conductors intersects and is electrically coupled to at least one of the second plurality of conductors to form an electrical mesh network, the electrical mesh network being electrically coupled to at least the I/O circuit comprised in the base die; and a component for electrically coupling each of a plurality of IP cores to a node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors.

A semiconductor package and die architecture is provided.
The semiconductor package and die structure can include: an electrical mesh network including a first plurality of conductors and a second plurality of conductors, each of the second plurality of conductors intersecting at least one of the first plurality of conductors to form a plurality of network nodes, each of the network nodes being at the intersection of one of the first plurality of conductors and one of the second plurality of conductors; a base die comprising an I/O circuit electrically coupled to at least one of the plurality of nodes; and a plurality of IP cores, each of the plurality of IP cores including a processor core circuit, each of the plurality of IP cores being electrically coupled to a respective node of the plurality of nodes.

The terms "top," "bottom," "upper," "lower," "bottommost," and "uppermost" as used herein are intended to convey a relative rather than absolute physical configuration when used in relation to one or more elements. Thus, an element described as the "uppermost element" or "top element" in a device may instead form the "bottommost element" or "bottom element" in the device when the device is inverted. Similarly, an element described as the "bottommost element" or "bottom element" in a device may instead form the "uppermost element" or "top element" in the device when the device is inverted.

The term "logical association" as used herein, when used in reference to a plurality of objects, systems, or elements, is intended to convey the existence of a relationship between the objects, systems, or elements such that access to one object, system, or element reveals or otherwise exposes the remaining objects, systems, or elements having a "logical association" with the accessed object, system, or element. An example "logical association" exists between relational databases, where access to an element in a first database can provide information and/or data from one or more elements of a plurality of additional databases, each having an identified relationship with the accessed element.
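The relational-database example of a "logical association" can be sketched with two small lookup tables, where accessing a row of one table reveals the associated row of the other. Everything in the snippet (table names, keys, and contents) is invented for illustration:

```python
# Minimal sketch of a "logical association": following a key stored with
# one element exposes the associated element in another table.
ip_cores = {
    "core-A": {"support_circuit_id": "vreg-1"},
    "core-B": {"support_circuit_id": "io-1"},
}
support_circuits = {
    "vreg-1": {"kind": "voltage regulator"},
    "io-1": {"kind": "input/output"},
}

def associated_circuit(core_name):
    """Follow the logical association from an IP core to its support circuit."""
    key = ip_cores[core_name]["support_circuit_id"]
    return support_circuits[key]
```

Accessing "core-A" reveals its associated voltage-regulator entry, just as the text describes access to one element providing data from logically associated elements.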
In another example, if "A" and "B" are logically associated, accessing "A" will reveal or otherwise extract information and/or data from "B," and vice versa.

FIG. 1 is a schematic diagram of an illustrative semiconductor package 100, in accordance with at least one embodiment described herein, that includes an electrical mesh network 110 electrically coupled 160 to each of a plurality of semiconductor intellectual property cores 120A-120n (singularly, an "IP core 120"; collectively, "IP cores 120") and electrically coupled 170 to the base die 130, which includes a plurality of support circuits 140A-140n (collectively, "support circuits 140"). The base die 130 is communicatively coupled 180 to a substrate 150 (e.g., a multilayer printed circuit board, etc.). In an embodiment, the electrical grid network 110 includes a plurality of interconnected conductive pathways or components that couple each of the IP cores 120 to one or more neighboring IP cores 120 to facilitate communication between the IP cores 120. In an embodiment, the interconnected conductive pathways or components that form the electrical mesh network 110 also electrically couple each of the IP cores 120 to the base die 130 to facilitate communication between the IP cores 120 and the support circuits 140. The base die 130 provides a "resource pool" shared by some or all of the IP cores 120. Beneficially, when new IP core technology is introduced, an IP core 120 can be replaced during the manufacturing process without a redesign of the base die 130, reducing manufacturing costs and improving manufacturing flexibility and market responsiveness.

The electrical mesh network 110 includes a first plurality of conductors and a second plurality of conductors disposed at an angle to the first plurality of conductors such that at least one of the second plurality of conductors intersects at least one of the first plurality of conductors.
In some embodiments, the electrical mesh network 110 can include: a first plurality of conductors disposed parallel to one another across the upper surface 132 of the base die 130; and a second plurality of conductors that are parallel to one another and disposed perpendicular to each of the first plurality of conductors. Each intersection of one of the second plurality of conductors with one of the first plurality of conductors defines one of a plurality of nodes on the electrical mesh network 110. In an embodiment, each of the IP cores 120 is electrically coupled to a respective electrical mesh network node. In an embodiment, the electrical mesh network 110 may be deposited, patterned, grown, or otherwise formed in, on, over, or about at least a portion of the upper surface 132 of the base die 130 using any currently available or future-developed material deposition process or method. In some implementations, the electrical mesh network 110 can be formed in a single layer across all or a portion of the base die 130; that is, the first plurality of conductors and the second plurality of conductors can be formed on the same layer of the base die 130 (e.g., on the same metal layer). In some implementations, the electrical mesh network 110 can be formed in multiple layers across all or a portion of the base die 130; that is, each of the first plurality of conductors and/or each of the second plurality of conductors can be formed on two or more different layers of the base die 130 (e.g., adjacent or non-adjacent metal layers).

Each of the semiconductor intellectual property cores ("IP cores") 120 may include, but is not limited to, a reusable unit of logic, cell, or integrated circuit/chip/chiplet layout design.
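As a concrete illustration of how the intersections define nodes, the sketch below enumerates the nodes of a small orthogonal mesh. The conductor positions and the helper function are invented for illustration and are not part of the disclosure:

```python
# Sketch (assumed geometry): each intersection of a conductor from the
# first plurality with a conductor from the second plurality is a node,
# and an IP core may be coupled only at such a node.
first_conductors = [0, 1, 2, 3]   # e.g., y-positions of horizontal traces
second_conductors = [0, 1, 2]     # e.g., x-positions of vertical traces

# Every (x, y) intersection is a candidate attachment node.
nodes = {(x, y) for x in second_conductors for y in first_conductors}

def can_attach(ip_core_position):
    """An IP core can be electrically coupled only at a mesh node."""
    return ip_core_position in nodes
```

With four conductors crossing three, the mesh exposes twelve nodes, and an IP core placed away from any intersection has no node to couple to.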
Example IP cores 120 include, but are not limited to, a universal asynchronous receiver/transmitter (UART), a central processing unit (CPU), a graphics processing unit (GPU), an Ethernet controller, a peripheral component interconnect (PCI) interface, a storage device, and the like. Each of the IP cores 120 includes circuitry (e.g., processor core circuitry) disposed on an integrated circuit that is relatively small compared to the base die 130. Each of the IP cores 120 has a lower surface 124 that is disposed proximate the electrical mesh network 110. In an embodiment, a set of machine-executable instructions that causes operation of the support circuitry 140 in the base die 130 may be executed, in whole or in part, by processor circuitry and/or controller circuitry disposed in, on, or about the IP cores 120. In an embodiment, each of the IP cores 120 can occupy the same area on the upper surface 132 of the base die 130. In an embodiment, the IP cores 120 may occupy different areas on the upper surface 132 of the base die 130. In an embodiment, an IP core 120 can have a surface area of less than about 25 square millimeters (mm2), about 20 mm2, about 15 mm2, about 12 mm2, about 10 mm2, about 8 mm2, or about 5 mm2.

Each of the IP cores 120 includes one or more conductive features 126A-126n (contact bumps, pads, lands, grooves, pins, etc.; collectively, "conductive features 126") disposed in, on, or across at least a portion of the lower surface 124 of the IP core 120. The one or more conductive features 126 may be disposed in a fixed pattern or arrangement across the lower surface 124 of each of the IP cores 120 that is conductively coupled to the base die 130.
Maintaining the conductive features 126 in a fixed pattern or arrangement beneficially permits replacement and/or upgrade of an IP core 120 without a redesign of the base die 130. For example, a newer IP core 120 can selectively replace an older IP core 120 in a particular semiconductor package design. When the arrangement of conductive features 126 on the older IP core 120 matches the arrangement of conductive features present on the newer IP core 120, such replacement is greatly facilitated and redesign time and cost are reduced or even eliminated. Since an IP core 120 can be easily replaced without a full rework of the base die 130, time to market is advantageously reduced and market responsiveness is beneficially improved.

At least one of the one or more conductive features 126 can electrically couple 160 the respective IP core 120 to the electrical mesh network 110. In an embodiment, at least one of the one or more conductive features 126 can electrically couple 160 the IP core 120 to the support circuitry 140 disposed in the base die 130. In an embodiment, conductive micro solder bumps, solder balls, solder paste, or the like may physically and/or electrically couple 160 the IP core 120 to the electrical mesh network 110 and/or the support circuitry in the base die 130.

The base die 130 includes support circuitry 140 that is deposited, patterned, formed, or otherwise disposed in, on, or around the base die 130. In an embodiment, the support circuitry 140 may include, but is not limited to, one or more of the following: data storage circuitry, cache circuitry, input/output circuitry, processor voltage regulation circuitry (e.g., fully integrated voltage regulator or "FIVR" circuitry), communication interface circuitry, bus interface circuitry, and combinations thereof. The base die 130 can provide a substrate for the semiconductor package 100.
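The drop-in replacement described above depends only on the newer core exposing its conductive features 126 in the same fixed pattern as the older core. The following is a minimal, hypothetical sketch of that compatibility check; the function name and pad coordinates are illustrative and not part of the disclosure:

```python
# Hypothetical sketch: two IP cores are drop-in compatible when their
# conductive-feature footprints expose pads at the same (x, y) positions.
def footprints_match(old_pads, new_pads, tol=0.0):
    """True if both cores expose pads at the same positions (within tol)."""
    if len(old_pads) != len(new_pads):
        return False
    return all(
        any(abs(ox - nx) <= tol and abs(oy - ny) <= tol for (nx, ny) in new_pads)
        for (ox, oy) in old_pads
    )

old_core = [(0, 0), (0, 5), (5, 0), (5, 5)]
new_core = [(5, 5), (0, 0), (5, 0), (0, 5)]   # same pads, different listing order
assert footprints_match(old_core, new_core)
assert not footprints_match(old_core, [(0, 0), (0, 5), (5, 0), (6, 5)])
```

When the check passes, the newer core can reuse the existing base-die attach pattern, which is the property the paragraph above relies on to avoid a base-die redesign.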
In an embodiment, the base die 130 is relatively large compared to each of the IP cores 120. In an embodiment, the base die may have an upper surface area of less than about 3000 square millimeters (mm2), about 2500 mm2, about 2000 mm2, about 1500 mm2, about 1000 mm2, about 700 mm2, or about 500 mm2. In an embodiment, all or a portion of the peripheral region of the base die 130 may include I/O circuitry. In an embodiment, all or a portion of the central region of the base die 130 bounded by the peripheral region may include cache circuitry. In such embodiments, the IP cores 120 may be coupled to the electrical mesh network 110 and/or the base die 130 in the central region of the base die 130 that includes the cache circuitry. Positioning the IP cores 120 close to the cache circuitry in the base die advantageously reduces cache access time, thereby improving the performance of the semiconductor package 100.

At least a portion of the electrical mesh network 110 can be disposed, patterned, deposited, or otherwise formed across or in at least a portion of the upper surface 132 of the base die 130. In an embodiment, the electrical mesh network 110 can be formed as a single metal layer on the upper surface 132 of the base die 130. In other embodiments, the electrical mesh network 110 can be formed as a plurality of metal layers on the upper surface 132 of the base die 130. The electrical mesh network 110 can be formed using any currently available or future developed material deposition and/or patterning process or method. Non-limiting examples of material deposition and/or patterning processes include, but are not limited to, photolithography, printing, electroplating, electroless plating, chemical vapor deposition, atomic layer deposition, physical vapor deposition, and the like. The support circuitry 140 disposed in the base die 130 is communicatively coupled 170 to the electrical mesh network 110 using conductors (e.g., metal traces, vias, etc.)
disposed in, on, or around the base die 130.

In addition to being conductively coupled to the electrical mesh network 110, at least some of the IP cores 120 are electrically coupled to the support circuitry 140 disposed in the base die 130. In an embodiment, one or more conductive structures 136 may be deposited, patterned, formed, or otherwise disposed across, in, or on all or a portion of the upper surface 132 of the base die 130. At least one IP core 120 is coupled to the support circuitry 140 carried by the base die 130. Conductors (e.g., metal traces, vias, etc.) couple the conductive structures 136 on the upper surface 132 of the base die 130 to the support circuitry 140.

A plurality of conductive features 138 can be deposited, patterned, formed, or otherwise disposed across or in at least a portion of the lower surface 134 of the base die 130. The plurality of conductive features 138 electrically couple 180 the base die 130 (and the semiconductor package 100) to the substrate 150 (e.g., a printed circuit board, motherboard, daughterboard, server blade, etc.). Conductors (e.g., metal traces, vias, etc.) electrically couple the conductive features 138 on the lower surface 134 of the base die 130 to the support circuitry 140 and/or the electrical mesh network 110.

FIG. 2 is a partial cross-sectional elevation view of an illustrative semiconductor package 200 including an electrical mesh network 110 communicatively coupling a plurality of IP cores 120A-120C to a base die 130, in accordance with at least one embodiment described herein. As shown in FIG. 2, semiconductor components (including active semiconductor components, such as transistors) may be formed or otherwise disposed in the lower portion 210 of each of the respective IP cores 120.
The provision of semiconductor components in the lower portion 210 of each of the IP cores 120 reduces the physical separation between the circuitry comprising the respective semiconductor components and the electrical mesh network 110, beneficially improving performance while reducing power consumption. Similarly, semiconductor components (including active semiconductor components, such as transistors) may be formed or otherwise disposed in the upper portion 220 of the base die 130. In at least some embodiments, at least some of the semiconductor components disposed in the upper portion 220 of the base die 130 can form all or a portion of the support circuitry 140. In such embodiments, disposing the semiconductor components in the upper portion 220 of the base die 130 reduces the physical separation between the support circuitry 140 and the electrical mesh network 110, further improving performance while reducing power consumption.

One or more conductors 230 (e.g., one or more vias or traces) may electrically couple at least a portion of the semiconductor components formed or disposed in the upper portion 220 of the base die 130 to one or more of a plurality of conductive features 138 (pads, lands, contacts, grooves, pins, etc.) deposited, formed, patterned, or otherwise disposed in, on, across, or around the lower surface 134 of the base die 130. Conductive structures 240A-240n (e.g., solder bumps, solder balls, posts, and/or leads) can be used to physically and electrically couple the base die 130 to the substrate 150.

One or more conductive structures 250A-250n (collectively, "conductive structures 250") (e.g., one or more microbumps, solder bumps, solder balls, etc.) electrically couple each of the IP cores 120 to the electrical mesh network 110 and/or the base die 130. In an embodiment, the one or more conductive structures can include a plurality of microbumps disposed in a small-pitch array.
For example, the conductive structures 250 can include microbumps formed from copper (Cu), copper-containing alloys, silver (Ag), silver-containing alloys, nickel (Ni), nickel-containing alloys, and combinations thereof. In an embodiment, the conductive structures 250 can include microbumps having a diameter of less than about 50 micrometers (μm), about 40 μm, about 30 μm, about 25 μm, about 15 μm, or about 10 μm. In an embodiment, the conductive structures 250 can be disposed at a pitch of less than about 70 micrometers (μm), about 60 μm, about 50 μm, about 40 μm, about 30 μm, or about 20 μm. In some implementations, a thin layer of a reflowable, solder-like conductive material can be placed proximate the conductive features 126 disposed on the lower surface 124 of the IP core 120.

FIG. 3A is a plan view of an illustrative semiconductor package 300 including an electrical mesh network 110 that includes a first plurality of conductors 310A-310n (collectively, "first conductors 310") and a second plurality of conductors 320A-320n (collectively, "second conductors 320") disposed perpendicular to the first conductors 310, in accordance with at least one embodiment described herein. FIG. 3B is a cross-sectional elevation view of the illustrative semiconductor package taken along section line 3B-3B of FIG. 3A, in accordance with at least one embodiment described herein. As shown in FIGS. 3A and 3B, the electrical mesh network 110 physically and conductively couples a plurality of IP cores 120A-120n to a base die 130 (which includes a plurality of support circuits 140A-140n).

Each of the IP cores 120 can include any number of circuits. As shown in FIG. 3A, each of the IP cores 120A-120n includes four processor core circuits 330A-330D (collectively, "processor core circuits 330"). Each of the processor core circuits 330 is electrically coupled to the electrical mesh network 110.
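The microbump diameter and pitch bounds quoted earlier for the conductive structures 250 can be expressed as a simple layout check. The following is an illustrative sketch only, using the largest bounds listed (about 50 μm diameter, about 70 μm pitch) as assumed thresholds; it is not a design rule from the disclosure:

```python
# Hypothetical sketch: a microbump array is plausible when each bump fits
# within its pitch and both values stay inside the assumed upper bounds.
def bump_layout_ok(diameter_um, pitch_um, max_diameter_um=50.0, max_pitch_um=70.0):
    """True if bumps fit their pitch and both values stay within bounds."""
    return (diameter_um < pitch_um
            and diameter_um <= max_diameter_um
            and pitch_um <= max_pitch_um)

assert bump_layout_ok(25.0, 40.0)        # 25 um bumps on a 40 um pitch
assert not bump_layout_ok(50.0, 40.0)    # bumps larger than the pitch would overlap
```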
The electrical mesh network 110 electrically couples each of the IP cores 120 to at least a portion of the remaining IP cores. The electrical mesh network 110 also electrically couples each of the IP cores 120 to the support circuitry 140 disposed in the base die 130.

The base die 130 includes a plurality of support circuits 140. In an embodiment, the base die 130 can include an area that includes the cache circuit 330. In such embodiments, the IP cores 120 can be positioned proximate the area of the base die 130 that includes the cache circuit 330. Positioning the IP cores 120 close to the cache circuitry advantageously improves cache access time while reducing power consumption.

A plurality of support circuits 140 (including input/output (I/O) circuits) may be deposited, formed, patterned, or otherwise disposed across or in the periphery of the base die 130. The I/O circuitry can include any I/O circuitry that is currently available or developed in the future. Example I/O circuits may include, but are not limited to, a serial I/O interface, a parallel I/O interface, a wired I/O interface, a wireless I/O interface, or combinations thereof. In the example semiconductor package 300 shown in FIGS. 3A and 3B, the I/O circuits include a general-purpose I/O (GPIO) circuit 140C, Ultra Path Interconnect (UPI) circuits 140D and 140R, peripheral component interconnect (PCI) circuits 140E, 140F, 140L, 140M, 140N, and 140O, and RLink circuits 140G, 140H, 140P, and 140Q.

Additional support circuits 140 (including data storage circuits) may be deposited, formed, patterned, or otherwise disposed across or in the periphery of the base die 130. The data storage circuits can include any data storage technology currently available or developed in the future.
Such data storage circuits may include, but are not limited to, electrostatic data storage circuits, quantum data storage circuits, molecular data storage circuits, resistive data storage circuits, optical data storage circuits, or combinations thereof. In the example semiconductor package 300 shown in FIGS. 3A and 3B, the base die 130 includes double data rate (DDR) I/O circuits 140A, 140B, 140J, and 140K.

The first plurality of conductors 310 includes conductors 310A-310n that are deposited, formed, patterned, or otherwise disposed across, in, or around the upper surface 132 of the base die 130. In an embodiment, the conductors 310A-310n included in the first plurality of conductors 310 may be disposed on the same or different metal layers disposed in, on, or around the base die 130. In an embodiment, each of the conductors 310A-310n included in the first plurality of conductors 310 can be deposited, formed, patterned, or otherwise disposed in a regular or irregular pattern on the upper surface 132 of the base die 130. Although shown in FIG. 3A as straight lines, each of the conductors 310A-310n included in the first plurality of conductors 310 can have any configuration, including but not limited to any shape, any size (length, height, width, etc.), and/or any physical configuration (bent, sinusoidal, elliptical, circular, polygonal, etc.).

In an embodiment, the spacing or physical distance between each of the conductors 310A-310n included in the first plurality of conductors 310 may be the same or different. In an embodiment, the spacing between any two of the conductors 310A-310n included in the first plurality of conductors 310 can be constant or variable. In an embodiment, the conductors 310A-310n included in the first plurality of conductors 310 may be parallel to each other and disposed at a constant or variable spacing distance between adjacent conductors.
The conductors 310A-310n included in the first plurality of conductors 310 may be composed of a metallic or non-metallic conductive material. Example metallic materials include, but are not limited to, copper, copper-containing alloys, aluminum, aluminum-containing alloys, and the like. Example non-metallic materials include conductive polymers as well as conductive nanoparticles (e.g., silver nanowires) suspended in a polymer matrix.

The second plurality of conductors 320 includes conductors 320A-320n that are deposited, formed, patterned, or otherwise disposed across or in the upper surface 132 of the base die 130. In an embodiment, the conductors 320A-320n included in the second plurality of conductors 320 may be disposed on the same or different metal layers included in the base die 130. In an embodiment, some or all of the conductors 320A-320n included in the second plurality of conductors 320 may be disposed on the same or different layers as some or all of the conductors 310A-310n included in the first plurality of conductors 310. Although shown in FIG. 3A as straight lines, each of the conductors 320A-320n included in the second plurality of conductors 320 can have any configuration, including but not limited to any shape, any size (length, height, width, etc.), and/or any physical configuration (bent, sinusoidal, elliptical, circular, polygonal, etc.).

In an embodiment, at least one of the conductors 320A-320n included in the second plurality of conductors 320 intersects at least one of the conductors 310A-310n included in the first plurality of conductors 310 to form the electrical mesh network 110. In other embodiments, at least one of the conductors 320A-320n included in the second plurality of conductors 320 intersects each of the conductors 310A-310n included in the first plurality of conductors 310 to form the electrical mesh network 110.
In still other embodiments, each of the conductors 320A-320n included in the second plurality of conductors 320 intersects each of the conductors 310A-310n included in the first plurality of conductors 310 to form the electrical mesh network 110.

Each of the conductors 320A-320n included in the second plurality of conductors 320 can be disposed at any angle measured relative to the conductors 310A-310n included in the first plurality of conductors 310. In an embodiment, at least one of the conductors 320A-320n included in the second plurality of conductors 320 may be disposed perpendicular to at least one of the conductors 310A-310n included in the first plurality of conductors 310. In an embodiment, each of the conductors 320A-320n included in the second plurality of conductors 320 may be disposed perpendicular to each of the conductors 310A-310n included in the first plurality of conductors 310.

The conductors 310A-310n included in the first plurality of conductors 310 and the conductors 320A-320n included in the second plurality of conductors 320 together form an electrical mesh network 110 that includes a plurality of nodes. The intersection and/or electrical coupling of a conductor 310 with a conductor 320 forms a "node" on the electrical mesh network 110. Where the conductors 310 and 320 are formed or disposed on the same layer in the base die 130, a node is located where the conductors 310 and 320 intersect. Where the conductors 310 and 320 are formed on different layers in the base die 130, a node occurs where a via or similar conductive feature electrically couples a conductor 310 to a conductor 320.

In an embodiment, each of the conductors 320A-320n included in the second plurality of conductors 320 can be deposited, formed, patterned, or otherwise disposed in a regular or irregular pattern on the upper surface 132 of the base die 130.
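As a concrete illustration of the node structure described above, the following sketch models a single-layer mesh in which every crossing of a first-plurality conductor with a second-plurality conductor yields a node, and an IP core attaches at its nearest node. This is a hypothetical model for exposition only; the function names and coordinates are illustrative and not part of the disclosure:

```python
# Hypothetical sketch: nodes arise at every crossing of a "row" conductor
# (first plurality) with a "column" conductor (second plurality).
def mesh_nodes(row_ys, col_xs):
    """Return the set of (x, y) node coordinates formed by perpendicular conductors."""
    return {(x, y) for y in row_ys for x in col_xs}

def nearest_node(nodes, point):
    """Map an IP-core attachment point to its closest mesh node (squared distance)."""
    px, py = point
    return min(nodes, key=lambda n: (n[0] - px) ** 2 + (n[1] - py) ** 2)

nodes = mesh_nodes(row_ys=[0, 10, 20], col_xs=[0, 10, 20])
assert len(nodes) == 9                            # 3 row x 3 column conductors
assert nearest_node(nodes, (9, 11)) == (10, 10)   # core attaches at nearest crossing
```

Three row conductors crossing three column conductors yield nine nodes, matching the one-node-per-intersection rule stated in the text.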
In an embodiment, the spacing between each of the conductors 320A-320n included in the second plurality of conductors 320 may be the same or different. In an embodiment, the spacing between any two of the conductors 320A-320n included in the second plurality of conductors 320 can be constant or variable. In an embodiment, the conductors 320A-320n included in the second plurality of conductors 320 may be parallel to each other and disposed at a constant or variable spacing distance between adjacent conductors. The conductors 320A-320n included in the second plurality of conductors 320 may be composed of a metallic or non-metallic conductive material. Example metallic materials include, but are not limited to, copper, copper-containing alloys, aluminum, aluminum-containing alloys, and the like. Example non-metallic materials include conductive polymers as well as conductive nanoparticles (e.g., silver nanowires) suspended in a polymer matrix.

The conductors 310A-310n included in the first plurality of conductors 310 and the conductors 320A-320n included in the second plurality of conductors 320 may be formed, patterned, deposited, and/or disposed across, in, on, or around the base die 130 using any currently available or future developed material deposition process and/or method. Non-limiting example material deposition processes include, but are not limited to, photolithography, printing, electroplating, electroless plating, thin film deposition, atomic layer deposition, and the like. In an embodiment, all or a portion of the conductors 310A-310n included in the first plurality of conductors 310 and/or all or a portion of the conductors 320A-320n included in the second plurality of conductors 320 may be disposed at any layer and/or location across the thickness of the base die 130 such that all or a portion of the electrical mesh network 110 is formed within the interior of the base die 130.
In other embodiments, all or a portion of the conductors 310A-310n included in the first plurality of conductors 310 and/or all or a portion of the conductors 320A-320n included in the second plurality of conductors 320 may be disposed or formed in, on, around, or across at least a portion of the lower surface 134 of the base die 130 such that all or a portion of the electrical mesh network 110 is formed on at least a portion of the lower surface 134. In such embodiments, one or more through-silicon vias (TSVs) may electrically couple one or more IP cores 120 to the electrical mesh network 110. In still other embodiments, all or a portion of the conductors 310A-310n included in the first plurality of conductors 310 and/or all or a portion of the conductors 320A-320n included in the second plurality of conductors 320 may be disposed or formed in, on, around, or across at least a portion of the upper surface 132 of the base die 130 such that all or a portion of the electrical mesh network 110 is formed on at least a portion of the upper surface 132.

FIG. 4 is a schematic diagram of an illustrative processor-based device 400 that includes one or more semiconductor packages 100A, 100B, each including an electrical mesh network 110 conductively coupling a plurality of IP cores 120 to a base die 130 as described in FIGS. 1-3, in accordance with at least one embodiment described herein. The processor-based device 400 can include one or more of: a processor circuit 410, a graphics processor circuit 412, a wireless input/output (I/O) interface 420, a wired I/O interface 430, a memory circuit 440, power management circuitry 450, a storage device 460, and/or a network interface 470. The following discussion provides a brief, general description of the components that form the illustrative processor-based device 400.
Non-limiting examples of the processor-based device 400 include a smartphone, a wearable computer, a portable computing device, a handheld computing device, a desktop computing device, a blade server device, a workstation, and the like.

The processor-based device 400 includes a processor circuit 410 having an electrical mesh network 110 that electrically couples a plurality of IP cores 120 to a base die 130. In an embodiment, the processor-based device 400 may additionally include a graphics processor circuit 412 having an electrical mesh network 110 that electrically couples a plurality of IP cores 120 to a base die 130. In an embodiment, the processor-based device 400 includes one or more processor circuits 410 that are capable of executing machine-readable instruction sets 414, reading data and/or instruction sets 414 from one or more storage devices 460, and writing data to the one or more storage devices 460. In some embodiments, the processor-based device 400 includes one or more graphics processor circuits 412 that are capable of executing a machine-readable instruction set 414 and generating an output signal capable of providing a display output to a system user. Those skilled in the relevant art will appreciate that the illustrated embodiments, as well as other embodiments, can be practiced with other processor-based device configurations, including portable electronic or handheld electronic devices (e.g., smartphones, portable computers, wearable computers, consumer electronic devices, personal computers ("PCs"), network PCs, minicomputers, server blades, mainframe computers, and the like).

The processor circuit 410 may include any number of hardwired or configurable circuits, some or all of which may include programmable and/or configurable combinations of electronic components, semiconductor devices, and/or logic elements disposed partially or wholly in a PC, server, or other computing system capable of executing processor-readable instructions.

The processor-based device 400 includes a bus or similar communication link 416 that communicably couples and facilitates the exchange of information and/or data between various system components, including the processor circuit 410, the graphics processor circuit 412, one or more wireless I/O interfaces 420, one or more wired I/O interfaces 430, one or more storage devices 460, and/or one or more network interfaces 470. The processor-based device 400 may be referred to herein in the singular, but this is not intended to limit the embodiments to a single processor-based device 400, since in some embodiments there may be more than one processor-based device 400 that incorporates, includes, or contains any number of communicably coupled, collocated, or remotely networked circuits or devices.

The processor circuit 410 may include one or more semiconductor packages 100A that include an electrical mesh network 110 coupled to a plurality of relatively small IP cores 120 and a single, relatively large base die 130. The graphics processor circuit 412 may include one or more semiconductor packages 100B that include an electrical mesh network 110 coupled to a plurality of relatively small IP cores 120 and a single, relatively large base die 130.

The processor circuit 410 can include any number, type, or combination of devices.
The processor circuit 410 may include, but is not limited to, any current or future developed single- or multi-core processor or microprocessor, such as one or more systems on a chip (SOCs), central processing units (CPUs), digital signal processors (DSPs), graphics processing units (GPUs), application-specific integrated circuits (ASICs), programmable logic units, field programmable gate arrays (FPGAs), and the like. Unless described otherwise, the construction and operation of the various blocks shown in FIG. 4 are of conventional design. Consequently, such blocks need not be described in further detail herein, as they will be understood by those skilled in the relevant art. The bus 416 that interconnects at least some of the components of the processor-based device 400 can employ any known serial or parallel bus structure or architecture.

The system memory 440 can include read-only memory ("ROM") 442 and random access memory ("RAM") 446. A portion of the ROM 442 can be used to store or otherwise retain a basic input/output system ("BIOS") 444. The BIOS 444 provides basic functionality to the processor-based device 400, for example by causing the processor circuit 410 to load one or more machine-readable instruction sets 414. In an embodiment, at least some of the one or more machine-readable instruction sets 414 cause at least a portion of the processor circuit 410 to provide, create, produce, transform, and/or function as a dedicated, specific, and particular machine, such as a word processing machine, a digital image acquisition machine, a media playing machine, a gaming system, a communications device, and the like.

The processor-based device 400 can include at least one wireless input/output (I/O) interface 420. The at least one wireless I/O interface 420 can be communicably coupled to one or more physical output devices 422 (tactile devices, video displays, audio output devices, hardcopy output devices, etc.).
The at least one wireless I/O interface 420 can be communicably coupled to one or more physical input devices 424 (pointing devices, touchscreens, keyboards, tactile devices, etc.). The at least one wireless I/O interface 420 can include any currently available or future developed wireless I/O interface. Example wireless I/O interfaces include, but are not limited to, BLUETOOTH®, Near Field Communication (NFC), and the like.

The processor-based device 400 can include one or more wired input/output (I/O) interfaces 430. The at least one wired I/O interface 430 can be communicably coupled to one or more physical output devices 422 (tactile devices, video displays, audio output devices, hardcopy output devices, etc.). The at least one wired I/O interface 430 can be communicably coupled to one or more physical input devices 424 (pointing devices, touchscreens, keyboards, tactile devices, etc.). The wired I/O interface 430 can include any currently available or future developed I/O interface. Example wired I/O interfaces include, but are not limited to, Universal Serial Bus (USB), IEEE 1394 ("FireWire"), and the like.

The processor-based device 400 can include one or more communicably coupled, non-transitory data storage devices 460. The data storage devices 460 can include one or more hard disk drives (HDDs) and/or one or more solid-state storage devices (SSDs). The one or more data storage devices 460 can include any currently available or future developed storage appliances, network storage devices, and/or systems. Non-limiting examples of such data storage devices 460 may include, but are not limited to, any current or future developed non-transitory storage appliances or devices, such as one or more magnetic storage devices, one or more optical storage devices, one or more resistive storage devices, one or more molecular storage devices, one or more quantum storage devices, or various combinations thereof.
In some implementations, the one or more data storage devices 460 can include one or more removable storage devices, such as one or more flash drives, flash memories, flash storage units, or similar appliances or devices capable of being communicably coupled to and decoupled from the processor-based device 400.

The one or more data storage devices 460 can include interfaces or controllers (not shown) that communicably couple the respective storage device or system to the bus 416. The one or more data storage devices 460 can store, retain, or otherwise contain machine-readable instruction sets, data structures, program modules, data stores, databases, logical structures, and/or other data useful to the processor circuit 410 and/or the graphics processor circuit 412 and/or one or more applications executed on or by the processor circuit 410 and/or the graphics processor circuit 412. In some instances, one or more data storage devices 460 can be communicably coupled to the processor circuit 410, for example via the bus 416 or via one or more wired communication interfaces 430 (e.g., Universal Serial Bus or USB), one or more wireless communication interfaces 420 (e.g., Bluetooth®, Near Field Communication or NFC), and/or one or more network interfaces 470 (IEEE 802.3 or Ethernet, IEEE 802.11 or WiFi®, etc.).

The processor-readable instruction sets 414 and other programs, applications, logic sets, and/or modules may be stored in whole or in part in the system memory 440. Such instruction sets 414 may be transferred, in whole or in part, from the one or more data storage devices 460. The instruction sets 414 may be loaded, stored, or otherwise retained in the system memory 440, in whole or in part, during execution by the processor circuit 410 and/or the graphics processor circuit 412.
The processor-readable instruction sets 414 can include machine-readable and/or processor-readable code, instructions, or similar logic capable of providing the functions and capabilities described herein.

The processor-based device 400 can include a power management circuit 450 that controls one or more operational aspects of an energy storage device 452. In an embodiment, the energy storage device 452 may include one or more primary (i.e., non-rechargeable) or secondary (i.e., rechargeable) batteries or similar energy storage devices. In an embodiment, the energy storage device 452 can include one or more supercapacitors or ultracapacitors. In an embodiment, the power management circuit 450 can alter, adjust, or control the flow of energy from an external power source 454 to the energy storage device 452 and/or to the processor-based device 400. The power source 454 can include, but is not limited to, a solar power system, a commercial electric grid, a portable generator, an external energy storage device, or any combination thereof.

For convenience, the processor circuit 410, the graphics processor circuit 412, the wireless I/O interface 420, the wired I/O interface 430, the power management circuit 450, the storage device 460, and the network interface 470 are illustrated as communicably coupled to each other via the bus 416, thereby providing connectivity between the above components. In alternative embodiments, the above components may be communicably coupled in a different manner than illustrated in FIG. 4. For example, one or more of the above components may be directly coupled to other components or may be coupled to each other via one or more intermediary components (not shown). In another example, one or more of the above components may be integrated into the processor circuit 410 and/or the graphics processor circuit 412.
In some embodiments, all or a portion of bus 416 may be omitted and the components directly coupled to each other using suitable wired or wireless connections.

FIGS. 5, 6, and 7 are plan views of various non-limiting, illustrative configurations of the electrical mesh network 110. One of the benefits of the electrical mesh network 110 described herein is the ability to tailor the electrical mesh network 110 to particular geometric, manufacturing, and/or operational needs. In addition to changing or altering the physical geometry of the electrical mesh network 110, the number of conductors 310A-310n, 320A-320n included in each of the plurality of conductors 310, 320 may be varied, or, as will be seen, one of the pluralities of conductors may even be eliminated. The physical size, shape, and/or cross-sectional geometry of some or all of the conductors 310A-310n, 320A-320n included in the plurality of conductors 310, 320 may be the same or different. In embodiments, the composition and/or physical geometry of the conductors 310A-310n, 320A-320n included in each of the plurality of conductors 310, 320 can be varied to provide a desired conductance, resistance, capacitance, and the like. Such physical, geometric, and compositional variations of the conductors 310A-310n, 320A-320n forming all or a portion of the electrical mesh network 110 should be considered to fall within the scope of the present disclosure.

FIG. 5 is a plan view of an illustrative semiconductor package 500, in accordance with at least one embodiment described herein, including an electrical mesh network 110 in a "ring" configuration, wherein the first plurality of conductors 310 are arranged such that the individual conductors 310A-310n are positioned end to end to form a closed loop. In this arrangement, the junctions between adjacent conductors 310A-310n form the nodes 510A-510n of the electrical mesh network 110. As shown in FIG.
5, the IP cores 120 can be arranged in a generally circular or elliptical pattern on the upper surface 132 of the base die 130. Each of the IP cores 120A-120n can be electrically coupled to a respective node of the plurality of nodes 510A-510n of the electrical mesh network 110 via one or more conductive structures 250.

FIG. 6 is a plan view of an illustrative semiconductor package 600, in accordance with at least one embodiment described herein, including an electrical mesh network 110 configured as a "toroidal" network, wherein each of the conductors 310A-310n included in the first plurality of conductors 310 and each of the conductors 320A-320n included in the second plurality of conductors 320 "circulates" between a portion of the IP cores 120 disposed on the upper surface 132 of the base die 130. In the toroidal network configuration shown in FIG. 6, each IP core 120 is conductively coupled to four adjacent IP cores 120. As shown in FIG. 6, using a toroidal electrical mesh network 110, the IP cores 120 can be arranged in a generally orthogonal pattern on the upper surface 132 of the base die 130. Each of the IP cores 120A-120n can be electrically coupled to a respective node of the plurality of nodes 610A-610n of the electrical mesh network 110 via one or more conductive structures 250.

FIG. 7 is a plan view of an illustrative semiconductor package 700, in accordance with at least one embodiment described herein, including an electrical mesh network 110 configured as a "star" network, wherein each of the conductors 310A-310n included in the first plurality of conductors 310 electrically couples one of the peripheral IP cores 120A-120H to the central IP core 120I. The distal end (relative to the central IP core 120I) of each of the conductors 310A-310n defines a respective node 710A-710n of the electrical mesh network 110.
In the star network configuration shown in FIG. 7, each IP core 120 is electrically coupled to the central IP core 120I. As shown in FIG. 7, using a star-shaped electrical mesh network 110, the IP cores 120 can be arranged in a generally circular or elliptical pattern around the perimeter of the upper surface 132 of the base die 130. Each of the IP cores 120A-120n can be electrically coupled to a respective node of the plurality of nodes 710A-710n of the electrical mesh network 110 via one or more conductive structures 250.

FIG. 8 is a plan view, in accordance with at least one embodiment described herein, of a base die 130 and an arrangement of IP cores 120A-120I, each electrically coupled to a respective node of a plurality of nodes 810A-810J included in an electrical mesh network 110 disposed on the upper surface 132 of the base die 130. In the example embodiment shown in FIG. 8, the base die includes support circuits 140A-140N. Support circuits 140A-140D include input/output circuits. Support circuits 140E-140L include last-level cache ("LLC") circuits. Support circuit 140M includes a high-speed Peripheral Component Interconnect Express ("PCIe") circuit. Support circuit 140N includes a memory controller/double data rate (MC/DDR) circuit. The IP cores 120A-120I include a graphics processor circuit 120A, processor core circuits 120B-120G, a memory-to-input/output (M2IO) circuit, and a performance monitoring counter (M2MEM) circuit 120I. As shown in FIG. 8, the conductive structures 250 on each of the IP cores 120 are aligned with respective nodes 810 of the electrical mesh network 110. The area of the base die 130 occupied by the IP cores 120 is primarily dedicated to last-level cache circuitry, so the spacing of the IP cores 120 from the support circuitry 140 carried by the base die 130 and the configuration of the electrical mesh network 110 are beneficially selected.
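The three configurations of FIGS. 5-7 can be summarized as adjacency structures. The sketch below is illustrative only (the function names and integer node indexing are assumptions, not part of the disclosure); it models the ring of FIG. 5, the toroidal network of FIG. 6 in which each IP core couples to four adjacent cores, and the star of FIG. 7 in which each peripheral core couples to the central core:

```python
# Illustrative adjacency models (hypothetical names) for the ring, toroidal,
# and star electrical mesh network configurations of FIGS. 5, 6, and 7.
def ring(n):
    """FIG. 5: conductors placed end to end form a closed loop; each node
    adjoins its two neighbours along the loop."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def toroidal(rows, cols):
    """FIG. 6: each IP core is conductively coupled to four adjacent cores,
    with conductors wrapping around at the edges."""
    def nid(r, c):
        return (r % rows) * cols + (c % cols)
    return {nid(r, c): [nid(r - 1, c), nid(r + 1, c), nid(r, c - 1), nid(r, c + 1)]
            for r in range(rows) for c in range(cols)}

def star(n_peripheral, center=0):
    """FIG. 7: peripheral cores (120A-120H) each couple only to the central
    core (120I)."""
    adj = {center: list(range(1, n_peripheral + 1))}
    for i in range(1, n_peripheral + 1):
        adj[i] = [center]
    return adj

# A 9-node ring, a 3x3 torus, and an 8-peripheral + 1-central star.
print(len(ring(9)), len(toroidal(3, 3)), len(star(8)))
```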
The area occupied by the base die 130 is not increased.

FIG. 9 is a high-level logic flow diagram of an illustrative method 900, in accordance with at least one embodiment described herein, of conductively coupling a plurality of IP cores 120 to a base die 130 using an electrical mesh network 110 disposed proximate the upper surface 132 of the base die 130. Method 900 can be used in conjunction with any of methods 1000, 1100, and 1200, described in detail with respect to FIGS. 10, 11, and 12, respectively. Coupling the IP cores 120 to the base die 130 using the electrical mesh network 110 advantageously minimizes the physical separation between the IP cores 120 and the support circuitry 140. Minimizing the distance between the IP cores 120 and the support circuitry 140 beneficially improves performance while reducing power consumption. Reducing the component count on the IP core 120 beneficially improves yield by reducing the likelihood of component failure. The ability to couple evolving IP core technology to a base die 130 having an interface defined by the electrical mesh network 110 improves time to market, responsiveness, and yield, because no time is wasted redesigning the base die for each improvement in IP core technology. Method 900 begins at 902.

At 904, the conductors 310A-310n included in the first plurality of conductors 310 are patterned, formed, deposited, or otherwise disposed across, in, or on all or a portion of the base die 130. In embodiments, the base die 130 can include a semiconductor die that is physically large relative to the smaller dies that include the IP core circuits. The conductors 310A-310n can be patterned, formed, deposited, or otherwise disposed across, in, or on all or a portion of the base die 130 using any currently available and/or future-developed material deposition process or method.
For example, the conductors 310A-310n can be formed or otherwise deposited using a photolithography process, an electrodeposition process, a vapor deposition process, an atomic layer deposition process, a printing process, a three-dimensional printing process, or combinations thereof. In embodiments, at least a portion of the first plurality of conductors 310 can be formed on the upper surface 132 of the base die 130. In embodiments, at least a portion of the first plurality of conductors 310 can be formed on one or more intermediate layers within the base die 130. The conductors 310A-310n can be formed using any conductive material, including but not limited to metals (copper, aluminum, etc.), metal alloys (copper-containing alloys, aluminum-containing alloys, etc.), conductive non-metals (polymers, conductive nanoparticle substrates, etc.), or any combination thereof. The conductors 310A-310n can have any physical size, shape, geometry, and/or cross-sectional profile. The conductors 310A-310n can be disposed or otherwise deposited in any uniform or non-uniform pattern, including but not limited to straight lines, circles, arcs, polygons, or combinations thereof. The conductors 310A-310n are electrically coupled, using vias, metal traces, or similar conductive structures, to the support circuitry 140 formed across, in, or on the base die 130. The conductors 310A-310n can be electrically coupled to contact pads or similar conductive features on the lower surface 134 of the base die 130 through one or more through-silicon vias (TSVs).

At 906, the conductors 320A-320n included in the second plurality of conductors 320 are patterned, formed, deposited, or otherwise disposed across, in, or on all or a portion of the base die 130.
The conductors 320A-320n can be patterned, formed, deposited, or otherwise disposed across, in, or on all or a portion of the base die 130 using any currently available and/or future-developed material deposition process or method. For example, the conductors 320A-320n can be formed or otherwise deposited using a photolithography process, an electrodeposition process, a vapor deposition process, an atomic layer deposition process, a printing process, a three-dimensional printing process, or combinations thereof. In embodiments, at least a portion of the second plurality of conductors 320 can be formed on the upper surface 132 of the base die 130. In embodiments, at least a portion of the second plurality of conductors 320 can be formed on one or more intermediate layers within the base die 130. In embodiments, at least a portion of the second plurality of conductors 320 can be disposed, patterned, formed, or otherwise deposited on the same layer of the base die 130 as the first plurality of conductors 310. In embodiments, at least a portion of the second plurality of conductors 320 can be disposed, patterned, formed, or otherwise deposited on a different layer of the base die 130 than the first plurality of conductors 310. In such embodiments, vias, traces, or similar conductive elements can electrically couple one or more of the conductors 320A-320n to one or more of the conductors 310A-310n. In embodiments, at least one of the conductors 320A-320n included in the second plurality of conductors 320 intersects and is electrically coupled to at least one of the conductors 310A-310n included in the first plurality of conductors 310. In other embodiments, each of the conductors 320A-320n included in the second plurality of conductors 320 intersects and is electrically coupled to each of the conductors 310A-310n included in the first plurality of conductors 310.
The conductors 320A-320n included in the second plurality of conductors 320 may intersect the conductors 310A-310n included in the first plurality of conductors 310 at any angle, measured relative to at least one of the conductors 310A-310n. In embodiments, the conductors 320A-320n included in the second plurality of conductors 320 may intersect the conductors 310A-310n included in the first plurality of conductors 310 at an angle of approximately 90 degrees (i.e., each of the conductors 320A-320n is perpendicular to each of the conductors 310A-310n). The conductors 320A-320n can be formed using any conductive material, including but not limited to metals (copper, aluminum, etc.), metal alloys (copper-containing alloys, aluminum-containing alloys, etc.), conductive non-metals (polymers, conductive nanoparticle substrates, etc.), or any combination thereof. The conductors 320A-320n can have any physical size, shape, geometry, and/or cross-sectional profile. The conductors 320A-320n can be disposed or otherwise deposited in any uniform or non-uniform pattern, including but not limited to straight lines, circles, arcs, polygons, or combinations thereof. The conductors 320A-320n are electrically coupled, using vias, metal traces, or similar conductive structures, to the support circuitry 140 formed across, in, or on the base die 130. The conductors 320A-320n may be conductively coupled to contact pads or similar conductive features on the lower surface 134 of the base die 130 by one or more through-silicon vias (TSVs).

At 908, a node of the electrical mesh network 110 is created at each point where a conductor 320 and a conductor 310 intersect or are electrically coupled. In embodiments, a plurality of nodes may be created by the plurality of intersections and/or conductive couplings between the conductors 320A-320n and the conductors 310A-310n. Each of the nodes creates a potential connection point for at least one IP core 120.
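The node-creation rule of 908, in which a node arises wherever a conductor of the first plurality 310 crosses a conductor of the second plurality 320, can be sketched as follows. The rectangular-grid model, function name, and labels are illustrative assumptions; the disclosure does not limit the conductors to a grid pattern:

```python
# Illustrative sketch (hypothetical names) of step 908: a candidate node is
# created at every intersection of a conductor 310 with a conductor 320, and
# each node may later carry one or more conductive couplings (step 910).
def build_nodes(conductors_310, conductors_320):
    """Return mesh nodes keyed by the (310, 320) conductor pair that crosses."""
    return {(a, b): {"couplings": []}
            for a in conductors_310 for b in conductors_320}

nodes = build_nodes(["310A", "310B", "310C"], ["320A", "320B", "320C"])
# Step 910: couple an IP core to a respective node.
nodes[("310A", "320B")]["couplings"].append("IP core 120A")
print(len(nodes))  # 3 x 3 intersections -> 9 candidate connection points
```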
In embodiments, each node may have a single conductive coupling to the support circuitry 140 and/or IP core 120 disposed in the base die 130. In other embodiments, each node of the electrical mesh network 110 can have multiple conductive couplings to the support circuitry 140 and/or the IP cores 120 disposed in the base die 130. Thus, a node of the electrical mesh network 110 can represent a conductive coupling that includes only a single connection, or a conductive coupling that includes multiple connections.

At 910, each of the plurality of IP cores 120 is physically and electrically coupled to a respective node of the plurality of nodes included in the electrical mesh network 110. Method 900 ends at 912.

FIG. 10 is a high-level flow chart of an illustrative method 1000, in accordance with at least one embodiment described herein, of coupling the electrical mesh network 110 disposed on at least a portion of the upper surface 132 of the base die 130 to one or more conductive structures 138 disposed on the lower surface 134 of the base die 130. Method 1000 can be used in conjunction with any of methods 900, 1100, and 1200, described in detail with respect to FIGS. 9, 11, and 12, respectively. The electrical mesh network 110 is conductively coupled to each of the IP cores 120 and is also electrically coupled to the support circuitry 140 disposed in the base die 130. In embodiments, the electrical mesh network 110 can be electrically coupled to the substrate 150 via the conductive structures 138 disposed on the lower surface 134 of the base die 130. Method 1000 begins at 1002.

At 1004, a through-silicon via (TSV) 230 is formed through the base die 130. The TSV 230 electrically couples the electrical mesh network 110 to the conductive structure 138 disposed on the lower surface 134 of the base die 130.
In embodiments, one or more vias may also electrically couple some or all of the support circuitry 140 (disposed in, on, or about the base die 130) to the conductive structure 138 disposed on the lower surface 134 of the base die 130. Method 1000 ends at 1006.

FIG. 11 is a high-level flow chart of an illustrative method 1100, in accordance with at least one embodiment described herein, of forming one or more active components and/or support circuits 140 in a region or portion of the base die 130 proximate the upper surface 132 of the base die 130. Method 1100 can be used in conjunction with any of methods 900, 1000, and 1200, described in detail with respect to FIGS. 9, 10, and 12, respectively. In embodiments, the base die 130 may include support circuitry 140 that is accessed by the IP cores 120 via the electrical mesh network 110. In embodiments, the support circuitry 140 may include, but is not limited to, input/output circuitry, data storage circuitry, voltage regulation circuitry, power distribution circuitry, cache circuitry, and combinations thereof. In embodiments, the support circuitry 140 can include active components (e.g., transistors). Method 1100 begins at 1102.

At 1104, active components are deposited, formed, or otherwise disposed in a portion 220 of the base die 130. In embodiments, portion 220 can include a portion of the base die proximate the upper surface 132 of the base die. The active components can include one or more circuits that include active semiconductor components, such as transistors forming part of the support circuitry 140 conductively coupled to the electrical mesh network 110.
Method 1100 ends at 1106.

FIG. 12 is a high-level flow chart of an illustrative method 1200, in accordance with at least one embodiment described herein, of forming one or more active components and/or circuits of an IP core 120 in a region or portion of the IP core 120 proximate the lower surface 124 of the IP core 120. Method 1200 can be used in conjunction with any of methods 900, 1000, and 1100, described in detail with respect to FIGS. 9, 10, and 11, respectively. In embodiments, the IP core 120 may include circuitry (e.g., processor core circuitry or graphics processor circuitry). Positioning the active components proximate the lower surface 124 of the IP core advantageously shortens the physical distance between the circuitry disposed in, on, or about the IP core 120 and the support circuitry disposed in, on, or about the base die 130. Reducing the physical distance between the IP core circuitry and the support circuitry 140 can reduce power consumption and/or improve communication bandwidth. Method 1200 begins at 1202.

At 1204, active components are deposited, formed, or otherwise disposed in a portion 210 of the IP core 120. In embodiments, portion 210 may include a portion of the IP core 120 proximate the lower surface 124 of the IP core 120. The active components can include one or more circuits that include active semiconductor components, such as transistors forming part of the functional circuitry of the IP core 120.

At 1206, the active components formed in the lower portion 210 of the IP core 120 are conductively coupled to the electrical mesh network 110. Method 1200 ends at 1206.

Although FIGS. 9, 10, 11, and 12 illustrate various operations in accordance with one or more embodiments, it is to be understood that not all of the operations illustrated in FIG. 9, FIG. 10, FIG. 11, or FIG. 12 are required for other embodiments.
Indeed, it is fully contemplated herein that in other embodiments of the present disclosure, the operations illustrated in FIGS. 9, 10, 11, and 12, and/or other operations described herein, may be combined in a manner not specifically shown in any of the figures, yet still be fully consistent with the present disclosure. Thus, claims directed to features and/or operations that are not exactly shown in one of the drawings are deemed within the scope and content of the present disclosure.

The invention also discloses a set of technical solutions, as follows:

Technical Solution 1. A semiconductor package comprising:
an electrical mesh network, the electrical mesh network comprising:
a first plurality of conductors; and
a second plurality of conductors, each of the second plurality of conductors intersecting at least one of the first plurality of conductors to form a plurality of network nodes, each of the network nodes being an intersection of one of the first plurality of conductors and one of the second plurality of conductors;
a base die including an I/O circuit electrically coupled to at least one of the plurality of nodes; and
a plurality of IP cores, each comprising a processor core circuit, each of the plurality of IP cores being electrically coupled to a respective node of the plurality of nodes.

Technical Solution 2.
The semiconductor package of claim 1, wherein the base die comprises an upper surface and a laterally opposite lower surface; and wherein the first plurality of conductors and the second plurality of conductors are disposed on the upper surface of the base die.

Technical Solution 3. The semiconductor package of claim 1, wherein each of the first plurality of conductors is disposed perpendicular to at least one of the second plurality of conductors.

Technical Solution 4. The semiconductor package of claim 1, wherein each of the first plurality of conductors is disposed perpendicular to each of the second plurality of conductors.

Technical Solution 5. The semiconductor package of claim 1, wherein each of the first plurality of conductors is electrically coupled to each of the second plurality of conductors.

Technical Solution 6. The semiconductor package of claim 1, wherein the base die includes a plurality of through-silicon vias (TSVs) that electrically couple at least one of the electrical mesh network and the I/O circuit to a contact pad disposed on the lower surface of the base die.

Technical Solution 7. The semiconductor package of claim 1, wherein the base die further comprises at least one active component.

Technical Solution 8. The semiconductor package of claim 7, wherein the at least one active component comprises at least one transistor disposed proximate the upper surface of the base die, the at least one transistor being electrically coupled to the electrical mesh network.

Technical Solution 9. The semiconductor package of claim 1, wherein each of the plurality of IP cores includes an upper surface and a laterally opposite lower surface; and wherein each of at least some of the IP cores includes at least one transistor disposed proximate the lower surface of the respective IP core.

Technical Solution 10.
The semiconductor package of claim 1, wherein each of the first plurality of conductors comprises a plurality of conductors patterned on the upper surface of the base die; and wherein each of the second plurality of conductors comprises a plurality of conductors patterned on the upper surface of the base die.

Technical Solution 11. The semiconductor package of claim 1, wherein the circuitry included in the base die includes a voltage regulator circuit electrically coupled to the processor core circuit included in at least one of the plurality of IP cores.

Technical Solution 12. A method comprising:
forming a first plurality of conductors on an upper surface of a base die;
forming a second plurality of conductors on the upper surface of the base die, wherein:
each of the first plurality of conductors is disposed on the upper surface and spaced apart from the remaining first plurality of conductors;
each of the second plurality of conductors is spaced apart from the remaining second plurality of conductors; and
each of the first plurality of conductors intersects and is electrically coupled to at least one of the second plurality of conductors to form an electrical mesh network electrically coupled to circuitry included in the base die; and
electrically coupling each of a plurality of IP cores to a node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors.

Technical Solution 13. The method of claim 12, wherein forming the second plurality of conductors on the upper surface of the base die further comprises: forming the second plurality of conductors on the upper surface of the base die such that each of the second plurality of conductors is disposed perpendicular to at least one of the first plurality of conductors.

Technical Solution 14. The method of claim 12, wherein forming the second plurality of conductors on the upper surface of the base die further comprises: forming the second plurality of conductors on the upper surface of the base die such that each of the second
plurality of conductors is disposed perpendicular to each of the first plurality of conductors.

Technical Solution 15. The method of claim 12, wherein forming the second plurality of conductors on the upper surface of the base die further comprises: forming the second plurality of conductors on the upper surface of the base die such that each of the second plurality of conductors intersects and is electrically coupled to each of the first plurality of conductors.

Technical Solution 16. The method of claim 12, further comprising: forming a plurality of through-silicon vias (TSVs) in the base die, the TSVs electrically coupling at least one of the electrical mesh network and the I/O circuitry to contact pads disposed on the lower surface of the base die.

Technical Solution 17. The method of claim 12, further comprising: forming at least one active component proximate the upper surface of the base die.

Technical Solution 18. The method of claim 17, wherein forming the at least one active component proximate the upper surface of the base die further comprises: forming at least one transistor proximate the upper surface of the base die.

Technical Solution 19. The method of claim 18, further comprising: electrically coupling the at least one transistor to the electrical mesh network.

Technical Solution 20. The method of claim 12, further comprising: forming at least one transistor proximate a lower surface of at least some of the plurality of IP cores; and electrically coupling each of the at least one transistor proximate the lower surface of at least some of the plurality of IP cores to the electrical mesh network.

Technical Solution 21.
The method of claim 12, wherein forming the first plurality of conductors on the upper surface of the base die further comprises patterning each of the first plurality of conductors on the upper surface of the base die; and wherein forming the second plurality of conductors on the upper surface of the base die further comprises patterning each of the second plurality of conductors on the upper surface of the base die.

Technical Solution 22. The method of claim 12, further comprising: forming, in the base die, at least one of an input/output (I/O) circuit, a voltage regulator circuit, a controller circuit, and a memory circuit.

Technical Solution 23. The method of claim 12, further comprising: forming an input/output circuit in the base die; and conductively coupling the I/O circuitry in the base die, via the electrical mesh network, to the processor core circuitry included in at least one of the plurality of IP cores.

Technical Solution 24. An electronic device comprising:
a printed circuit board; and
a semiconductor package conductively coupled to the printed circuit board, the semiconductor package comprising:
a base die having an upper surface and a lower surface, the base die including an input/output circuit;
an electrical mesh network disposed on the upper surface of the base die and electrically coupled to the circuitry included in the base die, the electrical mesh network comprising:
a first plurality of conductors, wherein each of the first plurality of conductors is disposed on the upper surface of the base die and spaced apart from the remaining first plurality of conductors; and
a second plurality of conductors, wherein each of the second plurality of conductors is disposed on the upper surface of the base die and spaced apart from the remaining second plurality of conductors, and each of the second plurality of conductors intersects and is electrically coupled to at least one of the first plurality of conductors; and
a plurality of IP cores, each of the plurality of
IP cores including a processor core circuit, each of the IP cores being electrically coupled to a node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors.

Technical Solution 25. The electronic device of claim 24, wherein the circuitry included in the base die comprises a voltage regulator circuit electrically coupled to the processor core circuit included in at least one of the plurality of IP cores.

As used in this application and in the claims, a list of items joined by the term "and/or" can mean any combination of the listed items. For example, the phrase "A, B, and/or C" can mean A; B; C; A and B; A and C; B and C; or A, B, and C. As used herein and in the claims, a list of items joined by the term "at least one of" can mean any combination of the listed items. For example, the phrase "at least one of A, B, or C" can mean A; B; C; A and B; A and C; B and C; or A, B, and C.

Any of the operations described herein can be implemented in a system that includes one or more storage media (e.g., non-transitory storage media) having stored thereon, individually or in combination, instructions that, when executed by one or more processors, perform the methods. Here, the processor may include, for example, a server CPU, a mobile device CPU, and/or other programmable circuitry. Additionally, it is intended that the operations described herein can be distributed across multiple physical devices, such as processing structures at more than one different physical location.
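The "and/or" convention defined above can be checked mechanically: "A, B, and/or C" covers every non-empty combination of the listed items. A minimal illustrative sketch (the function name is hypothetical and this is not part of the claims):

```python
# Enumerate all non-empty combinations of a list of items, matching the
# patent's enumeration for "A, B, and/or C".
from itertools import combinations

def and_or_combinations(items):
    """Return every non-empty subset of the listed items."""
    return [set(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

combos = and_or_combinations(["A", "B", "C"])
print(len(combos))  # A; B; C; A,B; A,C; B,C; A,B,C -> 7 combinations
```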
The storage medium may include any type of tangible medium, for example: any type of disk, including hard disks, floppy disks, optical disks, compact disk read-only memories (CD-ROMs), rewritable compact disks (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memory (ROM), random access memory (RAM) such as dynamic RAM and static RAM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, solid-state drives (SSDs), embedded multimedia cards (eMMCs), and Secure Digital Input/Output (SDIO) cards; magnetic or optical cards; or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software executed by a programmable control device.

Accordingly, the present disclosure is directed to systems and methods for electrically coupling a plurality of physically relatively small IP cores to a physically relatively large base die using an electrical mesh network formed wholly or partially across or in all or a portion of the base die. The use of an electrical mesh network beneficially permits positioning the IP cores in close proximity to the support circuitry carried by the base die. The minimal spacing between the IP core circuitry and the support circuitry advantageously improves communication bandwidth while reducing power consumption. Each of the IP cores may include functionally specific circuitry (e.g., processor core circuitry or graphics processing circuitry). The use of IP cores advantageously permits the use of a wide variety of IP cores, each with a common or similar interface to the electrical mesh network.

The following examples relate to other embodiments.
The following examples of the present disclosure may include subject matter such as at least one device, a method, at least one machine-readable medium storing instructions that, when executed, cause a machine to perform acts based on the method, means for performing acts based on the method, and/or a system for providing an electrical mesh network that communicatively couples a plurality of relatively small, limited-function IP cores to a relatively large base die that includes support circuitry for use by the IP cores.

According to Example 1, a semiconductor package is provided. The semiconductor package can include: a base die having an upper surface and a lower surface, the base die including input/output circuitry; an electrical mesh network disposed proximate the upper surface of the base die and electrically coupled to the input/output circuitry of the base die, the electrical mesh network comprising a first plurality of conductors (wherein each of the first plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining first plurality of conductors) and a second plurality of conductors (wherein each of the second plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining second plurality of conductors, and each of the second plurality of conductors intersects and is electrically coupled to at least one of the first plurality of conductors); and a plurality of IP cores, each of the plurality of IP cores including a processor core circuit, each of the IP cores being electrically coupled to a node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors.

Example 2 may include the elements of Example 1, wherein each of the first plurality of conductors is disposed perpendicular to at least one of the second
plurality of conductors. Example 3 may include the elements of any of Examples 1 or 2, wherein each of the first plurality of conductors is disposed perpendicular to each of the second plurality of conductors. Example 4 may include the elements of any of Examples 1 to 3, wherein each of the first plurality of conductors intersects and is electrically coupled to each of the second plurality of conductors. Example 5 may include the elements of any of Examples 1 to 4, wherein the base die includes a plurality of through-silicon vias (TSVs) that electrically couple at least one of the electrical mesh network and the I/O circuitry to contact pads disposed on the lower surface of the base die. Example 6 may include the elements of any of Examples 1 to 5, wherein the base die further comprises at least one active component. Example 7 may include the elements of any of Examples 1 to 6, wherein the at least one active component comprises at least one transistor disposed proximate the upper surface of the base die, the at least one transistor being electrically coupled to the electrical mesh network. Example 8 may include the elements of any of Examples 1 to 7, wherein each of the IP cores includes an upper surface and a lower surface, and each of at least some of the IP cores includes at least one transistor disposed proximate the lower surface of the corresponding IP core. Example 9 may include the elements of any of Examples 1 to 8, wherein each of the first plurality of conductors comprises a plurality of conductors patterned on the upper surface of the base die. Example 10 may include the elements of any of Examples 1 to 9, wherein each of the second plurality of conductors comprises a plurality of conductors patterned on the upper surface of the base die. Example 11 may include the elements of any of Examples 1 to 10, wherein the base die further comprises at least one of: a voltage regulator circuit, a controller circuit, and a memory 
circuit. Example 12 may include the elements of any of Examples 1 to 11, wherein the base die further comprises a voltage regulator circuit electrically coupled to the processor core circuitry included in at least one of the plurality of IP cores.

According to Example 13, a method is provided. The method may include: forming a first plurality of conductors proximate an upper surface of a base die; forming a second plurality of conductors proximate the upper surface of the base die, wherein each of the first plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining first plurality of conductors, each of the second plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining second plurality of conductors, and each of the second plurality of conductors intersects and is electrically coupled to at least one of the first plurality of conductors to form an electrical mesh network, the electrical mesh network being electrically coupled to circuitry included in the base die; and electrically coupling each of a plurality of IP cores to a corresponding node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors. Example 14 may include the elements of Example 13, wherein forming the second plurality of conductors on the upper surface of the base die further comprises: forming the second plurality of conductors on the upper surface of the base die such that each of the second plurality of conductors is disposed perpendicular to at least one of the first plurality of conductors. Example 15 may include the elements of Examples 13 and 14, wherein forming the second plurality of conductors on the upper surface of the base die further comprises: forming the second plurality of conductors on the upper surface of the base die such that each of the second plurality of 
conductors is disposed perpendicular to each of the first plurality of conductors. Example 16 may include the elements of any of Examples 13 to 15, wherein forming the second plurality of conductors on the upper surface of the base die further comprises: forming the second plurality of conductors on the upper surface of the base die such that each of the second plurality of conductors intersects and is electrically coupled to each of the first plurality of conductors. Example 17 may include the elements of any of Examples 13 to 16, the method further comprising: forming a plurality of through-silicon vias (TSVs) in the base die, the TSVs electrically coupling at least one of the electrical mesh network and the I/O circuitry to contact pads disposed on the lower surface of the base die. Example 18 may include the elements of any of Examples 13 to 17, the method further comprising: forming at least one active component proximate the upper surface of the base die. Example 19 may include the elements of any of Examples 13 to 18, wherein forming the at least one active component proximate the upper surface of the base die further comprises forming at least one transistor proximate the upper surface of the base die. Example 20 may include the elements of any of Examples 13 to 19, the method further comprising: electrically coupling the at least one transistor to the electrical mesh network. Example 21 may include the elements of any of Examples 13 to 20, the method further comprising: forming at least one transistor proximate a lower surface of at least some of the plurality of IP cores; and electrically coupling each of the at least one transistor proximate the lower surface of at least some of the plurality of IP cores to the electrical mesh network. Example 22 may include the elements of any of Examples 13 to 21, wherein forming the first 
plurality of conductors on the upper surface of the base die further comprises: patterning each of the first plurality of conductors on the upper surface of the base die. Example 23 may include the elements of any of Examples 13 to 22, wherein forming the second plurality of conductors on the upper surface of the base die may further comprise: patterning each of the second plurality of conductors on the upper surface of the base die. Example 24 may include the elements of any of Examples 13 to 23, the method further comprising: forming, in the base die, at least one of: an input/output (I/O) circuit, a voltage regulator circuit, a controller circuit, and a memory circuit. Example 25 may include the elements of any of Examples 13 to 24, the method further comprising: forming an input/output circuit in the base die; and electrically coupling, via the electrical mesh network, the I/O circuitry in the base die to the processor core circuitry included in at least one of the plurality of IP cores.

According to Example 26, an electronic device is provided. 
The electronic device may include a printed circuit board and a semiconductor package electrically coupled to the printed circuit board, the semiconductor package including: a base die having an upper surface and a lower surface, the base die including input/output (I/O) circuitry; an electrical mesh network disposed proximate the upper surface of the base die and electrically coupled to circuitry contained in the base die, the electrical mesh network comprising a first plurality of conductors (wherein each of the first plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining first plurality of conductors) and a second plurality of conductors (wherein each of the second plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining second plurality of conductors, and each of the second plurality of conductors intersects and is electrically coupled to at least one of the first plurality of conductors); and a plurality of IP cores, each of the plurality of IP cores including processor core circuitry, each of the IP cores being electrically coupled to a node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors. Example 27 may include the elements of Example 26, wherein each of the first plurality of conductors is disposed perpendicular to at least one of the second plurality of conductors. Example 28 may include the elements of any of Examples 26 and 27, wherein each of the first plurality of conductors is disposed perpendicular to each of the second plurality of conductors. Example 29 may include the elements of any of Examples 26 to 28, wherein each of the first plurality of conductors intersects and is electrically coupled to each of the second plurality of conductors. Example 30 may include the elements of any of Examples 26 to 29, wherein the base die further 
includes a plurality of through-silicon vias (TSVs) to electrically couple at least one of the electrical mesh network and the I/O circuitry to contact pads disposed on the lower surface of the base die. Example 31 may include the elements of any of Examples 26 to 30, wherein the base die further comprises at least one active component. Example 32 may include the elements of any of Examples 26 to 31, wherein the at least one active component comprises at least one transistor disposed proximate the upper surface of the base die, the at least one transistor being electrically coupled to the electrical mesh network. Example 33 may include the elements of any of Examples 26 to 32, wherein each of the IP cores includes an upper surface and a lower surface, and wherein each of at least some of the IP cores includes at least one transistor disposed proximate the lower surface of the respective IP core. Example 34 may include the elements of any of Examples 26 to 33, wherein each of the first plurality of conductors comprises a plurality of conductors patterned on the upper surface of the base die. Example 35 may include the elements of any of Examples 26 to 34, wherein each of the second plurality of conductors comprises a plurality of conductors patterned on the upper surface of the base die. Example 36 may include the elements of any of Examples 26 to 35, wherein the circuitry included in the base die further comprises at least one of: a voltage regulator circuit, a controller circuit, and a memory circuit. Example 37 may include the elements of any of Examples 26 to 36, wherein the base die further includes a voltage regulator circuit electrically coupled to the processor core circuitry included in at least one of the plurality of IP cores.

According to Example 38, a system is provided, comprising: means for forming a first plurality of conductors proximate an upper surface of a base 
die; means for forming a second plurality of conductors proximate the upper surface of the base die, wherein each of the first plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining first plurality of conductors, each of the second plurality of conductors is disposed proximate the upper surface of the base die and spaced apart from the remaining second plurality of conductors, and each of the first plurality of conductors intersects and is electrically coupled to at least one of the second plurality of conductors to form an electrical mesh network, the electrical mesh network being electrically coupled to at least I/O circuitry in the base die; and means for electrically coupling each of a plurality of IP cores to a node formed by the intersection of one of the first plurality of conductors and one of the second plurality of conductors. Example 39 may include the elements of Example 38, wherein the means for forming the second plurality of conductors proximate the upper surface of the base die further comprises means for forming the second plurality of conductors proximate the upper surface of the base die such that each of the second plurality of conductors is disposed perpendicular to at least one of the first plurality of conductors. Example 40 may include the elements of any of Examples 38 and 39, wherein the means for forming the second plurality of conductors proximate the upper surface of the base die may further comprise means for forming the second plurality of conductors proximate the upper surface of the base die such that each of the second plurality of conductors is disposed perpendicular to each of the first plurality of conductors. Example 41 may include the elements of any of Examples 38 to 40, wherein forming the second plurality of conductors proximate the upper surface of the base die further comprises: 
forming the second plurality of conductors on the upper surface of the base die such that each of the second plurality of conductors intersects and is electrically coupled to each of the first plurality of conductors. Example 42 may include the elements of any of Examples 38 to 41, and the system may further include means for forming a plurality of through-silicon vias (TSVs) in the base die, the TSVs electrically coupling at least one of the electrical mesh network and the I/O circuitry to contact pads disposed on the lower surface of the base die. Example 43 may include the elements of any of Examples 38 to 42, and the system may also include means for forming at least one active component proximate the upper surface of the base die. Example 44 may include the elements of any of Examples 38 to 43, wherein the means for forming the at least one active component proximate the upper surface of the base die further comprises means for forming at least one transistor proximate the upper surface of the base die. Example 45 may include the elements of any of Examples 38 to 44, and the system may also include means for electrically coupling the at least one transistor to the electrical mesh network. Example 46 may include the elements of any of Examples 38 to 45, and the system may further include: means for forming at least one transistor proximate a lower surface of at least some of the plurality of IP cores; and means for electrically coupling each of the at least one transistor proximate the lower surface of at least some of the plurality of IP cores to the electrical mesh network. Example 47 may include the elements of any of Examples 38 to 46, wherein the means for forming the first plurality of conductors proximate the upper surface of the base die further comprises means for patterning each of the first plurality of conductors on the upper surface of the base die. Example 48 may include the elements of any of Examples 38 to 47, wherein the means for forming the second plurality of conductors on the upper surface of the base die further comprises means for patterning each of the second plurality of conductors on the upper surface of the base die. Example 49 may include the elements of any of Examples 38 to 48, and the system may also include means for forming, in the base die, at least one of: input/output (I/O) circuitry, a voltage regulator circuit, a controller circuit, and a memory circuit. Example 50 may include the elements of any of Examples 38 to 49, and the system may further include: means for forming an input/output circuit in the base die; and means for electrically coupling, via the electrical mesh network, the I/O circuitry in the base die to the processor core circuitry included in at least one of the plurality of IP cores.

According to Example 51, a semiconductor package is provided. 
The semiconductor package may include: an electrical mesh network including a first plurality of conductors and a second plurality of conductors, each of the second plurality of conductors intersecting at least one of the first plurality of conductors to form a plurality of network nodes, each of the network nodes being at an intersection of one of the first plurality of conductors and one of the second plurality of conductors; a base die including I/O circuitry electrically coupled to at least one of the plurality of nodes; and a plurality of IP cores, each of the plurality of IP cores including processor core circuitry, each of the plurality of IP cores being electrically coupled to a respective node of the plurality of nodes. Example 52 may include the elements of Example 51, wherein the base die includes an upper surface and a laterally opposite lower surface, and wherein the first plurality of conductors and the second plurality of conductors are disposed on the upper surface of the base die. Example 53 may include the elements of any of Examples 51 and 52, wherein each of the first plurality of conductors is disposed perpendicular to at least one of the second plurality of conductors. Example 54 may include the elements of any of Examples 51 to 53, wherein each of the first plurality of conductors is disposed perpendicular to each of the second plurality of conductors. Example 55 may include the elements of any of Examples 51 to 54, wherein each of the first plurality of conductors is electrically coupled to each of the second plurality of conductors. Example 56 may include the elements of any of Examples 51 to 55, wherein the base die further comprises a plurality of through-silicon vias (TSVs) that electrically couple at least one of the electrical mesh network and the I/O circuitry to contact pads disposed on the lower surface of the base die. Example 57 may include the elements of any 
of Examples 51 to 56, wherein the base die further comprises at least one active component. Example 58 may include the elements of any of Examples 51 to 57, wherein the at least one active component comprises at least one transistor disposed proximate the upper surface of the base die, the at least one transistor being electrically coupled to the electrical mesh network. Example 59 may include the elements of any of Examples 51 to 58, wherein each of the plurality of IP cores includes an upper surface and a laterally opposite lower surface, and wherein each of at least some of the IP cores includes at least one transistor disposed proximate the lower surface of the corresponding IP core. Example 60 may include the elements of any of Examples 51 to 59, wherein each of the first plurality of conductors comprises a plurality of conductors patterned on the upper surface of the base die. Example 61 may include the elements of any of Examples 51 to 60, wherein each of the second plurality of conductors comprises a plurality of conductors patterned on the upper surface of the base die. Example 62 may include the elements of any of Examples 51 to 61, wherein the base die further comprises at least one of: a voltage regulator circuit, a controller circuit, and a memory circuit. Example 63 may include the elements of any of Examples 51 to 62, wherein the circuitry included in the base die includes a voltage regulator circuit electrically coupled to the processor core circuitry included in at least one of the plurality of IP cores.

The terms and expressions used herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described. Various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents.
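As a rough illustration of the topology recited in Examples 1 to 12, the following sketch models the first and second pluralities of conductors as rows and columns of a grid whose intersections form nodes, with IP cores attached at nodes. All names, and the simple dimension-ordered routing estimate, are illustrative assumptions and not part of the disclosure.

```python
# Hypothetical model of the electrical mesh network: row conductors crossed
# with column conductors; every intersection is a node (cf. Example 4, where
# each first-plurality conductor intersects each second-plurality conductor).
class MeshNetwork:
    def __init__(self, num_row_conductors, num_col_conductors):
        self.rows = num_row_conductors
        self.cols = num_col_conductors
        self.cores = {}  # (row, col) node -> attached IP core label

    def nodes(self):
        # Every row conductor intersects and is coupled to every column conductor.
        return [(r, c) for r in range(self.rows) for c in range(self.cols)]

    def attach_core(self, row, col, label):
        # Electrically couple an IP core to the node at this intersection.
        node = (row, col)
        if node in self.cores:
            raise ValueError(f"node {node} already occupied")
        self.cores[node] = label

    def route_hops(self, src, dst):
        # Illustrative assumption: X-Y routing along the source row conductor,
        # then the destination column conductor (Manhattan distance in hops).
        (r1, c1), (r2, c2) = src, dst
        return abs(r1 - r2) + abs(c1 - c2)

mesh = MeshNetwork(4, 4)
mesh.attach_core(0, 0, "cpu-core-0")
mesh.attach_core(3, 3, "gpu-core-0")
print(len(mesh.nodes()))            # 16 intersection nodes
print(mesh.route_hops((0, 0), (3, 3)))  # 6 hops
```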
Described herein are technologies for managing lists of uniform resource locators ("URLs") for a mobile device based, at least in part, upon the determined location of the device. This Abstract is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.
A method to provide location aware services using a mobile device, comprising: determining a location of the mobile device, wherein the location of the mobile device is determined using at least one or more of global positioning system, GPS, wireless fidelity, Wi-Fi, systems, and identifiable wireless sources; determining one or more contextual factors based on the location of the mobile device, wherein the one or more contextual factors includes a mode of travel of a user of the mobile device and a type of location; identifying one or more location relevant uniform resource locators, URLs, based on the location of the mobile device and the one or more contextual factors of the mobile device; and displaying a list of the one or more location relevant URLs to enable the user of the mobile device to be prompted to use the one or more location relevant URLs, wherein the one or more location relevant URLs allow the user to use applications desired by the user, wherein the one or more location relevant URLs aid the user of the mobile device to avoid manual search for relevant URLs.

The method of claim 1, wherein the contextual factors include crowd-sourced history of one or more web-site usage at or near the location.

The method of claims 1 or 2, wherein the contextual factors include usage of websites while the user is at the location.

The method of claims 1 to 3, further comprising determining an approximate location of the mobile device, wherein the approximate location of the mobile device is determined based on the identifiable wireless sources.

The method of claim 4, wherein the identifiable wireless sources include a wireless access point, WAP.

The method of claims 1 to 5, wherein a URL of the list of one or more relevant URLs represents an address of a web-site.
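The claimed flow of determining a location, deriving contextual factors, identifying location-relevant URLs, and displaying them can be sketched roughly as follows. All helper names, location tags, and the sample catalog are hypothetical assumptions for illustration only and are not part of the claims.

```python
# Hypothetical sketch of the claimed flow: determine location, derive
# contextual factors (mode of travel, type of location), then pick
# location-relevant URLs to display. Catalog entries are illustrative.
SAMPLE_CATALOG = [
    # (url, location_tag, relevant_travel_modes)
    ("https://subway.example/schedule", "transit-center", {"walking", "train"}),
    ("https://airport.example/gates", "airport", {"walking"}),
    ("https://scores.example/ballpark", "ballpark", {"walking"}),
]

def determine_location(gps_fix=None, wifi_ids=None):
    # Stand-in for GPS / Wi-Fi / identifiable-wireless-source resolution.
    if wifi_ids and "bssid-transit" in wifi_ids:
        return "transit-center"
    return "unknown"

def contextual_factors(location, speed_mps):
    # Illustrative assumption: infer mode of travel from speed.
    mode = "walking" if speed_mps < 2.5 else "train"
    return {"mode": mode, "location_type": location}

def relevant_urls(location, factors, catalog=SAMPLE_CATALOG):
    # Identify URLs matching both the location and the contextual factors.
    return [url for url, tag, modes in catalog
            if tag == location and factors["mode"] in modes]

loc = determine_location(wifi_ids={"bssid-transit"})
factors = contextual_factors(loc, speed_mps=1.2)
print(relevant_urls(loc, factors))
# ['https://subway.example/schedule']
```

In a real implementation the catalog would be served by something like the crowd-sourced database server described later in the disclosure, rather than hard-coded.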
BACKGROUND

The use of mobile devices, such as smartphones, is nearly ubiquitous. Many of these mobile devices include the capability to determine their physical location. That is, the mobile device is capable of determining its location in the physical world. Conventionally, location determination is typically accomplished by using Global Positioning Systems (GPS), some form of triangulation or interpolation of multiple radio signals, internet protocol (IP) geo-location, or some combination thereof.

A collection of so-called location-based services (LBS) is emerging that takes advantage of the location-detection capability of the mobile devices that so many people carry with them each day. For example, LBSs include targeted advertising, social networking, locating friends ("check-ins"), photo tagging, life logging, location-based games, fitness monitoring, and others. Location-based services may include vehicle or parcel tracking as well.

With the ubiquitous nature of mobile devices comes frequent access to websites on such devices via wireless Internet access. Users have grown accustomed to finding information by searching the World Wide Web (i.e., the "web") at any time and in any place.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 shows example scenarios to illustrate implementations in accordance with the technologies described herein.
Fig. 2 is a flow chart illustrating an example method in accordance with the technologies described herein.
Fig. 3 is a state diagram illustrating an example method in accordance with the technologies described herein.
Fig. 4 illustrates an example system in accordance with the technologies described herein.
Fig. 5 illustrates an example computing device to implement in accordance with the technologies described herein.
Fig. 6 illustrates an example device to implement in accordance with the technologies described herein.

The Detailed Description references the accompanying figures. 
In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and components.

DETAILED DESCRIPTION

Disclosed herein are technologies for managing lists of uniform resource locators ("URLs") for a mobile device based, at least in part, upon the determined location of the device. Generally, a URL is the global address of documents, services, and other resources on the World Wide Web (i.e., the "web"). A website is a set of related web pages containing content such as text, images, video, audio, etc. The web pages of a website are the most common documents to which a URL points. Consequently, a URL may also be called a link, a website address, or a web address. Collectively, a URL list may be called favorites or bookmarks.

The described technology may include, for example, helping a user of a mobile device easily find URLs to websites that are appropriate and best for the current location. The disclosed technologies may also include automatic and dynamic generation of a list of URLs to location-relevant websites. Similarly, such technologies may include automatic caching of location-relevant websites (or pages at such sites) when the present wireless connection to the Internet is not bandwidth-restricted or cost-prohibitive.

Often, some websites are designed for use in specific locations or types of locations. Some examples include a university campus map, a regional subway application, or information related to a particular neighborhood or city location. An example of a website that is useful in specific types of locations is a baseball scoring website, which is useful while at a baseball game.

Unfortunately, using conventional approaches, a user of a mobile device can find it difficult to find websites associated with or appropriate for a specific location and to cull the valuable ones from the less helpful ones. 
With the technology disclosed here, a user can arrive in a location and have his or her mobile device provide a list of links to one or more websites that are appropriate for that specific location.

If the user arrives in New York City, for example, there is a tremendous number of available websites to assist in finding museums, restaurants, or even the subway schedule. Those available websites vary in degree of quality and location appropriateness. The technology described herein helps the user find which location-specific websites are available and which ones are valuable to that user.

Another concern not adequately addressed by the conventional approaches is how to manage already-cached location-specific applications based on their appropriateness for the current location. When the user leaves a particular location where a location-specific website is appropriate, the technology described herein removes the location-specific website from the cache. If the user is leaving the location, then there is no need for the device to cache the web pages of that site for the user.

The identification of websites that are appropriate for a particular location can also be used more generally to predict the websites that a user will access at any point in the day. As the user traverses the places and routes that he normally travels, the mobile device keeps track of the websites associated with each location (place/route).

Each user of a mobile device has limited knowledge and understanding of which location-specific websites are appropriate for a particular location. For example, a user who attends a minor league baseball game is likely unaware of a website, particular to that ballpark, that provides live statistics of the game. The user might never find the website by searching for it.

Conventional approaches require a large amount of the user's time and manual input. 
When searching for websites, users can query for specific websites, but they have to actively do so with keyword searches or knowledge of the type of website they are looking for. Furthermore, users must remember which websites are related to which location or try to manually arrange them in ways that make this process easier.

In short, the technology described herein helps a user gain the benefits of using location-specific websites without requiring a large amount of manual searching for such websites.

EXAMPLE LOCATION-AWARE URL LIST MANAGEMENT SCENARIOS

Fig. 1 shows an example set of scenarios 100 in which one or more implementations of the technology described herein may be employed. As depicted, the scenarios include four locations with a mobile device in operation at each location. User 102 is holding a smartphone 110 as he approaches his train in a metropolitan transit center 112 of a city that he is visiting for the first time. Another user (not shown) is waiting with a cell phone 120 during a layover at an airport 122. A hungry traveler (not shown) is using his tablet computer 130 while eating at a restaurant 132. Still another user (not shown) has her smartphone 140 with her at home 142.

Each of these mobile devices is connected to a communications network 150 via a wireless connection. Such a connection can be Wi-Fi, Bluetooth, cellular, or another technology. This connection links the mobile devices to the Internet, a private intranet, and/or to a so-called cloud. Each of the web servers 170 and a database server 160 may be part of the Internet, a private intranet, or a cloud, at least in part. Of course, each of the web servers 170 and the database server 160 can be implemented as one or more servers.

While referring to Fig. 1, various example scenarios 100 are discussed. When at the transit center 112, the user 102 browses the web on his smartphone 110. Some of those sites might include websites that are specific to the transit system of the city. 
For example, they might include a website with a subway train schedule. Using known or new techniques, the smartphone 110 determines its current location, which is the transit center 112.

That current location (the transit center 112) is associated with the website that the user 102 is using on the smartphone 110 while at that location. Other contextual factors of the website's use are associated with the website and the current location: for example, how much the website is used at that location, how often it is used at that location, which pages on that website are used at that location, how frequently the website is used at that location by others, and similar factors. In addition to use, some of the contextual factors may include ratings provided by users of websites at particular locations.

This associated information can be stored on the smartphone 110. In addition, such location-aware associations can be performed by many mobile devices at that transit center 112 over a period of time. Those various associations can be uploaded via the communications network 150 to the database server 160, where such associations are collected and organized. The information gathered about the various associations between the websites and locations, and perhaps contextual factors, can be called crowd-sourced, since it is gathered from a crowd of users over time.

While waiting a few hours in the airport 122 for his connecting flight home, the user may wish to explore what is available to him at the airport. Using an implementation of the technology described herein, the cell phone 120 communicates its current location to the database server 160, which returns a list of links to websites that are specific to the current location of the phone 120. 
The links can be listed in order of relevance based upon contextual factors associated with the linked websites in the database server 160.

Similar to the airport scenario, the hungry traveler can receive a list of recommended websites on his tablet computer 130 while dining at the restaurant 132. The traveler can choose to browse a local news website while dining.

While carrying her smartphone 140, a user arrives at her home 142 in Spokane, Washington after a business trip to New York City. While she was in New York City, she frequently used several websites that helped her get around and better enjoy the city. Now she is home and not interested in her favorites list being populated by links to websites relevant to a city across the nation. Her smartphone 140 determines her current location and presents her a list of website links relevant to that current location. Indeed, the browser on her smartphone 140 may have a list simply labeled "Useful Here" that lists only location-relevant website links.

LOCATION AWARENESS

Location awareness involves the mobile device determining its present location. Conventional location-determination approaches include GPS and signal positioning (e.g., triangulation, trilateration, and other forms of interpolation and extrapolation) to determine geo-physical location relative to multiple signal sources. GPS is a near-ubiquitous outdoor location technology, and a typical GPS-enabled smartphone has three- to five-meter accuracy. For signal positioning, the signal sources can use cellular or a variant of IEEE 802.11 (i.e., Wi-Fi). Signal-positioning approaches rely upon a map of signal sources whose locations are known in order to extrapolate a location of a device.

Rather than relying on signal-triangulation-based location approaches (like GPS) to determine geo-location with a fine-grained and absolute resolution, the technology described herein is based upon a location determination with a coarse-grained and relative resolution. 
More particularly, the technology described herein utilizes determinations of logical or semantic locations. One or more implementations include, for example, a mobile device recognizing and learning a frequented discrete location based on the "observed" ambient radio environment at that location. In particular, the mobile device can recognize and learn which ambient identifiable wireless ("IWS") sources are part of the topography within reception range at that discrete location. A wireless access point (WAP) is a specific example of an ambient IWS source. The IWS sources are called "ambient" herein because they may be detected or "observed" in the environment while a mobile device moves about the world. The IWS sources are called "identifiable" because each is uniquely identifiable. For example, each WAP may be uniquely identified by its basic service set identification (BSSID) or media access control (MAC) address. Of course, other identifying characteristics may be used alone or in combination with each other or with the BSSID or MAC address. Examples of such other identifying characteristics include the service set identification (SSID) and the received signal strength indication (RSSI). Geo-location, also called geo-physical location, includes the determination of a real-world geographic location of an object or person. "Physical location" is a broader term than geo-location and includes a determination of any real-world location of the object or person.

CONTEXTUAL FACTORS

As part of one or more implementations described herein, a mobile device can determine contextual factors. In short, a contextual factor is some observed, measured, calculated, and/or determined data about the context in which the mobile device exists. A contextual factor answers some aspect of the questions that are typically asked when gathering information: how, who, what, when, where, and why. In general, the determined present location of the mobile device is a contextual factor.
However, herein the location (i.e., where) is a special case of a contextual factor that is handled separately. Consequently, as used herein, contextual factors explicitly exclude the location of the mobile device because that is handled separately. That said, contextual factors can include locations where the user is predicted to be traveling, estimated time and place of arrival, or route prediction. An example of a contextual factor is the mode of travel of the user of the mobile device. Is the user walking, biking, riding a bus or train, or in a motor vehicle? If walking, the user might, for example, want to see websites for a local bus schedule. Another example of a contextual factor is the type of location. For example, if the user is determined to be at Spokane International Airport, the location type is "airport" or, more generally, "transportation"; consequently, websites associated with that type of location can be recommended to the user. Another example of a contextual factor is the type of event happening at a location. For example, HP Pavilion in San Jose is home to the San Jose Sharks ice hockey team, but it also hosts various concerts, shows, and events. In addition, a known schedule of events that occur at a particular location may be a contextual factor. Many of the contextual factors are based on website usage. The user builds a personal history of website usage at or near the determined location. Furthermore, many users together generate a crowd-sourced history of website usage at or near the determined location. The route along which websites are used and the destination toward which websites are used en route are other factors. Some other contextual factors may include, for example, crowd-sourced information about websites, such as ratings of websites.

EXAMPLE OF LOCATION-AWARE URL LIST MANAGEMENT OPERATION

Fig. 2 illustrates an example process 200 for implementing, at least in part, the technology described herein.
In particular, process 200 depicts an example of location-aware URL-list-management operations performed, at least in part, by a mobile device, such as smartphone 110. Servers, such as a database server 160 or other cloud-based services may perform some portions of the example process 200.At 202, a mobile device determines its present location using one or more of the new or known location-awareness approaches. The determined location of the mobile device can be, for example, a physical location, a geo-location, or a logical location. The geo-location information can be obtained from a GPS. The location information can be obtained, at least in part, from one or more ambient IWS sources.At 204, the mobile device determines contextual factors of the mobile device.At 206, the mobile device accesses a database of website associations. The database provides an association between websites, their URLs, and locations. In addition, the database may provide additional information about contextual factors associated with the websites and/or with locations. The database, or part thereof, can be stored locally on the mobile device itself. In some implementations the mobile device may access a remote database via a communications network. For example, the smartphone 110 accesses the database server 160 via a network 150. The database may include crowd-sourced information about websites. For example, the database may include a collection of website usage information and user-supplied ratings from many different users for websites used at or near locations.At 208, the database provides a list of websites associated with the present location of the mobile device. In some implementations, the list may include websites associated with the present location or with locations near the present location. 
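Operations 202 through 208 might be sketched as a lookup against such a database of website associations. The table layout, URLs, and location labels below are assumptions for illustration, not the described implementation:

```python
import sqlite3

# A minimal, hypothetical website-association table of the kind
# operation 206 consults; the schema is an assumption.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE assoc (url TEXT, location TEXT, uses INTEGER)")
db.executemany("INSERT INTO assoc VALUES (?, ?, ?)", [
    ("http://metro.example/schedule", "transit-center-112", 42),
    ("http://news.example", "airport-122", 17),
    ("http://food.example/menu", "restaurant-132", 8),
])

def websites_for_location(location):
    """Operation 208: list websites associated with the present location,
    most-used first."""
    rows = db.execute(
        "SELECT url FROM assoc WHERE location = ? ORDER BY uses DESC",
        (location,))
    return [url for (url,) in rows]

links = websites_for_location("airport-122")
```

In practice, this lookup could run against the local database on the device or against a remote database server reached over the communications network, as described above.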
Additionally or alternatively, the database provides a list of websites that are associated with locations other than, and not near, the present location of the mobile device. This listing may be used to remove such websites from the device's cache. For websites associated with the present location, operations 210 and 212 are performed. For websites that are associated with a location other than the present location, operations 214 and 216 are performed. At 210, the mobile device selects one or more websites that are associated with the present location or with nearby locations. If location is the only criterion, then, in some implementations, all the websites associated with the present location are selected. In some implementations, the selecting may be based, at least in part, on contextual factors. In one or more implementations, the selection may include the mobile device querying the database to find a list of websites that are associated with the determined location and then choosing one or more websites from the list of website links found by the query. When selecting the appropriate websites, the mobile device may collect a group of seemingly disparate but linked web pages together and designate them as a website. In doing this, a representative entry-point URL is selected for the designated website. At 212, the mobile device generates a URL list of links to the selected websites. The list may be ordered based upon one or more of the contextual factors. For example, the websites used most at a particular location by the most people may be listed first. At 213, the mobile device displays the generated URL list of websites relevant to the present location. The user may view the generated list via the mobile browser. Alternatively, the list may be viewed outside the context of the mobile browser.
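The selection and ordering in operations 210 through 213 might be sketched as a simple contextual scoring. The field names and the particular weighting below are illustrative assumptions; the described technology leaves the choice of contextual factors and their weights open:

```python
# Hypothetical website entries already filtered to the present location
# (operation 210); field names and weights are assumptions.
sites = [
    {"url": "http://bus.example",  "uses_here": 120, "rating": 4.0},
    {"url": "http://news.example", "uses_here": 300, "rating": 3.5},
    {"url": "http://cafe.example", "uses_here": 45,  "rating": 4.8},
]

def score(site):
    # Weight crowd usage at this location more heavily than ratings.
    return site["uses_here"] + 20 * site["rating"]

def generate_url_list(sites):
    """Operation 212: order the URL list by contextual factors,
    most relevant first."""
    return [s["url"] for s in sorted(sites, key=score, reverse=True)]

url_list = generate_url_list(sites)
```

The generated list would then be displayed to the user (operation 213), either within the mobile browser or outside it.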
Of course, when the user chooses a URL from the list, the mobile device will open the mobile browser to get and view the website associated with the chosen URL. Instead of websites that are associated with the present location, the mobile device may act upon websites that are associated with a location other than the present location. For websites that are associated with a location other than the present location, operations 214 and 216 are performed. At 214, the mobile device selects one or more websites that are associated with a location that is different from the present location. In some implementations, the mobile device may select those websites that are associated with a location far from the present location. Whether a location qualifies as far can be determined by whether the known or calculable distance between the present location and the associated location exceeds a distance threshold. Alternatively, the database may designate nearby locations for websites or for specific locations. If location is the only criterion, then, in some implementations, all the websites associated with a location other than the present location are selected. In some implementations, the selecting may be based, at least in part, upon the contextual factors. In one or more implementations, the selection may include the mobile device querying the database to find a list of websites that are associated with a location other than the determined location and then choosing one or more websites from the list of websites found by the query. At 216, the mobile device determines whether content of the selected websites is stored in the cache of the mobile device. If so, then the mobile device releases the portions of the cache storing content of the selected one or more websites. That is, the mobile device removes one or more of the selected websites from the cache on the mobile device. Doing this frees up valuable memory on the mobile device.

ANOTHER EXAMPLE OF LOCATION-AWARE URL LIST MANAGEMENT OPERATION

Fig. 3 illustrates a state diagram 300 of an example process for implementing, at least in part, the technology described herein. In particular, state diagram 300 depicts an example of location-aware URL-list-management operations performed, at least in part, by a mobile device, such as the smartphone 110. Servers, such as the database server 160 or other cloud-based services, may perform some portions of the state diagram 300. At 301, a mobile device tracks its location continually until the device determines that the user has arrived at a new location. At 302, when a user arrives at a location that he or she has never visited with the mobile device before, the mobile device determines that this is a place that the user has not visited before. That is, this location is a new location. In one or more implementations, the place at which a user will arrive can be predicted before arrival if the user is traveling to a known location. In this situation, the device can enter state 302 and then state 304 prior to the user's arrival. At 304, the mobile device determines the geo-location and queries a location-aware database to get a list of links to websites associated with the new location. The mobile device presents this list to the user and installs the applications desired by the user. The mobile device adds this new place to a model of location-aware websites, which may involve updating the database of such websites. The mobile device tracks the usage of websites while the user remains at this location. At 306, when the user arrives at a place that he or she has previously visited, the mobile device checks for updates to the websites associated with this location and generates a URL list of those websites. In addition, the device may also query the database to find new or better websites to include in the URL list.
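The arrival states just described might be sketched as a small state machine. The state constants mirror the diagram's reference numerals, but the transition logic itself is an illustrative assumption rather than the described implementation:

```python
# States mirroring diagram 300; this is a sketch, not the
# described implementation.
TRACKING, NEW_PLACE, KNOWN_PLACE = 301, 302, 306

class LocationStateMachine:
    def __init__(self):
        self.visited = set()
        self.state = TRACKING

    def arrive(self, location):
        """User arrives at a location, leaving the tracking state 301."""
        if location in self.visited:
            self.state = KNOWN_PLACE   # state 306: refresh the URL list
        else:
            self.state = NEW_PLACE     # state 302, then 304: query database
            self.visited.add(location)
        return self.state

    def depart(self):
        """User moves away: report usage statistics, resume tracking."""
        self.state = TRACKING

sm = LocationStateMachine()
first = sm.arrive("home-142")    # first visit: new place
sm.depart()
second = sm.arrive("home-142")   # return visit: known place
```

A real implementation would also predict upcoming arrivals and track website usage within each state, as the description notes.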
The mobile device tracks the usage of websites while the user remains at this location. At 308 and 310, the mobile device continues to track the user's location until the user moves away from the location. If the user moves away from the location, then the device moves to state 312. At 312, the mobile device updates usage statistics and sends the statistics to the database server.

EXAMPLE SYSTEM

Fig. 4 illustrates an example system 400 for implementing the technology described herein. The system 400 includes a mobile device 404, a network 430, and a network or cloud-based server 440. The mobile device 404 may be the same as or similar to the mobile devices 110, 120, 130, and 140, which have already been introduced. The cloud-based server 440 may be the same as or similar to the database server 160, which has already been introduced. The mobile device 404 includes a memory 410, one or more processors 412, a wireless signal manager 414, a display system 416, a web browser 418, a location-awareness system 420, a contextualizer 422, a URL list generator 424, and a local database 426. These functional components can be separate hardware units or some combination thereof. Alternatively, the components can be implemented, at least in part, in software and thus be stored in the memory 410 and executed by the processors 412. The memory 410 may include a cache. The cache stores copies of website content (e.g., text, images, audio, video, etc.) that is likely to be needed again in the near future. This allows for quicker access next time. The wireless signal manager 414 handles all wireless signals sent or received by the device. For example, the wireless signal manager 414 handles the communications via the network 430. The wireless signal manager 414 especially handles signal management that aids in location awareness.
For example, the wireless signal manager 414 may include the GPS components, cellular transceivers, and Wi-Fi transceivers.The display system 416 includes the display itself and the graphics system to drive that display. The web browser 418 typically is an application running on the device that is designed to reach out to the web and load web pages therefrom for the user to view on the mobile device.The location-awareness system 420 uses one or more of the existing and/or new location-awareness approaches to determine the present location of the mobile device 404. The contextualizer 422 determines the contextual factors. The URL list generator 424 generates a list of links to the selected websites. The local database 426 stores relevant data, such as the associations between known locations and often used websites.The network 430 can be a wired and/or wireless network. It can include the Internet infrastructure and it may be presented as the cloud. The network 430 includes wired or wireless local area networks, a cellular network, and/or the like. The network 430 links the mobile device 404 with the network server 440. Some implementations of the technology described here operate without assistance from the network.The network or cloud-based server 440 provides assistance to the mobile device 404 as part of one or more implementations of the technology described herein. In some implementations, the network 430 and network server 440 are not used. The network server 440 can be one or more actual servers.The network server 440 includes a website-searching assistant 442 and a remote database 450. The website-searching assistant 442 helps locate relevant websites for a query submitted by the mobile device 404. The remote database 450 stores associations between websites, their URLs, locations, and/or contextual factors. 
These associations can be collected from many mobile devices, such as the mobile device 404. As depicted and discussed, the wireless devices 110, 120, 140, and 404 are mobile phones. However, the devices can be other types of portable devices, such as smartphones, cell phones, tablet computers, wireless-enabled wearable devices, laptop computers, netbook computers, or the like.

EXAMPLE COMPUTING DEVICE

Fig. 5 illustrates an example system 500 that may implement, at least in part, the technologies described herein. In various implementations, system 500 is a media system, although system 500 is not limited to this context. For example, system 500 can be incorporated into a personal computer (PC), laptop computer, ultra-laptop computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet, or smart television), mobile internet device (MID), messaging device, data communication device, and so forth. In various implementations, system 500 includes a platform 502 coupled to a display 520. Platform 502 receives content from devices such as a content services device 530, a content delivery device 540, or other similar content sources. A navigation controller 550 including one or more navigation features may be used to interact with, for example, platform 502 and/or display 520. In various implementations, platform 502 includes any combination of a chipset 505, a processor 510, memory 512, storage 514, a graphics subsystem 515, applications 516, and/or a radio 518. Chipset 505 provides intercommunication among processor 510, memory 512, storage 514, graphics subsystem 515, applications 516, and/or radio 518.
For example, chipset 505 can include a storage adapter (not depicted) capable of providing intercommunication with storage 514. Processor 510 may be implemented as a complex instruction set computer (CISC) or reduced instruction set computer (RISC) processor, an x86 instruction set compatible processor, a multicore processor, or any other microprocessor or central processing unit (CPU). In various implementations, processor 510 may be a dual-core processor, a dual-core mobile processor, and so forth. Memory 512 may be implemented as a volatile memory device such as, but not limited to, random access memory (RAM), dynamic random access memory (DRAM), or static RAM (SRAM). Storage 514 may be implemented as a nonvolatile storage device such as, but not limited to, a magnetic disk drive, optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up synchronous DRAM (SDRAM), and/or a network-accessible storage device. In various implementations, storage 514 includes technology to increase storage performance and enhanced protection for valuable digital media when multiple hard drives are included. Graphics subsystem 515 processes images, such as still images or video, for display. Graphics subsystem 515 can be a graphics processing unit (GPU) or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple the graphics subsystem 515 and the display 520. For example, the interface can be a high-definition multimedia interface (HDMI), DisplayPort, wireless HDMI, and/or wireless-HD-compliant techniques. Graphics subsystem 515 may be integrated into processor 510 or chipset 505. In some implementations, graphics subsystem 515 may be a stand-alone card communicatively coupled to chipset 505. The graphics and/or video processing techniques described herein may be implemented in various hardware architectures.
For example, graphics and/or video functionality may be integrated within a chipset. Alternatively, a discrete graphics and/or video processor may be used. As still another implementation, the graphics and/or video functions may be provided by a general-purpose processor, including a multicore processor. In further embodiments, the functions may be implemented in a consumer electronics device. Radio 518 may include one or more radios capable of transmitting and receiving signals using various suitable wireless communications techniques. Such techniques involve communications across one or more wireless networks. Example wireless networks include, but are not limited to, wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area networks (WMANs), cellular networks, and satellite networks. In communicating across such networks, radio 518 operates in accordance with one or more applicable standards in any version. In various implementations, display 520 includes any television-type monitor or display. Display 520 may include, for example, a computer display screen, touch-screen display, video monitor, television-like device, and/or a television. Display 520 can be digital and/or analog. In various implementations, display 520 may be a holographic display. In addition, display 520 may be a transparent surface that receives a visual projection. Such projections convey various forms of information, images, and/or objects. For example, such projections may be a visual overlay for a mobile augmented reality (MAR) application. Under the control of one or more software applications 516, platform 502 can display user interface 522 on display 520. In various implementations, content services device(s) 530 may be hosted by any national, international, and/or independent service and thus accessible to platform 502 via the Internet. Content services device(s) 530 may be coupled to platform 502 and/or to display 520.
Platform 502 and/or content services device(s) 530 may be coupled to a network 560 to communicate media information to and from the network 560. Content delivery device(s) 540 also may be coupled to platform 502 and/or to display 520. In various implementations, content services device(s) 530 include a cable television box, personal computer, network, telephone, Internet-enabled device, appliance capable of delivering digital information and/or content, and any other similar device capable of unidirectionally or bidirectionally communicating content between content providers and platform 502 and/or display 520, via network 560 or directly. The content can be communicated unidirectionally and/or bidirectionally to and from any one of the components in system 500 and a content provider via the network 560. Examples of content include any media information including, for example, video, music, medical and gaming information, and so forth. Content services device(s) 530 receive content such as cable television programming including media information, digital information, and/or other content. Examples of content providers include any cable or satellite television, radio, or Internet content providers. The provided examples are not meant to limit implementations in accordance with the present disclosure in any way. In various implementations, platform 502 may receive control signals from navigation controller 550 having one or more navigation features. The navigation features of controller 550 may be used to interact with user interface 522, for example. In some embodiments, navigation controller 550 may be a pointing device such as a computer hardware component, specifically a human interface device, that allows a user to input spatial (e.g., continuous and multidimensional) data into a computer.
Many systems, such as graphical user interfaces (GUIs), televisions, and monitors, allow the user to control and provide data to the computer or television using physical gestures. Movements of the navigation features of controller 550 can be replicated on a display (e.g., display 520) by movements of a pointer, cursor, focus ring, or other visual indicators displayed on the display. For example, under the control of software applications 516, the navigation features located on navigation controller 550 can be mapped to virtual navigation features displayed on user interface 522. In some embodiments, controller 550 may not be a separate component but may be integrated into platform 502 and/or display 520. The present disclosure, however, is not limited to the elements or to the context shown or described herein. In various implementations, drivers (not shown) include technology to enable users to instantly turn platform 502 on and off, like a television, with the touch of a button after initial boot-up, when enabled. Program logic allows platform 502 to stream content to media adaptors or other content services device(s) 530 or content delivery device(s) 540 even when the platform is turned off. In addition, chipset 505 includes hardware and/or software support for 5.1 surround sound audio and/or high-definition 5.1 surround sound audio, for example. Drivers may include a graphics driver for integrated graphics platforms. In some embodiments, the graphics driver may comprise a peripheral component interconnect (PCI) Express graphics card. In various implementations, any one or more of the components shown in system 500 can be integrated. For example, platform 502 and content services device(s) 530 can be integrated, or platform 502 and content delivery device(s) 540 can be integrated, or platform 502, content services device(s) 530, and content delivery device(s) 540 can be integrated. In various embodiments, platform 502 and display 520 can be an integrated unit.
Display 520 and content services device(s) 530 can be integrated, or display 520 and content delivery device(s) 540 can be integrated. These examples are not meant to limit the present disclosure. In various embodiments, system 500 can be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system 500 can include components and interfaces suitable for communicating over wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media includes portions of a wireless spectrum, such as the RF spectrum. When implemented as a wired system, system 500 can include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and the like. Examples of wired communications media can include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, coaxial cable, fiber optics, and others. Platform 502 can establish one or more logical or physical channels to communicate information. The information includes media information and control information. Media information refers to any data representing content meant for a user. Examples of content include data from a voice conversation, videoconference, streaming video, electronic mail ("e-mail") message, voice-mail message, alphanumeric symbols, graphics, image, video, text, and so on. Data from a voice conversation can be, for instance, speech information, silence periods, background noise, comfort noise, tones, and other similar items.
Control information refers to any data representing commands, instructions, or control words meant for an automated system. For example, control information can be used to route media information through a system or to instruct a node to process the media information in a predetermined manner. The embodiments, however, are not limited to the elements or to the context shown or described in Fig. 5. As described above, system 500 can be embodied in varying physical styles or form factors. Fig. 6 illustrates implementations of a small form-factor device 600 in which system 500 can be embodied. In embodiments, for example, device 600 can be implemented as a mobile computing device having wireless capabilities. A mobile computing device may refer to any device having a processing system and a mobile power source or supply, such as one or more batteries. Examples of a mobile computing device, in addition to those already mentioned, also may include computers that are arranged to be worn by a person, such as a wrist computer, finger computer, ring computer, eyeglass computer, belt-clip computer, arm-band computer, shoe computer, clothing computer, and other wearable computers. In various embodiments, a mobile computing device can be implemented as a smart phone capable of executing computer applications as well as voice communications and/or data communications. Although some embodiments are described with a mobile computing device, other embodiments can be implemented using other wireless mobile computing devices as well. The embodiments are not limited in this context. As shown in Fig. 6, device 600 includes a housing 602, a display 604, an I/O device 606, and an antenna 608. Device 600 also includes navigation features 612. Display 604 includes any suitable display unit for displaying information appropriate for a mobile computing device. I/O device 606 includes any suitable I/O device for entering information into a mobile computing device.
Examples of I/O device 606 include an alphanumeric keyboard, a numeric keypad, a touch pad, input keys, buttons, switches, rocker switches, microphones, speakers, a voice recognition device and software, and others. Information also can be entered into device 600 by way of a microphone (not shown). Such information is digitized by a voice recognition device (not shown). The embodiments are not limited in this context. Various embodiments can be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, etc.), integrated circuits, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
Determining whether an embodiment is implemented using hardware elements and/or software elements varies in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints. One or more aspects of at least one embodiment can be implemented by representative instructions, stored on a machine-readable medium, that represent various logic within the processor which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," can be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. While certain features set forth herein have been described with reference to various implementations, this description is not intended to be construed in a limiting sense. Hence, various modifications of the implementations described herein, as well as other implementations that are apparent to persons skilled in the art to which the present disclosure pertains, are deemed to lie within the scope of the present disclosure. Realizations in accordance with the present invention have been described in the context of particular embodiments. These embodiments are meant to be illustrative and not limiting. Many variations, modifications, additions, and improvements are possible. Accordingly, plural instances may be provided for components described herein as a single instance. Boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are demonstrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the claims that follow.
Finally, structures and functionality presented as discrete components in the various configurations may be implemented as a combined structure or component. These and other variations, modifications, additions, and improvements may fall within the scope of the invention as defined in the claims that follow.

ADDITIONAL AND ALTERNATIVE IMPLEMENTATION NOTES

In general, a mobile device is a small, hand-held, portable computing device that typically has a display screen and some user input mechanism (e.g., touch screen or keyboard). Often, such devices weigh less than two pounds. Often, they are equipped with wireless communications capabilities, such as Wi-Fi, Bluetooth, and cellular. Examples of implementations of a mobile device include a smartphone, a tablet computer, a feature phone, a personal digital assistant (PDA), a wireless-enabled wearable device, a laptop computer, a netbook computer, or other so-called handheld devices or computers. In the above description of exemplary implementations, for purposes of explanation, specific numbers, materials, configurations, and other details are set forth in order to better explain the present invention, as claimed. However, it will be apparent to one skilled in the art that the claimed invention may be practiced using details different from the exemplary ones described herein. In other instances, well-known features are omitted or simplified to clarify the description of the exemplary implementations. The inventor intends the described exemplary implementations to be primarily examples. The inventor does not intend these exemplary implementations to limit the scope of the appended claims. Rather, the inventor has contemplated that the claimed invention might also be embodied and implemented in other ways, in conjunction with other present or future technologies. Moreover, the word "exemplary" is used herein to mean serving as an example, instance, or illustration.
Any aspect or design described herein as exemplary is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word "exemplary" is intended to present concepts and techniques in a concrete fashion. The term "technology," for instance, may refer to one or more devices, apparatuses, systems, methods, articles of manufacture, and/or computer-readable instructions as indicated by the context described herein.

As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more," unless specified otherwise or clear from context to be directed to a singular form.

These processes are illustrated as a collection of blocks in a logical flow graph, which represents a sequence of operations that can be implemented in hardware, software, and/or firmware, or a combination thereof. In the context of software/firmware, the execution of the instructions stored on the medium may cause performance of the operations described herein.

Note that the order in which the processes are described is not intended to be construed as a limitation, and any number of the described process blocks can be combined in any order to implement the processes or an alternate process.

The term "computer-readable media" includes computer-storage media.
For example, computer-storage media may include, but are not limited to, magnetic storage devices (e.g., hard disk, floppy disk, and magnetic strips), optical disks (e.g., compact disk [CD] and digital versatile disk [DVD]), smart cards, flash memory devices (e.g., thumb drive, stick, key drive, and SD cards), and volatile and nonvolatile memory (e.g., random access memory [RAM], read-only memory [ROM]).

Examples provide a mobile device comprising: a location-awareness system configured to determine a location of the mobile device; and a URL-list-manager configured to: select one or more websites that are associated with the determined location; and generate a list of uniform resource locators ("URLs") to the one or more of the selected websites. In some examples the mobile device further comprises a contextualizer configured to determine contextual factors of the mobile device, the URL-list-manager being further configured to select based, at least in part, upon the determined contextual factors. In some examples the contextual factors are selected from a group consisting of mode of travel of a user of the mobile device, crowd-sourced ratings of websites, personal history of website usage at or near the determined location, crowd-sourced history of website usage at or near the determined location, identification of the type of the determined location, and identification of the type of event happening at the location. In some examples the URL-list-manager is further configured to designate a group of web pages to be part of at least one of the selected websites. In some examples the determined location of the mobile device is selected from a group consisting of a physical location, a geo-location, and a logical location. In some examples the location-awareness system is further configured to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS).
In some examples the location-awareness system is further configured to determine the location using, at least in part, location information obtained from one or more ambient identifiable wireless signal (IWS) sources. In some examples the mobile device further comprises: a display configured to present thereon a user interface to a user of the mobile device, the user interface offering the generated list of URLs to the one or more of the selected websites; and a user-input system operatively associated with the user interface, the user-input system being configured to obtain input from a user that indicates the user's choice of one or more of the selected websites to access.

Examples provide a method of management of lists of uniform resource locators (URLs) for a mobile device, the method comprising: determining a location of a mobile device; selecting one or more websites that are associated with the determined location; and generating a list of URLs to the one or more of the selected websites. In some examples the method further comprises determining contextual factors of the mobile device, wherein the selecting is based, at least in part, upon the determined contextual factors. In some examples the contextual factors are selected from a group consisting of mode of travel of a user of the mobile device, crowd-sourced ratings of websites, personal history of website usage at or near the determined location, crowd-sourced history of website usage at or near the determined location, identification of the type of the determined location, and identification of the type of event happening at the location. In some examples the method further comprises designating a group of web pages to be part of at least one of the selected websites. In some examples the determined location of the mobile device is selected from a group consisting of a physical location, a geo-location, and a logical location.
In some examples the determining of the location is based, at least in part, on geo-location information obtained from a global positioning system (GPS). In some examples the determining of the location is based, at least in part, on location information obtained from one or more ambient identifiable wireless signal (IWS) sources. In some examples the selecting includes: querying a database to find a list of websites that are associated with the determined location; and choosing one or more websites from the list of websites found by the query. In some examples the method further comprises accessing the database via a communications network. In some examples the database includes crowd-sourced information about websites. In some examples the database includes crowd-sourced information about websites, wherein such information is selected from a group consisting of usage at or near locations and user-supplied ratings.

Examples provide one or more computer-readable media with processor-executable instructions stored thereon which, when executed by one or more processors, cause performance of operations comprising: determining a location of a mobile device; determining contextual factors of the mobile device; selecting one or more websites that are associated with the determined location and with one or more determined contextual factors; and generating a list of uniform resource locators ("URLs") to the one or more of the selected websites. In some examples the contextual factors are selected from a group consisting of mode of travel of a user of the mobile device, crowd-sourced ratings of websites, personal history of website usage at or near the determined location, crowd-sourced history of website usage at or near the determined location, identification of the type of the determined location, and identification of the type of event happening at the location.
In some examples the operations further comprise designating a group of web pages to be part of at least one of the selected websites. In some examples the determined location of the mobile device is selected from a group consisting of a physical location, a geo-location, and a logical location.

Examples provide a method comprising: determining a location of a mobile device; determining contextual factors of the mobile device; tracking usage of one or more websites while at the determined location; generating an association between the determined location, the determined contextual factors, and the one or more tracked websites; and facilitating storage of the association in a database. In some examples the contextual factors are selected from a group consisting of mode of travel of a user of the mobile device, crowd-sourced ratings of websites, personal history of website usage at or near the determined location, personal history of website usage en route to the determined location, crowd-sourced history of website usage at or near the determined location, identification of the type of the determined location, and identification of the type of event happening at the location. In some examples the determining of the contextual factors includes determining usage of one or more websites of the mobile device while at or near the determined location.
In some examples the usage being determined for a particular website is selected from a group consisting of whether the particular website is used while at or near the determined location, how much or how long the particular website is used while at or near the determined location, whether the particular website is initiated while at or near the determined location, whether the particular website is active while at or near the determined location, whether the particular website is inactive while at or near the determined location, whether the particular website is deactivated while at or near the determined location, whether the particular website is installed while at or near the determined location, whether the particular website is uninstalled while at or near the determined location, and any combination thereof. In some examples the determined location of the mobile device is selected from a group consisting of a physical location, geo-location, and a logical location.
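The location-based URL-list generation described in the examples above (determine a location, select associated websites using crowd-sourced information such as ratings, and emit a list of URLs) can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation; the database contents, URLs, distance threshold, and rating cutoff are all assumptions introduced here for illustration.

```python
import math

# Hypothetical crowd-sourced database: each entry associates a website with
# a location (lat, lon) and a user-supplied rating. All names and values
# here are illustrative assumptions, not part of the disclosure.
WEBSITE_DB = [
    {"url": "https://example-museum.test", "lat": 47.606, "lon": -122.332, "rating": 4.6},
    {"url": "https://example-transit.test", "lat": 47.608, "lon": -122.335, "rating": 4.1},
    {"url": "https://example-far-away.test", "lat": 40.712, "lon": -74.006, "rating": 4.9},
]

def distance_km(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate for short "at or near" distances.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371.0 * math.hypot(x, y)

def generate_url_list(lat, lon, radius_km=1.0, min_rating=4.0):
    """Select websites associated with the determined location and
    generate a list of URLs, ranked by crowd-sourced rating."""
    nearby = [w for w in WEBSITE_DB
              if distance_km(lat, lon, w["lat"], w["lon"]) <= radius_km
              and w["rating"] >= min_rating]
    nearby.sort(key=lambda w: -w["rating"])
    return [w["url"] for w in nearby]

urls = generate_url_list(47.607, -122.333)
```

In a fuller sketch, the contextual factors recited above (mode of travel, usage history, event type) would feed additional filters alongside the distance and rating checks.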
Techniques are disclosed for forming transistor devices having source and drain regions with high concentrations of boron-doped germanium. In some embodiments, an in-situ boron-doped germanium layer, or alternatively a boron-doped silicon germanium layer capped with a heavily boron-doped germanium layer, is provided using selective epitaxial deposition in the source and drain regions and their corresponding tip regions. In some such cases, the germanium concentration can be, for example, in excess of 50 atomic % and up to 100 atomic %, and the boron concentration can be, for instance, in excess of 1E20 cm-3. A buffer providing graded germanium and/or boron concentrations can be used to better interface disparate layers. The high concentration of boron doped into the germanium at the epi-metal interface effectively lowers parasitic resistance without degrading tip abruptness. The techniques can be embodied, for instance, in planar or non-planar transistor devices.
1. A transistor device comprising:
a substrate having a channel region;
a gate electrode over the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region and a spacer is provided on a side of the gate electrode; and
a source region and a drain region disposed in respective cavities defined in the substrate adjacent to the channel region, each of the source region and the drain region including a tip region extending below at least one of the corresponding one of the spacers and the gate dielectric layer, wherein the source region and the drain region and the corresponding tip regions comprise a boron-doped germanium layer, the boron-doped germanium layer having:
a germanium concentration exceeding 50 atomic %; and
a boron concentration exceeding 1E20 cm-3.
2. The device of claim 1, further comprising a buffer portion between the substrate and the boron-doped germanium layer, wherein the buffer portion comprises a boron-doped silicon germanium layer, the boron-doped silicon germanium layer having:
a germanium concentration graded from a reference level concentration compatible with the substrate to a high concentration exceeding 95 atomic %; and
a boron concentration graded from a reference level concentration compatible with the substrate to a high concentration exceeding 1E20 cm-3.
3. The device of claim 2, wherein the high concentration reflects pure germanium.
4. The device of claim 1, wherein the boron-doped germanium layer has a bilayer structure including:
a boron-doped silicon germanium portion; and
a boron-doped germanium cap layer on the boron-doped silicon germanium portion.
5. The device of claim 4, wherein:
the germanium concentration of the boron-doped silicon germanium portion is graded from a reference level concentration compatible with the substrate to a high concentration exceeding 50 atomic %; and
the boron-doped germanium cap layer has a germanium concentration in excess of 95 atomic %.
6. The device of claim 4, wherein the boron concentration of the boron-doped silicon germanium portion is graded from a reference level concentration compatible with the substrate to a high concentration exceeding 1E20 cm-3.
7. The device of claim 4, wherein:
the boron-doped silicon germanium portion has a fixed germanium concentration; and
the device further comprises a buffer between the boron-doped silicon germanium portion and the boron-doped germanium cap layer, the buffer having:
a germanium concentration graded from a reference level concentration compatible with the boron-doped silicon germanium portion to a high concentration exceeding 50 atomic %; and
a boron concentration graded from a reference level compatible with the boron-doped silicon germanium portion to a high concentration exceeding 1E20 cm-3.
8. The device of claim 1, wherein the device is one of a planar or FinFET PMOS transistor.
9. A transistor device comprising:
a substrate having a channel region;
a gate electrode over the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region and a spacer is provided on a side of the gate electrode;
a source region and a drain region disposed in respective cavities defined in the substrate adjacent to the channel region, each of the source region and the drain region including a tip region extending below at least one of the corresponding one of the spacers and the gate dielectric layer, wherein the source region and the drain region and the respective tip regions comprise a boron-doped germanium layer, the boron-doped germanium layer having:
a germanium concentration exceeding 50 atomic %; and
a boron concentration in excess of 2E20 cm-3; and
a metal-germanide source contact and a metal-germanide drain contact.
10. The device of claim 9, further comprising a buffer portion between the substrate and the boron-doped germanium layer, the buffer portion having:
a germanium concentration graded from a reference level concentration compatible with the substrate to a high concentration exceeding 95 atomic %; and
a boron concentration graded from a reference level concentration compatible with the substrate to a high concentration exceeding 2E20 cm-3.
11. The device of claim 9, wherein the boron-doped germanium layer has a bilayer structure comprising:
a boron-doped silicon germanium portion; and
a boron-doped germanium cap layer on the boron-doped silicon germanium portion.
12. The device of claim 11, wherein:
the germanium concentration of the boron-doped silicon germanium portion is graded from a reference level concentration compatible with the substrate to a high concentration exceeding 50 atomic %; and
the boron-doped germanium cap layer has a germanium concentration in excess of 95 atomic %.
13. The device of claim 12, wherein the boron concentration of the boron-doped silicon germanium portion is graded from a reference level concentration compatible with the substrate to a high concentration exceeding 2E20 cm-3.
14. The device of claim 11, wherein:
the boron-doped silicon germanium portion has a fixed germanium concentration; and
the device further comprises a thin buffer portion between the boron-doped silicon germanium portion and the boron-doped germanium cap layer, the buffer portion having:
a germanium concentration graded from a reference level concentration compatible with the boron-doped silicon germanium portion to a high concentration exceeding 50 atomic %;
a boron concentration graded from a reference level compatible with the boron-doped silicon germanium portion to a high concentration exceeding 2E20 cm-3; and
a thickness less than 100 angstroms.
15. A method for forming a transistor device, comprising:
providing a substrate having a channel region;
providing a gate electrode over the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region and a spacer is provided on a side of the gate electrode; and
providing a source region and a drain region in respective cavities defined in the substrate adjacent to the channel region, each of the source region and the drain region including a tip region extending below at least one of the corresponding one of the spacers and the gate dielectric layer, wherein the source region and the drain region and the respective tip regions comprise a boron-doped germanium layer, the boron-doped germanium layer having:
a germanium concentration exceeding 50 atomic %; and
a boron concentration exceeding 1E20 cm-3.
16. The method of claim 15, further comprising:
providing a buffer between the substrate and the boron-doped germanium layer, the buffer having:
a germanium concentration graded from a reference level concentration compatible with the substrate to a high concentration exceeding 95 atomic %; and
a boron concentration graded from a reference level concentration compatible with the substrate to a high concentration exceeding 1E20 cm-3.
17. The method of claim 15, wherein the boron-doped germanium layer has a bilayer structure including:
a boron-doped silicon germanium portion; and
a boron-doped germanium cap layer on the boron-doped silicon germanium portion.
18. The method of claim 17, wherein:
the germanium concentration of the boron-doped silicon germanium portion is graded from a reference level concentration compatible with the substrate to a high concentration exceeding 50 atomic %; and
the boron-doped germanium cap layer has a germanium concentration in excess of 95 atomic %.
19. The method of claim 17, wherein the boron-doped silicon germanium portion has a fixed germanium concentration, the method further comprising:
providing a buffer between the boron-doped silicon germanium portion and the boron-doped germanium cap layer, the buffer having:
a germanium concentration graded from a reference level concentration compatible with the boron-doped silicon germanium portion to a high concentration exceeding 50 atomic %; and
a boron concentration graded from a reference level compatible with the boron-doped silicon germanium portion to a high concentration exceeding 1E20 cm-3.
20. The method of claim 19, wherein the boron concentration of the boron-doped silicon germanium portion is graded from a reference level concentration compatible with the substrate to a high concentration exceeding 1E20 cm-3.
TRANSISTORS WITH A HIGH CONCENTRATION OF BORON-DOPED GERMANIUM

This application is a divisional application of International Patent Application No. PCT/US2011/063813, filed December 7, 2011, which entered the Chinese national phase on June 21, 2013 under Chinese national application number 201180062124.7, entitled "Transistor with Boron Doped Germanium at High Concentration."

BACKGROUND

The improved performance of circuit devices, including transistors, diodes, resistors, capacitors, and other passive and active electronic devices formed on semiconductor substrates, is generally a major factor considered in the design, manufacture, and operation of these devices. For example, in the design and fabrication of metal oxide semiconductor (MOS) transistor semiconductor devices, such as those used in complementary metal oxide semiconductor (CMOS) circuits, it is often desirable to increase the movement of electrons in the channel region of the N-type MOS device (NMOS) and to increase the movement of positively charged holes in the channel region of the P-type MOS device (PMOS). Such increased drive current can be achieved in the transistor by reducing the device resistance.

One way to reduce the overall resistance of a MOS device is to dope the region between the source/drain region and the channel region, referred to as the tip region of the MOS device (or sometimes referred to as the source/drain extension). For example, a dopant may be implanted in the source/drain region and a subsequent anneal may be performed to diffuse the dopant toward the channel region. Because of the implantation and diffusion methods used, the ability to control the dopant concentration and position is limited. In addition, the size of other portions of the MOS device, such as its offset spacer thickness, may also have a significant impact on the location of the tip region.
All of these in turn affect the ability of the tip region to maximize the dopant concentration and to approach the channel region. Accordingly, there is a need for improved methods or structures that overcome the limitations of conventional tip regions.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A shows a conventional MOS device that includes source and drain tip regions formed using implantation and diffusion.

FIG. 1B shows a MOS device, configured in accordance with an embodiment of the present invention, that includes source and drain epitaxial tips.

FIG. 1C shows how spacer thickness can affect the etch of the epitaxial tip of a MOS device.

FIG. 1D is a graph showing the correlation of UC-to-UC distance and spacer thickness.

FIG. 2 illustrates a method of forming source and drain epitaxial tips in accordance with an embodiment of the present invention.

FIGS. 3A-3J illustrate structures formed when the method of FIG. 2 is performed in accordance with various embodiments of the present invention.

FIG. 4 shows a perspective view of a FinFET transistor architecture configured in accordance with one embodiment of the present invention.

FIG. 5 is a graph showing how the UC-to-UC distance for a MOS device formed in accordance with an embodiment of the present invention is less correlated with the spacer thickness.

FIG. 6A shows measured values of a Schottky barrier nickel germanide (NiGe) diode in accordance with some embodiments of the present invention, confirming that the work function of NiGe is about 85 mV from the valence band edge.

FIG. 6B depicts simulated data according to some embodiments of the present invention, showing that such a germanide material provides substantial Rext improvement over conventional SiGe source/drain PMOS devices.

DETAILED DESCRIPTION

Techniques for forming transistor devices having source and drain regions with high concentrations of boron-doped germanium are disclosed.
For example, these techniques can be used to extend self-aligned epitaxial tip (SET) transistors to achieve very near the theoretical limit of uniaxial strain. In some embodiments, this is achieved with in-situ boron-doped germanium provided by selective epitaxial deposition in the source and drain regions and their corresponding tip regions. In other embodiments, selective epitaxial deposition is used to form, in the source/drain and corresponding tip regions, a bilayer structure of boron-doped silicon germanium capped with a heavily boron-doped germanium layer. In such cases, the germanium concentration may be, for example, in the range of 20 atomic % to 100 atomic %, and the boron concentration may be, for example, in the range of 1E20 cm-3 to 2E21 cm-3 (for example, a germanium concentration exceeding 50 atomic % and a boron concentration exceeding 2E20 cm-3). An optional thin buffer with graded germanium and/or boron concentration can be used as an interfacial layer between the boron-doped germanium layer and one or more underlying substrate materials. Similarly, in a bilayer structure, a thin buffer with graded germanium and/or boron concentration can be used as an interfacial layer between the silicon germanium layer and the boron-doped germanium cap layer. In other embodiments, the boron-doped germanium or silicon germanium layer itself may have a graded germanium and/or boron concentration, in a manner similar to the optional buffer. In any such case, a high concentration of boron can be doped into the germanium because boron diffusion is inhibited in germanium (the higher the germanium concentration, the greater the inhibition), which in turn results in lower parasitic resistance without degrading tip abruptness. In addition, the contact resistance is reduced due to the reduced Schottky barrier height.
These techniques can, for example, be embodied in planar or non-planar FinFET transistor devices.

OVERVIEW

It is well known that metal oxide semiconductor (MOS) transistors can include source and drain tip regions that are designed to reduce the overall resistance of the transistor while improving short channel effects (SCE). Traditionally, these tip regions are portions of the substrate into which a dopant, such as boron or carbon, is introduced using implantation and diffusion techniques. A source tip region is formed in the region between the source region and the channel region. Similarly, a drain tip region is formed in the region between the drain region and the channel region. The tip regions obtained by this conventional process minimally underdiffuse the gate dielectric layer of the transistor.

More specifically, FIG. 1A shows a conventional MOS transistor 100A formed on a substrate 102. The source region 110 and drain region 112 are typically formed by implanting a dopant such as boron into the substrate, or by etching the substrate followed by epitaxial deposition of a silicon or silicon germanium material having a germanium concentration in the range of 10 to 40 atomic %. A gate stack 122 is formed over the channel region 120 of the transistor 100A. As further shown, the gate stack 122 includes the gate dielectric layer 106 and the gate electrode 104, and spacers 108 are formed adjacent to the gate stack 122. In some example cases, and depending on the technology node, the spacers 108 create a distance of between about 10 and 20 nanometers (nm) between the edge of the gate dielectric layer 106 and the edge of each of the source and drain regions 110/112. A source tip region 110A and a drain tip region 112A are formed within this interval. As shown, the implant-diffusion-based tip regions 110A/112A overlap the spacers 108 and may also overlap or underdiffuse the gate dielectric layer 106 by a distance of less than 10 nm.
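The limitation of the implant-and-diffusion tip formation described above can be illustrated with a simplified one-dimensional constant-source diffusion model, C(x) = Cs * erfc(x / L), where L = 2*sqrt(D*t) is the diffusion length. The numeric values below are illustrative assumptions introduced here, not values from the disclosure.

```python
import math

def erfc_profile(surface_conc, depth_nm, diff_len_nm):
    """Constant-source diffusion profile: C(x) = C_s * erfc(x / L),
    where L = 2*sqrt(D*t) is the diffusion length in nm."""
    return surface_conc * math.erfc(depth_nm / diff_len_nm)

# Illustrative values only: a boron source at 1e20 cm^-3 and a 5 nm
# diffusion length after anneal.
Cs, L = 1e20, 5.0
near_source = erfc_profile(Cs, 1.0, L)    # close to the source/drain edge
near_channel = erfc_profile(Cs, 10.0, L)  # farther in, toward the channel
# The concentration falls steeply toward the channel, which is why a high
# tip concentration near the channel cannot be achieved without also
# driving dopant into the channel itself.
```

The steep roll-off of the erfc profile is the quantitative face of the trade-off discussed in the surrounding text: pushing the high-concentration region closer to the channel necessarily pushes dopant into it.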
Dopants such as boron or carbon are implanted into the source region 110 and the drain region 112 during formation of the implant-diffusion-based tip regions 110A/112A. The transistor 100A is then annealed so that the dopant diffuses toward the channel region 120. Angled ion implantation techniques may also be used to further implant dopants into the regions between the gate dielectric layer 106 and the source/drain regions 110/112. Unfortunately, factors such as the shape of the tip regions 110A/112A, the distance the dopant penetrates beneath the spacers 108, and the concentration gradient of the tip regions 110A/112A depend on the diffusion characteristics of the dopant in the substrate material. For example, the concentration of the tip region will be higher near the source/drain regions 110/112 and lower near the channel region 120. Although highly desirable, it is almost impossible to make the dopant concentration extremely high in the vicinity of the channel region 120 without driving dopant into the channel region 120. Moreover, the source and drain regions 110/112 cannot be moved closer to the channel region 120, because dopants would similarly be driven into the channel region 120. This limits how close to the channel region 120 the source and drain regions 110/112 can be formed, thereby constraining gate length scaling.

FIG. 1B shows an exemplary MOS device 100B that includes source and drain epitaxial tips (commonly referred to herein as epi-tips) configured in accordance with an embodiment of the present invention. More specifically, the MOS transistor 100B uses an undercut etch to allow the source and drain regions 110 and 112 to extend under the spacers 108 and, in some cases, under the gate dielectric layer 106.
The portions of the source/drain regions 110/112 that extend under the spacers 108 (and possibly under the gate dielectric layer 106) are referred to herein as the source epitaxial tip 110B and the drain epitaxial tip 112B, respectively. The source and drain epitaxial tips 110B/112B replace the implant/diffusion-based tip regions 110A/112A described with respect to FIG. 1A.

According to an embodiment of the present invention, and as shown in FIG. 1B, the substrate 102 can be etched with an undercut of the spacers 108 (and possibly of the gate dielectric layer 106); selective epitaxial deposition is then used to provide in-situ boron-doped germanium, or boron-doped silicon germanium (SiGe) capped with heavily boron-doped germanium, to fill the source/drain regions 110/112 and the source/drain epitaxial tips 110B/112B. Note that the epitaxial fill may be raised with respect to the surface of the substrate 102, as further shown in FIG. 1B.

According to some embodiments of the invention, a graded buffer may be used at one or more locations in the device structure, depending on factors such as the substrate composition and the degree to which mismatch dislocations between different layers of the device structure are to be inhibited. For example, the substrate 102 may be a silicon substrate, a silicon-on-insulator (SOI) substrate, or a multilayer substrate including silicon, silicon germanium, germanium, and/or III-V compound semiconductors. Thus, illustratively, in embodiments having a silicon or silicon germanium substrate 102 and in-situ boron-doped germanium filling the source/drain regions 110/112 and the source/drain epitaxial tips 110B/112B, a buffer may be provided between the underlying substrate 102 and the upper boron-doped germanium layer.
In this embodiment, the buffer may be a graded boron-doped (or intrinsic) silicon germanium layer having a germanium concentration graded from a base level concentration compatible with the underlying silicon or silicon germanium substrate up to 100 atomic % (or close to 100 atomic %, such as over 90 atomic %, 95 atomic %, or 98 atomic %). In one particular such embodiment, the germanium concentration is graded from less than or equal to 40 atomic % to more than 98 atomic %. The boron concentration in this buffer may, for example, be fixed at a high level, or may be graded, for example, from a base concentration at or compatible with the underlying substrate to a desired high concentration (e.g., over 1E20 cm-3 or 5E20 cm-3). Note that compatibility as used herein does not necessarily require that the concentration levels overlap (for example, the germanium concentration of the underlying substrate may be 0 to 20 atomic % and the initial germanium concentration of the buffer may be 30 to 40 atomic %). In addition, the term "fixed" as used herein with respect to a concentration level is intended to mean a relatively constant concentration level (e.g., the lowest concentration level in a layer is within 10% of the highest concentration level in that layer). In a more general sense, a fixed concentration level is intended to mean the absence of an intentionally graded concentration level. The thickness of the buffer may vary depending on factors such as the concentration range of the buffer, but in some embodiments the thickness of the buffer is in the range of 30 to 120 angstroms. It will be appreciated in light of the present disclosure that such graded buffers beneficially reduce the Schottky barrier height.

Alternatively, rather than using a thin buffer between the underlying substrate 102 and the upper boron-doped germanium layer, the boron-doped germanium layer itself may be graded in a similar manner.
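A graded buffer of the kind described above can be sketched numerically. This is an illustrative sketch only: the grading need not be linear in practice, and the endpoint values below are assumptions chosen from within the ranges stated in the text.

```python
def graded_profile(start, end, thickness_a, step_a=10):
    """Return concentration samples linearly graded from `start` to `end`
    across a buffer of thickness `thickness_a` (angstroms), with one
    sample every `step_a` angstroms."""
    n = max(1, thickness_a // step_a)
    return [start + (end - start) * i / n for i in range(n + 1)]

# Illustrative: germanium graded from a substrate-compatible 40 atomic %
# up to 100 atomic % across a 100 A buffer, and boron graded from
# 5e19 cm^-3 up to 2e20 cm^-3 across the same buffer.
ge_profile = graded_profile(40.0, 100.0, 100)
boron_profile = graded_profile(5e19, 2e20, 100)
```

The monotonically increasing germanium fraction is what lets the buffer bridge the lattice mismatch between the substrate-compatible reference level and the nearly pure germanium above it.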
For example, according to one exemplary embodiment, the boron-doped germanium layer may be configured with a germanium concentration graded from a reference level concentration compatible with the underlying substrate (eg, in the range of 30 to 70 atomic%) up to 100 atomic%. In some such embodiments, the boron concentration within the boron-doped germanium layer may range, for example, from a reference concentration at or compatible with the underlying substrate to a desired high concentration (eg, over 1E20 cm-3). In other embodiments having a silicon or silicon germanium substrate 102, and a bilayer structure of in-situ boron-doped SiGe capped with a boron-doped germanium cap layer filling the source/drain regions 110/112 and the source/drain epitaxial tips 110B/112B, a buffer may be provided between the boron-doped SiGe layer and the upper boron-doped germanium cap layer. In one such embodiment, the boron-doped SiGe layer has a fixed germanium concentration (eg, in the range of 30 to 70 atomic%), and the buffer may be a thin SiGe layer having a germanium concentration graded from a reference level concentration compatible with the underlying boron-doped SiGe layer up to 100 atomic% (or near 100 atomic%, such as over 90 atomic%, 95 atomic%, or 98 atomic%). In some such cases, the boron concentration within the buffer may be fixed at a high level, or may range, for example, from a reference concentration at or compatible with the underlying SiGe layer to a desired high concentration (eg, greater than 1E20 cm-3, 2E20 cm-3, 3E20 cm-3, 4E20 cm-3, or 5E20 cm-3). Alternatively, rather than using a thin buffer between the two layers of the bilayer structure, the boron-doped SiGe layer may itself be graded in a similar manner.
For example, according to one exemplary embodiment, the boron-doped SiGe layer may be configured with a germanium concentration graded from a reference level concentration (eg, in the range of 30 to 70 atomic%) up to 100 atomic% (or nearly 100 atomic%). The boron concentration in the boron-doped SiGe layer can be fixed at a high level, for example, or can range, for example, from a reference concentration at or compatible with the underlying substrate to a desired high concentration (eg, more than 1E20 cm-3). Thus, a SET architecture for planar and non-planar (FinFET) transistor devices is provided. Devices may be formed, in part, using conventional processes, such as a dummy gate oxide, a thin spacer, and an isotropic undercut etch (or an ammonia etch to form faceted fin recesses in a single-crystal substrate, or other suitable etch to form fin recesses). According to some embodiments, selective epitaxial deposition may then be used to provide in-situ boron-doped germanium or, alternatively, a fully-strained boron-doped silicon germanium layer capped with heavily boron-doped pure germanium, to form the tip and source/drain regions. Optional buffers may be used, as previously explained. With such an embodiment, no P-type source and drain (PSD) implantation or high-temperature diffusion-based anneal is required, because the boron is fully active at the time of deposition. Any suitable high-k replacement metal gate (RMG) process flow can also be used, in which a high-k dielectric replaces the dummy gate oxide. For example, silicidation with nickel, nickel-platinum, or titanium, with or without prior amorphization of the germanium, may be used to form low-resistance germanides. As previously explained, this embodiment extends the SET transistor device architecture to achieve (almost) the theoretical limit of uniaxial strain.
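To make the grading described above concrete, the following sketch computes a germanium concentration profile through a graded buffer or graded layer. The linear ramp shape, the 8 nm thickness, and the endpoint values are illustrative assumptions only; the text specifies just the base and top concentration levels, not the grading function.

```python
def ge_fraction(depth_nm, thickness_nm=8.0, base_at=40.0, top_at=98.0):
    """Germanium concentration (atomic %) at a given depth into a graded
    layer, assuming a simple linear ramp from a substrate-compatible base
    level up to a near-pure-germanium top level (values are illustrative)."""
    if not 0.0 <= depth_nm <= thickness_nm:
        raise ValueError("depth outside the graded layer")
    return base_at + (top_at - base_at) * depth_nm / thickness_nm

# Sample the profile at the bottom, middle, and top of an assumed 8 nm buffer.
profile = [round(ge_fraction(d), 1) for d in (0.0, 4.0, 8.0)]
```

A "fixed" concentration, in the document's sense, would simply correspond to `base_at == top_at` (to within the stated 10% tolerance).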
The techniques provided herein are applicable to benefit any technology node (eg, 90 nm, 65 nm, 45 nm, 32 nm, 22 nm, 14 nm, and 10 nm transistors, and smaller), and the claimed invention is not intended to be limited to any particular such node or range of geometric dimensions. Other advantages will be apparent in light of the present disclosure. For example, note that source and drain epitaxial tips 110B/112B configured in accordance with embodiments of the present invention may be formed in the same process as the source and drain regions 110/112, which reduces process time. In addition, unlike traditional implantation/diffusion-based tip regions, the lattice parameters of the source/drain epitaxial tips 110B/112B configured in accordance with embodiments of the invention cause strain in the channel region 120, which increases hole mobility and thus reduces the resistance in the channel. Another advantage of the SET architecture configured in accordance with some embodiments of the present invention is that the interface between the source/drain epitaxial tips 110B and 112B and the substrate material 102 forming the channel region 120 is abrupt. For example, on one side of the interface is epitaxially deposited boron-doped germanium (B:Ge) material (eg, having a boron concentration of over 2E20 cm-3 or 5E20 cm-3), and on the other side of the interface is the substrate material that forms the channel region 120 (eg, silicon germanium, or other suitable substrate material). This structure enables the epitaxial source/drain tips 110B/112B to bring the heavily boron-doped, high-germanium-concentration material into close proximity to the channel region 120.
Boron in the epitaxial source/drain tips 110B/112B remains substantially or completely within the epitaxial tips, without tending to diffuse into the channel region 120. Conventional methods that might be used to form the source and drain epitaxial tips 110B/112B present problems that should be considered. In particular, with reference to FIGS. 1B and 1C, conventional undercut etching techniques may result in the undercut regions forming a bullet-shaped profile. In this case, more substrate material is etched slightly below the gate dielectric layer 106 than immediately adjacent to the gate dielectric layer 106. As such, the source epitaxial tip 110B and drain epitaxial tip 112B each conform to the bullet-shaped profile, which creates non-optimal strain in the channel region 120. Moreover, variations in conventional undercut etching techniques translate into variations in the resulting source and drain epitaxial tips 110B/112B. Another problem with conventional methods of forming the source and drain epitaxial tips 110B/112B involves the effect of spacer thickness on the undercut etch, as shown in FIGS. 1B and 1C. Referring to FIG. 1B, the MOS transistor 100B is shown with offset spacers 108 having a first thickness x1. A substrate etch is performed, which undercuts the spacers 108 and a portion of the gate dielectric layer 106 to enable the formation of the source and drain epitaxial tips 110B/112B. An undercut-to-undercut (UC to UC) distance 114 separates the source epitaxial tip 110B from the drain epitaxial tip 112B. Referring to FIG. 1C, the MOS transistor 100C is shown with offset spacers 108 having a thickness x2. Here, the thickness x2 is much larger than the thickness x1 of the spacers 108 of FIG. 1B. As a result, the thicker spacers 108 push the undercut etch outward, causing the source/drain epitaxial tips 110B/112B to be formed further away from the channel region 120 of the transistor 100C when the substrate etch is performed.
The substrate is etched such that less surface area under the MOS transistor 100C is undercut. Therefore, the UC to UC distance 116 of the MOS transistor 100C is much larger than the UC to UC distance 114 of the MOS transistor 100B. Varying the UC to UC distance in this way results in large drive current variation in the MOS transistor. FIG. 1D is a graph showing how spacer thickness affects the UC to UC distance in devices formed using known methods. The graph provides data, represented by line 118, showing that as the spacer thickness increases, the UC to UC distance also increases, resulting in large drive current variation. In general, for every nanometer increase in spacer thickness, the UC to UC distance increases by about 2 nm. In this sense, using conventional methods to form the source/drain epitaxial tips allows the thickness of the offset spacers to have a significant impact on the performance of the MOS device, at least in some cases. As will be appreciated in light of the present disclosure, some embodiments of the present invention provide methods of forming self-aligned, epitaxially deposited source and drain tips that address such issues.
Architecture and Methods
FIG. 2 illustrates a method 200 of constructing a MOS transistor having self-aligned source and drain epitaxial tips according to an embodiment of the present invention. FIGS. 3A-3J illustrate example structures formed as the method 200 is performed, in accordance with some embodiments. As shown, the method 200 begins with providing 202 a semiconductor substrate upon which a MOS device, such as a PMOS transistor, can be formed. For example, the semiconductor substrate can be implemented as bulk silicon or silicon-on-insulator.
In other implementations, the semiconductor substrate may be formed using alternative materials that may or may not incorporate silicon, such as germanium, silicon germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, or gallium antimonide. In a more general sense, according to embodiments of the invention, any material that may serve as a foundation upon which semiconductor devices may be built can be used. The method 200 continues with forming 204 a gate stack on the semiconductor substrate. The gate stack may be formed as conventionally done, or using any suitable custom technique. In some embodiments of the present invention, the gate stack may be formed by depositing and then patterning a gate dielectric layer and a gate electrode layer. For example, in one exemplary case, the gate dielectric layer may be blanket deposited onto the semiconductor substrate using a conventional deposition process such as chemical vapor deposition (CVD), atomic layer deposition (ALD), spin-on deposition (SOD), or physical vapor deposition (PVD). Alternative deposition techniques may also be used; for instance, the gate dielectric layer may be thermally grown. The gate dielectric material may be formed, for example, from a material such as silicon oxide or a high-k dielectric material. Examples of high-k gate dielectric materials include, for instance, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some specific exemplary embodiments, the thickness of the high-k gate dielectric layer may be selected from a range appropriate to the device design.
In general, the thickness of the gate dielectric layer should be sufficient to electrically isolate the gate electrode from the adjacent source and drain contacts. In further embodiments, additional processing, such as annealing, may be performed on the high-k gate dielectric layer to improve the quality of the high-k material. Next, a similar deposition technique, such as ALD, CVD, or PVD, may be used to deposit the gate electrode material on the gate dielectric layer. In some such specific embodiments, the gate electrode material is a polysilicon or metal layer, although other suitable gate electrode materials may also be used. The gate electrode material is typically a sacrificial material, which is later removed in a replacement metal gate (RMG) process. A conventional patterning process may then be carried out to etch away portions of the gate electrode layer and the gate dielectric layer to form the gate stack, as shown in FIG. 3A. FIG. 3A illustrates a substrate 300 upon which a gate stack has been formed. As can be seen in this exemplary embodiment, the gate stack includes a gate dielectric layer 302 (which may be a high-k gate dielectric material) and a sacrificial gate electrode 304. In one particular exemplary case, the gate stack includes a silicon oxide gate dielectric layer 302 and a polysilicon gate electrode 304. The gate stack may also include a gate hard mask layer 306 that provides certain benefits or uses during processing, such as protecting the gate electrode 304 from subsequent ion implantation processes. The hard mask layer 306 may be formed using typical hard mask materials, such as silicon oxide, silicon nitride, and/or conventional dielectric materials. With further reference to FIG. 2, after forming the gate stack, the method 200 continues with an ion implantation process in which a dopant is implanted 206 into the substrate to highly dope the portions of the substrate adjacent to the gate stack.
The dopant used in the ion implantation process may be selected, for example, based on its ability to increase the etch rate of the substrate material into which it is implanted; the particular dopant selected for the ion implantation process may thus depend on the substrate material and on the etchant used in the subsequent etching process. Specific dopants that can be selected to increase the etch rate of the substrate include, for example, carbon, phosphorus, and arsenic. For example, carbon may be used at a dose in the range of 1E14 to 1E16 atoms/cm2, using an implantation energy of between 5 and 15 keV. Phosphorus may be used at a dose in the range of 1E14 to 5E15 atoms/cm2, using an implantation energy of between 1 and 5 keV. Arsenic may be used at a dose in the range of 1E14 to 5E15 atoms/cm2, using an implantation energy of between 2 and 5 keV. Other suitable dopants and dosing schemes will be apparent in light of the present disclosure. In some embodiments, the ion implantation is performed substantially in the vertical direction (ie, perpendicular to the substrate surface), while in other embodiments at least a portion of the ion implantation process is performed at an angle so that ions are implanted under the gate stack. Note that the hard mask 306 can be used to prevent doping of the gate electrode 304 material. Next, the method 200 continues with annealing 207 the substrate to drive the dopants further into the substrate and to reduce any damage sustained by the substrate during the ion implantation process. In some embodiments, the implant 206 and subsequent anneal 207 can drive the ions to a substrate depth of, for example, between 2 nm and 20 nm. The anneal 207 may be performed, for example, at a temperature between 700°C and 1100°C for a duration of 60 seconds or less (eg, 5 seconds).
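For reference, the implant and anneal windows quoted above can be collected into a small lookup structure. This is a bookkeeping sketch only: the record layout, key names, and the `in_window` helper are invented for illustration and are not part of the process description.

```python
# Dose ranges and implant energies (keV) quoted above for each etch-rate-
# enhancing dopant, plus the anneal window; structure is illustrative only.
IMPLANT_WINDOWS = {
    "carbon":     {"dose": (1e14, 1e16), "energy_keV": (5, 15)},
    "phosphorus": {"dose": (1e14, 5e15), "energy_keV": (1, 5)},
    "arsenic":    {"dose": (1e14, 5e15), "energy_keV": (2, 5)},
}
ANNEAL_WINDOW = {"temp_C": (700, 1100), "time_s": (0, 60)}

def in_window(value, lo_hi):
    """Check that a proposed process value lies inside a quoted range."""
    lo, hi = lo_hi
    return lo <= value <= hi

# Example: a carbon implant at 2e15 dose / 10 keV with a 1000 C anneal.
ok = (in_window(2e15, IMPLANT_WINDOWS["carbon"]["dose"])
      and in_window(10, IMPLANT_WINDOWS["carbon"]["energy_keV"])
      and in_window(1000, ANNEAL_WINDOW["temp_C"]))
```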
It will be understood that the annealing temperature and duration can vary from one embodiment to the next, depending on factors such as the diffusion rate, the substrate material, the dopant used, and the desired final dopant concentration. FIG. 3B shows the substrate 300 after the ion implantation and diffusion processes. As shown in this exemplary embodiment, the ion implantation process creates two doped regions 308 adjacent to the gate dielectric layer 302 of the MOS transistor being formed. When exposed to a suitable etchant, the etch rate of the doped regions 308 may be higher than the etch rate of the surrounding substrate material. One doped region 308 will become part of the source region, including its self-aligned epitaxial tip. The other doped region 308 will become part of the drain region, including its self-aligned epitaxial tip. In the illustrated exemplary embodiment, a portion of each doped region 308 lies below the gate dielectric layer 302. Note that the dimensions of the doped regions 308, including their depth, may vary based on the requirements of the MOS transistor being formed. Next, the method 200 continues with forming 208 spacers on either side of the gate stack. The spacers may be formed, for example, using conventional materials such as silicon oxide, silicon nitride, or other suitable spacer materials. The width of the spacers can generally be chosen based on the design requirements of the MOS transistor being formed. However, in accordance with some embodiments, the width of the spacers is not subject to design constraints imposed by the formation of the source and drain epitaxial tips. FIG. 3C shows the substrate 300 having spacers 310 formed on either side of the gate electrode layer 304 and the gate dielectric layer 302, according to an example embodiment. With further reference to FIG. 2, the method 200 continues with dry etching 210 the doped regions of the substrate to form cavities in which the source/drain regions, including their respective epitaxial tips, may be formed.
As best appreciated with reference to FIG. 3D, the etched cavities are generally adjacent to the gate stack, with the epitaxial tip regions effectively being extensions of the source/drain cavity regions. In some exemplary embodiments, the etched cavities may be formed to a depth of between 50 nm and 1500 nm, which may be deeper than the doped regions. In a more general sense, the etch depth can be set as desired based on the target performance of the MOS device. In some embodiments, the dry etch process uses an etchant recipe complementary to the dopant used in the ion implantation process so as to increase the etch rate of the doped regions, allowing substrate material to be removed from the doped regions at a faster rate than from the rest of the substrate 300. In some embodiments, this includes the portions of the doped regions that undercut the spacers 310 and the gate dielectric layer 302, thereby defining the self-aligned tip architecture of the transistor. Increasing the etch rate of the doped regions enables the source and drain tip cavities to be etched with UC to UC distances that are substantially unaffected by factors such as spacer thickness, variations in the dry etch process, and other process variations, while still allowing the cavities to undercut the spacers 310 and the gate dielectric layer 302. According to some embodiments, the dry etch process may use a chlorine-based chemistry carried out in a plasma reactor. In some particular such embodiments, the etchant recipe may include a combination of NF3 and Cl2, with argon or helium used as a buffer or carrier gas. In some such embodiments, the flow rate of the reactive etchant species may vary, for example, between 50 and 200 standard cubic centimeters per minute (SCCM), while the flow rate of the carrier gas may vary, for example, between 150 and 400 SCCM.
According to some such embodiments, a high-energy plasma may be used at a power in the range of 700 W to 1100 W, for example, with a low RF bias of less than 100 W. According to some such embodiments, the reactor pressure may range from about 1 pascal (Pa) to about 2 Pa. In another particular exemplary embodiment, the etchant chemistry may include a combination of HBr and Cl2. In some such embodiments, the flow rate of the etchant species can vary, for example, from 40 SCCM to 100 SCCM. According to some such embodiments, a high-energy plasma may be used at a power in the range of about 600 W to about 1000 W, for example, with a low RF bias of less than 100 W, and the reactor pressure may be in the range of about 0.3 Pa to about 0.8 Pa. In another exemplary embodiment, the etchant chemistry may include a combination of Ar and Cl2. In some such embodiments, the flow rate of the etchant species can vary, for example, from 40 SCCM to 80 SCCM. According to some such embodiments, a medium-energy plasma may be used at a power in the range of about 400 W to about 800 W, for example, with a high RF bias of between about 100 W and 200 W, and the reactor pressure may be in the range of about 1 Pa to about 2 Pa. For each of these exemplary embodiments, the duration of the dry etch process may, for example, be up to 60 seconds per substrate, but can vary depending on factors such as the target etch depth and the etchant. As will be understood, such etch process parameters can vary. FIG. 3D shows the substrate 300 after the dry etch process has been performed, according to some embodiments of the present invention. As shown, a source region cavity 312 and a drain region cavity 314 are formed. In addition, a source tip cavity 312A and a drain tip cavity 314A are formed as extensions of the cavities 312 and 314, respectively, by the etching 210 of the doped regions as previously described.
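The three example dry-etch chemistries described above can be tabulated side by side for comparison. This is a non-authoritative summary sketch: the field names and the small helper are invented for illustration, while the numeric ranges are the ones quoted in the text.

```python
# Summary of the example etch chemistries described above; values are the
# quoted ranges, and the record layout itself is illustrative.
ETCH_RECIPES = [
    {"chemistry": "NF3/Cl2", "etchant_sccm": (50, 200), "carrier_sccm": (150, 400),
     "power_W": (700, 1100), "bias": "low (<100 W RF)", "pressure_Pa": (1.0, 2.0)},
    {"chemistry": "HBr/Cl2", "etchant_sccm": (40, 100), "carrier_sccm": None,
     "power_W": (600, 1000), "bias": "low (<100 W RF)", "pressure_Pa": (0.3, 0.8)},
    {"chemistry": "Ar/Cl2",  "etchant_sccm": (40, 80),  "carrier_sccm": None,
     "power_W": (400, 800),  "bias": "high (100-200 W RF)", "pressure_Pa": (1.0, 2.0)},
]

def lowest_pressure_recipe(recipes):
    """Return the chemistry whose quoted pressure window has the lowest floor."""
    return min(recipes, key=lambda r: r["pressure_Pa"][0])["chemistry"]
```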
Note that, because dopants and etchant recipes that increase the etch rate of the doped regions are used during the etching 210, the thickness of the spacers 310 has minimal impact on the etching of the source tip cavity 312A and the drain tip cavity 314A. After the dry etch process is complete, and with further reference to FIG. 2, the method of this exemplary embodiment continues with wet etching 212 to clean and further etch the source region cavity 312 and its source epitaxial tip cavity 312A, and the drain region cavity 314 and its drain epitaxial tip cavity 314A. The wet etch 212 may be performed using conventional or custom wet etch chemistries, which may be used to remove contaminants such as carbon, fluorine, chlorofluorocarbons, and oxides (such as silicon oxide) so as to provide a clean surface upon which subsequent processes can be carried out. In addition, assuming a monocrystalline silicon substrate, the wet etch 212 can also be used to remove a thin portion of the substrate along the <111> and <001> crystallographic planes to provide a smooth surface upon which a high-quality epitaxial deposition can occur. In some exemplary cases, the thin portion of the substrate that is etched can be up to 5 nm thick, for example, and its removal can also eliminate residual contaminants. As best seen in FIG. 3E, the wet etch 212 causes the edges of the source region cavity 312 and its epitaxial tip region 312A, and of the drain region cavity 314 and its epitaxial tip region 314A, to follow the <111> and <001> crystallographic planes. Also note that the source and drain epitaxial tip regions 312A and 314A do not have the bullet-shaped profile that occurs with conventional processing. After the wet etch process is complete, and with further reference to FIG. 2, the method 200 proceeds with epitaxial deposition 214, in the source/drain and corresponding tip cavities, of in-situ boron-doped germanium (in some cases with an intervening thin buffer), or of boron-doped silicon germanium capped with a heavily boron-doped germanium layer.
According to some embodiments, the epitaxial deposition fills the source and drain cavities, including their respective epitaxial tip regions, in a single process. A CVD process or other suitable deposition technique may be used for the deposition 214. For example, the deposition 214 may be carried out in a CVD reactor, an LPCVD reactor, or an ultra-high-vacuum CVD (UHVCVD) reactor. In some exemplary cases, the reactor temperature may be, for example, between 600°C and 800°C, and the reactor pressure may be, for example, between 1 and 760 Torr. The carrier gas may include, for example, hydrogen or helium at a suitable flow rate, such as between 10 and 50 SLM. In some particular embodiments, the deposition may be performed using a germanium source precursor gas such as GeH4 diluted in H2 (eg, the GeH4 may be diluted to 1-5%). For example, the diluted GeH4 can be used at a 1% concentration and at a flow rate in the range of 50 to 300 SCCM. For in-situ doping of boron, diluted B2H6 can be used (for example, B2H6 diluted to 1-5% in H2). For instance, the diluted B2H6 can be used at a 3% concentration and at a flow rate in the range of 10 to 100 SCCM. In some exemplary cases, an etchant can be added to increase the selectivity of the deposition. For example, HCl or Cl2 can be added at a flow rate in the range of 50 to 300 SCCM. In accordance with some example embodiments of the present invention, and as best shown in FIG. 3F, the source and drain region cavities 312/314, along with their respective tip regions 312A/314A, are filled with in-situ boron-doped germanium, such that the source region 318 (along with its source epitaxial tip 318A) and the drain region 320 (along with its drain epitaxial tip 320A) of the MOS transistor 316 are formed in the substrate 300. In some such embodiments, the boron-doped germanium has a boron concentration in excess of 5E20 cm-3, such as 2E21 cm-3 or higher.
The thickness of the deposited boron-doped germanium layer may, for example, range from 50 to 500 nm (eg, 120 nm) according to some particular embodiments, although other layer thicknesses will be apparent in light of the present disclosure. As previously explained, some such embodiments may include a thin buffer between the pure germanium layer and the substrate. For example, as may further be seen in the exemplary embodiment shown in FIG. 3F, a source buffer 313 and a drain buffer 315 are deposited prior to depositing the in-situ boron-doped germanium. In some such embodiments, the buffers 313 and 315 may be graded boron-doped silicon germanium layers, with a germanium content ranging from a reference level concentration compatible with the material of the underlying substrate 300 up to 100 atomic% (or nearly 100 atomic%, as previously described). The thickness of the buffers 313 and 315 will vary depending on factors such as the concentration range over which the buffers transition and the composition of the underlying substrate 300. In one exemplary embodiment having a silicon germanium substrate, the buffer thickness ranges from 2 nm to 10 nm, although other suitable thicknesses may be used. In a particular such embodiment, the boron concentration within the buffers 313 and 315 ranges, for example, from a reference concentration compatible with the underlying silicon germanium substrate to a desired concentration (eg, over 1E20 cm-3 and up to 2E21 cm-3), with two particular embodiments exceeding 2E20 cm-3 or 5E20 cm-3. In a more general sense, the boron concentration may be adjusted as necessary to provide the desired degree of electrical conductivity, as will be appreciated in light of the present disclosure. In accordance with other exemplary embodiments of the present invention, and as best illustrated in FIG.
3G, in-situ boron-doped silicon germanium is used to fill the source and drain region cavities 312/314, along with their respective tip regions 312A/314A, to form the source region 318 (along with its source epitaxial tip 318A) and the drain region 320 (along with its drain epitaxial tip 320A) of the MOS transistor 316 in the substrate 300. The boron-doped silicon germanium filler is then capped with a heavily boron-doped germanium layer, providing a source cap layer 317 and a drain cap layer 319. In some such bilayer-structure embodiments, the boron-doped silicon germanium filler, which may be epitaxially deposited in one or more layers, has a germanium concentration in the range of 30 to 70 atomic% or higher. As previously explained, the germanium concentration of the SiGe filler may be fixed, or may be graded so as to increase from a reference level (near the substrate 300) to a higher level (eg, over 50 atomic%, near the pure germanium cap layers 317/319). In some such embodiments, the boron concentration may exceed 1E20 cm-3, such as over 5E20 cm-3 or 2E21 cm-3, and may likewise be graded so as to increase from a reference level near the substrate 300 to a higher level (eg, over 1E20 cm-3, 2E20 cm-3, or 3E20 cm-3) near the cap layers 317/319.
In embodiments in which the germanium concentration of the boron-doped SiGe layer is fixed, a thin graded buffer can be used to better transition from the boron-doped SiGe layer to the boron-doped germanium cap layer, as previously explained. According to some particular embodiments, the thickness of the deposited boron-doped SiGe layer (or set of layers) 318/320 may, for example, be in the range of 50 to 250 nm (eg, 60 nm), and the pure germanium cap layers 317/319 may have a thickness in the range of, for example, 50 to 250 nm (eg, 50 nm), although alternative embodiments may have other layer and cap thicknesses, as will be apparent in light of this disclosure. In some embodiments, it is noted that cavities may be created under the spacers during a cyclic deposition-etch process, and these cavities may also be backfilled by the epitaxial capping layer (which may, for example, have the same composition as the boron-doped germanium cap layers 317/319). As will further be appreciated in light of the present disclosure, high germanium concentrations (eg, over 50 atomic%, up to pure germanium) and high boron concentrations (eg, over 1E20 cm-3) can be used to achieve much higher conductivity in the source and drain regions of PMOS SET transistor devices and in their corresponding tip regions. In addition, as previously explained, because boron diffusion is sufficiently inhibited by the pure germanium, no adverse short-channel-effect (SCE) degradation occurs in subsequent anneals, despite the higher boron concentration in the deposited stressor films. A reduction in the Schottky barrier height is also achieved by the higher germanium concentration at the interface. In some exemplary embodiments, germanium concentrations in excess of 95 atomic%, up to pure germanium (100 atomic%), may be used to achieve this benefit. As further shown in FIGS.
3F and 3G, unlike traditional source and drain tip regions formed by implantation and diffusion techniques, which therefore lack a sharp boundary between the tip region and the channel region, the self-aligned source and drain epitaxial tips of the MOS transistor 316 have abrupt boundaries. In other words, the interface between the source/drain epitaxial tips and the channel region is clear and unambiguous. On one side of the interface is a heavily boron-doped germanium layer (layers 318/320 of FIG. 3F, or cap layers 317/319 of FIG. 3G); on the other side of the interface is the substrate 300 material that constitutes the channel region. The boron in the source/drain epitaxial tips 318A/320A remains substantially or completely within the epitaxial tips and does not tend to diffuse into the channel region, so that the heavily boron-doped germanium material can be brought much closer to the channel region than is possible with conventional techniques. For example, in some particular embodiments, the source/drain epitaxial tips 318A/320A may undercut the gate dielectric layer 302 by more than 10 nm. This, in turn, makes it possible to reduce the gate length without having to shorten the channel region. Forming the source and drain epitaxial tips relatively close to the channel region also imposes greater hydrostatic stress on the channel. This stress increases the strain in the channel, thereby increasing carrier mobility in the channel and increasing the drive current. The stress can be further amplified by increasing the germanium concentration in the source and drain epitaxial tips.
This is an improvement over diffusion-based processes, in which the tip regions generally do not impart strain on the channel region. Once the source and drain regions have been filled in accordance with embodiments of the present invention, various conventional MOS processes may be carried out to complete construction of the MOS transistor 316, such as replacement gate oxide processes, replacement metal gate processes, annealing, and self-aligned silicide (salicide) processes, which may further modify the transistor 316 and/or provide the necessary electrical connections. For example, with further reference to FIG. 2, after the epitaxial deposition of the source/drain regions along with their respective tips, the method 200 may continue with depositing 216 an interlayer dielectric (ILD) over the transistor 316, followed by planarization of the ILD layer, as commonly done. The ILD layer may be formed using materials known for their applicability in dielectric layers of integrated circuit structures, such as low-k dielectric materials. Such dielectric materials include, for example, oxides such as silicon oxide (SiO2) and carbon-doped oxide (CDO), silicon nitride, organic polymers such as octafluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass. In some example configurations, the ILD layer may include pores or other voids to further reduce its dielectric constant. FIG. 3H illustrates an exemplary ILD layer 322 that has been deposited and then planarized down to the hard mask 306. Next, in some embodiments of the present invention that use a replacement metal gate process, the method 200 continues with removing 218 the gate stack (including the high-k gate dielectric layer 302, the sacrificial gate electrode 304, and the hard mask layer 306). In alternative implementations, only the sacrificial gate 304 is removed. FIG.
3I shows the trench opening formed upon etching away the gate stack in accordance with one such embodiment. If the gate dielectric layer is removed, the method can continue with depositing 220 a new gate dielectric layer into the trench opening. Any suitable high-k dielectric material, such as hafnium oxide, as previously described, may be used here, and the same deposition processes may also be used. Replacement of the gate dielectric layer may be used, for example, to address any damage that may have occurred to the original gate dielectric layer during the dry and wet etch processes, and/or to replace a low-k or sacrificial dielectric material with a high-k or otherwise desired gate dielectric material. The method 200 may then proceed with depositing 222 a metal gate electrode layer into the trench and onto the gate dielectric layer. Conventional metal deposition processes, such as CVD, ALD, PVD, electroless plating, or electroplating, can be used to form the metal gate electrode layer. The metal gate electrode layer may include, for example, P-type work function metals such as ruthenium, palladium, platinum, cobalt, and nickel, and conductive metal oxides such as ruthenium oxide. In some example structures, two or more metal gate electrode layers may be deposited. For example, a work function metal may be deposited, followed by a suitable metal gate electrode fill metal, such as aluminum. FIG. 3J illustrates an exemplary high-k dielectric layer 324 and metal gate electrode 326 that have been deposited into the trench opening, according to one embodiment. Metallization of the source and drain contacts can be carried out using a silicidation process, which generally includes depositing a contact metal and then annealing. For example, silicidation with nickel, aluminum, nickel-platinum, nickel-aluminum, or other alloys of nickel and aluminum, with or without a pre-amorphization germanium implant, can be used to form low-resistance germanides.
The boron-doped germanium epitaxial layers allow for the formation of metal germanides (e.g., nickel germanide). The germanide allows for a much lower Schottky barrier height and improved contact resistance (including Rext) compared to conventional metal silicide systems. For example, conventional transistors typically use a source/drain SiGe epitaxy process in which the germanium concentration is in the range of 30-40 atomic %. Limited by the epitaxial/silicide interface resistance, such a conventional system exhibits an Rext value of about 140 Ohm*um, which is high and will hinder gate pitch scaling in the future. Some embodiments of the present invention allow for a considerable improvement in Rext in PMOS devices (e.g., about a two-fold improvement, or an Rext of about 70 Ohm*um), which may better support PMOS device scaling. Thus, a transistor having source/drain regions configured with heavily boron-doped germanium according to an embodiment of the present invention may exhibit an Rext value of less than 100 Ohm*um, in some cases less than 90 Ohm*um, in some cases less than 80 Ohm*um, and in some cases less than 75 Ohm*um, where the interface between the source/drain epitaxial tip and the channel region has a boron concentration of more than 1E20 cm-3 and a germanium concentration of more than 50 at% and up to or near that of pure germanium (100 atomic %). Accordingly, self-aligned source and drain epitaxial tips are disclosed that, by virtue of increased amounts of boron-doped germanium (e.g., boron-doped germanium, or boron-doped silicon germanium with a germanium cap), decrease the overall MOS transistor resistance and increase the channel strain. In some such embodiments, the source and drain extension tips do not have a bullet-shaped profile, have an abrupt boundary between the channel region and the source and drain regions, and/or have a more controllable doping concentration, resulting in a more optimized source-drain profile.
In addition, according to some embodiments, source and drain epitaxial tips may be formed substantially independent of spacer thickness by selecting a suitable combination of dopant and etchant chemistries. This self-aligned process can therefore be used to improve performance where needed, while minimizing process variations.

FinFET structure

It is well known that FinFETs are transistors built around a thin strip of semiconductor material, commonly referred to as a fin. The transistor includes the standard field-effect transistor (FET) nodes, including a gate, a gate dielectric, a source region, and a drain region. The conductive channel of the device resides on the outer portions of the fin, beneath the gate dielectric. Specifically, current flows along both sidewalls of the fin (the sides perpendicular to the substrate surface) and along the top of the fin (the side parallel to the substrate surface). Because the conductive channel of such a structure lies substantially along three different outer planar regions of the fin, this FinFET design is sometimes referred to as a tri-gate FinFET. Other types of FinFET structures are also available, such as so-called dual-gate FinFETs, in which the conductive channel extends primarily along the two sidewalls of the fin (rather than along the top of the fin). FIG. 4 shows a perspective view of an exemplary tri-gate architecture configured in accordance with one embodiment of the present invention. As shown, the tri-gate device includes a substrate 400 having a semiconductor body or fin 260 (indicated by dashed lines) extending from the substrate 400 through the isolation regions 710, 720. A gate electrode 340 is formed on three surfaces of the fin 260 to form three gates. A hard mask 410 is formed on top of the gate electrode 340. Gate spacers 460, 470 are formed on the opposite sidewalls of the gate electrode 340.
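Because conduction in the tri-gate configuration occurs along both fin sidewalls and the fin top, the effective electrical width of the device is commonly approximated by the following standard textbook relation (included here for reference only; it is not taken from this disclosure):

```latex
\begin{equation}
W_{\mathrm{eff}} \approx 2H_{\mathrm{fin}} + W_{\mathrm{fin}},
\end{equation}
```

where $H_{\mathrm{fin}}$ is the fin height above the isolation regions and $W_{\mathrm{fin}}$ is the fin width. A dual-gate FinFET, which conducts only along the sidewalls, would instead have $W_{\mathrm{eff}} \approx 2H_{\mathrm{fin}}$.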
The source region includes an epitaxial region 531 formed on a recessed source interface 266 and on the opposing sidewalls of the fin 260 (i.e., the epitaxial region 531 extends over the recessed source interface 266 and the sidewalls of the fin 260). A cap layer 541 is deposited on the epitaxial region 531. In one embodiment, the isolation regions 710, 720 are shallow trench isolation (STI) regions formed by common techniques, such as etching the substrate 400 to form trenches and subsequently depositing oxide material into the trenches to form the STI regions. The isolation regions 710, 720 may be made of a known insulating material, such as SiO2. The preceding discussion of the substrate 102 is equally applicable here (e.g., the substrate 400 may be a silicon substrate, an SOI substrate, or a multi-layer substrate). As will be appreciated in light of the present disclosure, conventional processes and forming techniques can be used to fabricate the tri-gate transistor structure. According to one exemplary embodiment of the present invention, the dual-layer structure of the epitaxial region 531 and the cap layer 541 may be realized using in-situ boron-doped silicon germanium capped with heavily boron-doped germanium, with an optional buffer graded in germanium and/or boron concentration between the two layers. As previously explained, such a buffer may be used to transition from a reference level germanium/boron concentration compatible with the boron-doped SiGe deposited for the epitaxial region 531 in the recessed source interface 266 to the heavily boron-doped germanium cap layer 541. Alternatively, grading of the germanium and/or boron concentration may be achieved directly in the epitaxial region 531, rather than in an intervening graded buffer arrangement. As will be further appreciated, note that an alternative to the tri-gate structure is a dual-gate architecture, which includes a dielectric/isolation layer on the top of the fin 260. FIG.
5 is a graph illustrating the improvement that may be obtained by using self-aligned source and drain epitaxial tips configured in accordance with an example embodiment of the present invention. Line 500 represents data collected for a MOS device constructed using the techniques provided herein. As shown, the UC-to-UC distance is much less affected by the spacer thickness than in a device formed using conventional processes, the data for the latter being similarly represented by line 118. FIGS. 6A and 6B further demonstrate the improvements achieved by using self-aligned source and drain extension tips configured in accordance with one exemplary embodiment of the present invention. In particular, FIG. 6A shows Schottky barrier NiGe diode measurements (leakage current versus voltage) confirming that the nickel germanide work function is extremely p-type (roughly 85 mV above the Ge valence band). FIG. 6B depicts simulated data according to some embodiments of the present invention showing that the improvements in such germanide material and Schottky barrier height achieve a two-fold Rext improvement over conventional SiGe source/drain PMOS devices. It is well known that the Schottky barrier height is the rectifying barrier for electrical conduction across a semiconductor-metal junction. The magnitude of the Schottky barrier height reflects the mismatch between the potential energy at the Fermi level of the metal and the majority carrier band edge of the semiconductor across the semiconductor-metal interface.
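The mismatch just described can be made concrete in the ideal Schottky-Mott limit, a standard textbook relation included here for reference (it is not taken from this disclosure):

```latex
\begin{align}
\phi_{Bn} &= \phi_m - \chi_s, \\
\phi_{Bp} &= \frac{E_g}{q} - \phi_{Bn} = \chi_s + \frac{E_g}{q} - \phi_m,
\end{align}
```

where $\phi_m$ is the metal work function, $\chi_s$ is the semiconductor electron affinity, and $E_g$ is the semiconductor band gap. For a p-type contact, a metal whose Fermi level lies close to the valence band maximum (as reported above for nickel germanide on Ge, roughly 85 mV) yields a small hole barrier $\phi_{Bp}$ and hence a low contact resistance.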
For a p-type semiconductor-metal interface, the Schottky barrier height is the difference between the Fermi level of the metal and the valence band maximum of the semiconductor. As such, and as will be understood in light of the present disclosure, the various embodiments of the present invention provided herein may be used to address a number of transistor scaling issues, such as providing higher channel mobility with pitch and supply voltage (Vcc) scaling, providing reduced source/drain and contact resistance, providing improved channel abruptness, and providing a reduced barrier height between the self-aligned germanide and the source/drain to minimize total parasitic resistance, in both planar and non-planar architectures. Numerous embodiments will be apparent in light of the present disclosure. One exemplary embodiment of the present invention provides a transistor device. The device includes a substrate having a channel region. The device further includes a gate electrode over the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region, and a spacer is provided on a side of the gate electrode. The device further includes source and drain regions formed in the substrate and adjacent to the channel region, each of the source and drain regions including a tip region extending under the gate dielectric layer and/or a corresponding one of the spacers, wherein the source and drain regions comprise a boron-doped germanium layer having a germanium concentration exceeding 50 at% and a boron concentration exceeding 1E20 cm-3. In one such case, the device is one of a planar or FinFET PMOS transistor. In another such case, the device may include metal-germanide source and drain contacts. In another such case, the device may include an interlayer dielectric on the source and drain regions. In another such case, the device may include a buffer between the substrate and the boron-doped germanium layer.
In one such specific case, the buffer has a germanium concentration graded from a substrate-compatible reference level concentration to over 95 at%. In another such particular case, the buffer has a boron concentration that is graded from a substrate-compatible reference level to a high concentration of more than 1E20 cm-3. In another particular embodiment, the boron-doped germanium layer has a double-layer structure including a boron-doped silicon germanium portion and a boron-doped germanium cap layer thereon. In one such particular case, the boron-doped silicon germanium portion has a germanium concentration graded from a substrate-compatible reference level concentration to a high concentration of more than 50 at%, the boron-doped germanium cap layer having a germanium concentration of more than 95 at%. In another such particular case, the boron-doped silicon germanium portion has a boron concentration that is graded from a substrate-compatible reference level to a high concentration of more than 1E20 cm-3. In another such particular case, the boron-doped silicon germanium portion has a fixed germanium concentration, and the device further comprises a buffer portion between the boron-doped silicon germanium portion and the boron-doped germanium cap layer, the buffer portion having a germanium concentration graded from a reference level concentration compatible with the boron-doped silicon germanium portion to a high concentration of more than 50 atomic %, and a boron concentration graded from a reference level concentration compatible with the boron-doped silicon germanium portion to a high concentration of over 1E20 cm-3. In another particular case, the transistor has an Rext value of less than 100 Ohm*um (such as Rext = 70 Ohm*um, +/- 10%).
As will be appreciated, the boron concentration may be set higher based on factors such as the expected conductivity; in some such exemplary cases it is more than 2E20 cm-3, or 3E20 cm-3, or 4E20 cm-3, or 5E20 cm-3, or 2E21 cm-3. Another embodiment of the present invention provides a transistor device. In this exemplary case, the device includes a substrate having a channel region and a gate electrode over the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region, and spacers are provided on the sides of the gate electrode. The device further includes source and drain regions formed in the substrate and adjacent to the channel region, each of the source and drain regions including a tip region extending under the gate dielectric layer and/or a corresponding one of the spacers, wherein the source and drain regions comprise a boron-doped germanium layer having a germanium concentration exceeding 50 at% and a boron concentration exceeding 2E20 cm-3. The device further includes metal-germanide source and drain contacts. In some such cases, the device may further include a buffer portion between the substrate and the boron-doped germanium layer, wherein the buffer portion has a germanium concentration graded from a reference level concentration compatible with the substrate to a high concentration exceeding 95 at%, and a boron concentration graded from a reference level compatible with the substrate to a high concentration exceeding 2E20 cm-3. In other exemplary cases, the boron-doped germanium layer has a double-layer structure including a boron-doped silicon germanium portion and a boron-doped germanium cap layer thereon. In certain such specific cases, the boron-doped silicon germanium portion has a germanium concentration graded from a substrate-compatible reference level concentration to over 50 at%, the boron-doped germanium cap layer having a germanium concentration of more than 95 atomic %.
In some such particular embodiments, the boron-doped silicon germanium portion has a boron concentration that is graded from a substrate-compatible reference level to a high concentration of more than 2E20 cm-3. In other specific cases, the boron-doped silicon germanium portion has a fixed germanium concentration, and the device further includes a thin buffer portion between the boron-doped silicon germanium portion and the boron-doped germanium cap layer, the buffer portion having a germanium concentration graded from a reference level concentration compatible with the boron-doped silicon germanium portion to a high concentration exceeding 50 at%, and a boron concentration graded from a reference level compatible with the boron-doped silicon germanium portion to a high concentration exceeding 2E20 cm-3, the buffer having a thickness of less than 100 angstroms. Another embodiment of the present invention provides a method for forming a transistor device. The method includes providing a substrate having a channel region, and providing a gate electrode over the channel region, wherein a gate dielectric layer is provided between the gate electrode and the channel region, and a spacer is provided on a side of the gate electrode. The method continues with forming source and drain regions in the substrate and adjacent to the channel region, each of the source and drain regions including a tip region extending under the gate dielectric layer and/or a corresponding one of the spacers, wherein the source and drain regions comprise a boron-doped germanium layer having a germanium concentration exceeding 50 at% and a boron concentration exceeding 1E20 cm-3.
In certain such embodiments, the method further includes providing a buffer between the substrate and the boron-doped germanium layer, wherein the buffer has a germanium concentration graded from a reference level concentration compatible with the substrate to a high concentration of more than 95 atomic %, and a boron concentration graded from a reference level concentration compatible with the substrate to a high concentration exceeding 1E20 cm-3. In other embodiments, the boron-doped germanium layer has a double-layer structure including a boron-doped silicon germanium portion and a boron-doped germanium cap layer thereon. In one such case, the boron-doped silicon germanium portion has a germanium concentration graded from a substrate-compatible reference level concentration to over 50 at%, the boron-doped germanium cap layer having a germanium concentration of more than 95 at%. In another such case, the boron-doped silicon germanium portion has a fixed germanium concentration, and the method further comprises providing a buffer portion between the boron-doped silicon germanium portion and the boron-doped germanium cap layer, the buffer portion having a germanium concentration graded from a reference level concentration compatible with the boron-doped silicon germanium portion to a high concentration exceeding 50 at%, and a boron concentration graded from a reference level concentration compatible with the boron-doped silicon germanium portion to a high concentration exceeding 1E20 cm-3. In some such cases, the boron-doped silicon germanium portion has a boron concentration that is graded from a substrate-compatible reference level to a high concentration of more than 1E20 cm-3. The foregoing description of the exemplary embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the present disclosure.
For example, some embodiments of the present invention utilize in-situ boron doping of germanium, while other embodiments may use intrinsic germanium that, after its deposition, is subjected to boron implantation and annealing processes to provide the desired boron doping concentration. In addition, some embodiments may include source and drain regions configured as described herein (e.g., having a germanium concentration exceeding 50 atomic % and a boron concentration exceeding 1E20 cm-3), but still use conventional processing (e.g., implantation and annealing) to form the tips of the source and drain regions. In such embodiments, the tips may have germanium and/or boron concentrations that are lower than those of the main source/drain regions, which may be acceptable in some applications. In other embodiments, only the tips of the source and drain regions may be configured with high germanium and boron concentrations, while the majority of the source and drain regions may have a conventional, or otherwise lower, germanium/boron concentration. It is intended that the scope of the invention be limited not by this detailed description, but rather by the appended claims.
An apparatus is configured to store coded video data including a number of sequences of coded video pictures in an electronic file. The apparatus includes at least one processor configured to determine whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample. The at least one sample comprises at least a portion of the plurality of sequences of coded video pictures. The particular type is one of a plurality of different particular types of parameter sets. The at least one processor is also configured to provide, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination.
CLAIMS: 1. A method of storing coded video data comprising a plurality of sequences of coded video pictures in an electronic file, the method comprising: determining whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample, wherein the at least one sample comprises at least a portion of the plurality of sequences of coded video pictures, and wherein the particular type is one of a plurality of different particular types of parameter sets; and providing, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination. 2. The method of claim 1, wherein the plurality of different particular types of parameter sets comprises one or more of sequence parameter sets (SPSs), picture parameter sets (PPSs), and video parameter sets (VPSs). 3. The method of claim 1, wherein determining whether the sample description includes all parameter sets of the particular type comprises: determining a name associated with the sample description; and determining whether the sample description includes all parameter sets of the particular type based on the name associated with the sample description. 4. The method of claim 3, wherein the determined name associated with the sample description is 'hvc1,' and wherein determining whether the sample description includes all parameter sets of the particular type comprises determining that the sample description includes all parameter sets of the particular type when the sample description is named 'hvc1.' 5. The method of claim 3, wherein the determined name associated with the sample description is 'hev1,' and wherein determining whether the sample description includes all parameter sets of the particular type comprises determining that the sample description does not include all parameter sets of the particular type when the sample description is named 'hev1.' 6. 
The method of claim 1, further comprising associating, in the electronic file, a name with the sample description, wherein the name does not indicate whether the sample description includes all parameter sets of the particular type. 7. The method of claim 1, wherein providing, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type comprises providing, in the electronic file, an indication indicating that the sample description does not include all parameter sets of the particular type based on the determination. 8. The method of claim 1, wherein providing, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type comprises providing, in the electronic file, an indication indicating that the sample description does include all parameter sets of the particular type based on the determination. 9. The method of claim 1, wherein providing, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type comprises providing, in the electronic file, an indication indicating whether a decoder configuration record of the sample description includes all parameter sets of the particular type based on the determination. 10. 
The method of claim 1, wherein all parameter sets of the particular type comprises all parameter sets of a first type of the plurality of different particular types of parameter sets, wherein the indication in the file comprises a first indication in the file, and the method further comprising: determining whether the sample description includes all parameter sets of a second type of the plurality of different particular types of parameter sets associated with the at least one sample; and providing, in the electronic file, a second indication indicating whether the sample description includes all parameter sets of the second type based on the determination. 11. The method of claim 10, wherein the indication in the file comprises a first indication in the file indicating whether the sample description includes all parameter sets of the first type, wherein providing, in the electronic file, the first indication indicating whether the sample description includes all parameter sets of the first type comprises providing, in the electronic file, an indication indicating that the sample description does not include all parameter sets of the first type based on the determination, and wherein providing, in the electronic file, the second indication indicating whether the sample description includes all parameter sets of the second type comprises providing, in the electronic file, an indication indicating that the sample description does include all parameter sets of the second type based on the determination. 12. 
An apparatus for storing coded video data comprising a plurality of sequences of coded video pictures in an electronic file, the apparatus comprising: at least one processor configured to: determine whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample, wherein the at least one sample comprises at least a portion of the plurality of sequences of coded video pictures, and wherein the particular type is one of a plurality of different particular types of parameter sets; and provide, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination. 13. The apparatus of claim 12, wherein the plurality of different types of parameter sets comprises one or more of sequence parameter sets (SPSs), picture parameter sets (PPSs), and video parameter sets (VPSs). 14. The apparatus of claim 12, wherein the at least one processor is configured to determine whether the sample description includes all parameter sets of the particular type at least by: determining a name associated with the sample description; and determining whether the sample description includes all parameter sets of the particular type based on the name associated with the sample description. 15. The apparatus of claim 14, wherein the determined name associated with the sample description is 'hvc1,' and wherein the at least one processor is configured to determine that the sample description includes all parameter sets of the particular type when the sample description is named 'hvc1.' 16. The apparatus of claim 14, wherein the determined name associated with the sample description is 'hev1,' and wherein the at least one processor is configured to determine that the sample description does not include all parameter sets of the particular type when the sample description is named 'hev1.' 17. 
The apparatus of claim 12, wherein the at least one processor is configured to associate, in the electronic file, a name with the sample description, wherein the name does not indicate whether the sample description includes all parameter sets of the particular type. 18. The apparatus of claim 12, wherein the at least one processor is configured to provide, in the electronic file, an indication indicating that the sample description does not include all parameter sets of the particular type based on the determination. 19. The apparatus of claim 12, wherein the at least one processor is configured to provide, in the electronic file, an indication indicating that the sample description does include all parameter sets of the particular type based on the determination. 20. The apparatus of claim 12, wherein the at least one processor is configured to provide, in the electronic file, an indication indicating whether a decoder configuration record of the sample description includes all parameter sets of the particular type based on the determination. 21. The apparatus of claim 12, wherein all parameter sets of the particular type comprises all parameter sets of a first type of the plurality of different particular types of parameter sets, wherein the indication in the file comprises a first indication in the file, and wherein the at least one processor is configured to: determine whether the sample description includes all parameter sets of a second type of the plurality of different particular types of parameter sets associated with the at least one sample; and provide, in the electronic file, a second indication indicating whether the sample description includes all parameter sets of the second type based on the determination. 22. 
An apparatus for storing coded video data comprising a plurality of sequences of coded video pictures in an electronic file, the apparatus comprising: means for determining whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample, wherein the at least one sample comprises at least a portion of the plurality of sequences of coded video pictures, and wherein the particular type is one of a plurality of different particular types of parameter sets; and means for providing, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination. 23. The apparatus of claim 22, wherein the plurality of different types of parameter sets comprises one or more of sequence parameter sets (SPSs), picture parameter sets (PPSs), and video parameter sets (VPSs). 24. The apparatus of claim 22, further comprising means for determining whether the sample description includes all parameter sets of the particular type at least by: determining a name associated with the sample description; and determining whether the sample description includes all parameter sets of the particular type based on the name associated with the sample description. 25. 
A computer-readable storage medium having stored thereon instructions that when executed cause one or more processors to perform operations comprising: determining whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample, wherein the at least one sample comprises at least a portion of the plurality of sequences of coded video pictures, and wherein the particular type is one of a plurality of different particular types of parameter sets; and providing, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination. 26. A method of processing coded video data comprising a plurality of sequences of coded video pictures stored in an electronic file, the method comprising: receiving an indication in the file indicating whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample, wherein the at least one sample comprises at least a portion of the plurality of sequences of coded video pictures, and wherein the particular type is one of a plurality of different particular types of parameter sets; determining whether all parameter sets of the particular type are stored in the sample description based on the indication; and processing the coded video data based at least in part on one or more of the parameter sets of the particular type based on the determination of whether all parameter sets of the particular type are stored in the sample description. 27. 
An apparatus for processing coded video data comprising a plurality of sequences of coded video pictures stored in an electronic file, the apparatus comprising: at least one processor configured to: receive an indication in the file indicating whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample, wherein the at least one sample comprises at least a portion of the plurality of sequences of coded video pictures, and wherein the particular type is one of a plurality of different particular types of parameter sets; determine whether all parameter sets of the particular type are stored in the sample description based on the indication; and process the coded video data based at least in part on one or more of the parameter sets of the particular type based on the determination of whether all parameter sets of the particular type are stored in the sample description. 28. The apparatus of claim 27, further comprising a decoder device, and wherein the decoder device is configured to process the coded video data based at least in part on one or more of the parameter sets of the particular type at least by decoding the coded video data based at least in part on one or more of the parameter sets of the particular type.
IDENTIFYING PARAMETER SETS IN VIDEO FILES [0001] This application claims the benefit of U.S. Provisional Application No. 61/638,393, filed April 25, 2012, the entire contents of which are hereby incorporated by reference. TECHNICAL FIELD [0002] This disclosure relates to storage and transport of encoded video data. BACKGROUND [0003] Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones, video teleconferencing devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263 or ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), and extensions of such standards, to transmit and receive digital video information more efficiently. [0004] Video compression techniques perform spatial prediction and/or temporal prediction to reduce or remove redundancy inherent in video sequences. Regardless of the particular methods, after video data has been encoded, the video data can be packetized for transmission or storage. The video data may be assembled into a video file conforming to any of a variety of standards, such as the International Organization for Standardization (ISO) base media file format (ISOBMFF) and extensions thereof, such as the AVC file format. SUMMARY [0005] In general, this disclosure describes techniques for storage and transport of video data. This disclosure provides techniques for specifying whether all parameter sets of a particular type are stored in a so-called "sample description" included in the video file independently of other types of parameter sets. 
[0006] One example includes a method of storing coded video data comprising a plurality of sequences of coded video pictures in an electronic file. The method includes determining whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample. The at least one sample comprises at least a portion of the plurality of sequences of coded video pictures. The particular type is one of a plurality of different particular types of parameter sets. The method also includes providing, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination. [0007] In another example, an apparatus is configured to store coded video data including a number of sequences of coded video pictures in an electronic file. The apparatus includes at least one processor configured to determine whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample. The at least one sample comprises at least a portion of the plurality of sequences of coded video pictures. The particular type is one of a plurality of different particular types of parameter sets. The at least one processor is also configured to provide, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination. [0008] Another example includes a computer-readable storage medium having stored thereon instructions that when executed cause one or more processors to perform operations including determining whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample. The at least one sample comprises at least a portion of the plurality of sequences of coded video pictures. 
The particular type is one of a plurality of different particular types of parameter sets. The instructions, when executed, also cause one or more processors to perform operations including providing, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination. [0009] Another example includes an apparatus for storing coded video data comprising a plurality of sequences of coded video pictures in an electronic file. The apparatus includes means for determining whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample. The at least one sample comprises at least a portion of the plurality of sequences of coded video pictures. The particular type is one of a plurality of different particular types of parameter sets. The apparatus also includes means for providing, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination. [0010] Another example includes a method of processing coded video data comprising a plurality of sequences of coded video pictures stored in an electronic file. The method includes receiving an indication in the file indicating whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample. The at least one sample comprises at least a portion of the plurality of sequences of coded video pictures. The particular type is one of a plurality of different particular types of parameter sets. 
The method also includes determining whether all parameter sets of the particular type are stored in the sample description based on the indication and processing the coded video data based at least in part on one or more of the parameter sets of the particular type based on the determination of whether all parameter sets of the particular type are stored in the sample description. [0011] Another example includes an apparatus for processing coded video data comprising a plurality of sequences of coded video pictures stored in an electronic file. The apparatus includes at least one processor configured to receive an indication in the file indicating whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample. The at least one sample comprises at least a portion of the plurality of sequences of coded video pictures. The particular type is one of a plurality of different particular types of parameter sets. The at least one processor is also configured to determine whether all parameter sets of the particular type are stored in the sample description based on the indication and process the coded video data based at least in part on one or more of the parameter sets of the particular type based on the determination of whether all parameter sets of the particular type are stored in the sample description. [0012] The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims. BRIEF DESCRIPTION OF DRAWINGS [0013] FIG. 1 is a block diagram illustrating an example system in which an audio/video (A/V) source device transfers audio and video data to an A/V destination device. [0014] FIG. 2 is a block diagram illustrating components of an example encapsulation unit. [0015] FIG.
3 is a conceptual diagram illustrating elements of an example video file. [0016] FIG. 4 is a conceptual diagram illustrating elements of another example video file. [0017] FIG. 5 is a flowchart illustrating an example method of storing coded video data in an electronic file. [0018] FIG. 6 is a flowchart illustrating an example method of processing coded video data. DETAILED DESCRIPTION [0019] In general, techniques are described for storing video content in a file. In particular, the techniques relate to various methods for storing high-efficiency video coding (HEVC) video content in a file based on the International Organization for Standardization (ISO) base media file format (ISOBMFF). The techniques may enable specification of whether all parameter sets of a particular type are stored in a so-called "sample description" included in the video file independently of other types of parameter sets. The techniques may extend what is sometimes referred to as the decoder configuration record, which is a syntax structure included in the sample description, to include one or more flags indicating whether all parameter sets of a particular type are stored in the sample description. The disclosed examples enable distinguishing whether all parameter sets of a particular type are included in the sample description, which, in turn, can allow determinations as to when to perform out-of-band transport of parameter sets of different types. In this manner, the disclosed examples can enable more efficient storage, processing, and transmission of coded video data, which, in turn, can improve the performance of video coding devices such as video encoders and decoders.
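The per-type flag idea described above can be sketched in a few lines. This is a hypothetical illustration, not the actual file-format syntax: the class and field names (`DecoderConfigRecord`, `all_sps_included`, etc.) are invented for clarity, and real decoder configuration records are binary structures.

```python
from dataclasses import dataclass

# Hypothetical model of per-type completeness flags in a decoder
# configuration record; names are illustrative, not from the spec.
@dataclass
class DecoderConfigRecord:
    # True means every parameter set of that type needed by the
    # associated samples is stored in the sample description itself.
    all_vps_included: bool = False
    all_sps_included: bool = False
    all_pps_included: bool = False

    def may_transport_out_of_band(self, ps_type: str) -> bool:
        """A writer could ship a type out of band only when the sample
        description is known to be complete for that type."""
        return {
            "vps": self.all_vps_included,
            "sps": self.all_sps_included,
            "pps": self.all_pps_included,
        }[ps_type]

record = DecoderConfigRecord(all_sps_included=True)
print(record.may_transport_out_of_band("sps"))  # True
print(record.may_transport_out_of_band("pps"))  # False
```

The point of the separate flags is visible here: the SPS set can be treated as complete and transported out of band even while PPSs remain interleaved with the sample data.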
[0020] Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones, video teleconferencing devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263 or ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), and extensions of such standards, to transmit and receive digital video information more efficiently. [0021] Video compression techniques perform spatial prediction and/or temporal prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video frame or slice may be partitioned into blocks, e.g. macroblocks. Each macroblock can also be further partitioned. Blocks in an intra-coded (I) frame or slice are encoded using spatial prediction with respect to neighboring blocks. Blocks in an inter-coded (P or B) frame or slice may use spatial prediction with respect to neighboring blocks in the same frame or slice or temporal prediction with respect to other reference frames. [0022] After video data has been encoded, the video data may be packetized for transmission or storage. The video data may be assembled into a video file conforming to any of a variety of standards, such as ISOBMFF. Additional example standards include Scalable Video Coding (SVC) file format, Advanced Video Coding (AVC) file format, Third Generation Partnership Project (3GPP) file format, and/or Multiview Video Coding (MVC) file format, or other similar video file formats.
[0023] In one example, a file encapsulation unit or other device receives elementary streams comprising video data from a video encoder and elementary streams comprising audio data from an audio encoder. AV data along with parameters/attributes related thereto, e.g., bitrate, frame rate, resolutions, codec type (for video and/or audio data), language, etc. may form an AV "representation." [0024] The term "representation" may be used to refer to a section of encoded audio or video data corresponding to a particular period of the multimedia content and encoded in a particular way. Each individual stream of AV data can be referred to as an elementary stream. An elementary stream is a single, digitally-coded (possibly compressed) component of a representation. For example, the coded video or audio part of the representation can be an elementary stream. Additionally, information regarding parameters related to the video data included in a video elementary stream, e.g. sequence parameter sets as described below, may be included in a parameter set elementary stream. [0025] In some examples, the video and audio encoder may each include packetizers for forming packetized elementary streams (PES) packets from encoded data. In other examples, the video and audio encoder may each interface with respective packetizers for forming PES packets from encoded data. In still other examples, the encapsulation unit may include packetizers for forming PES packets from encoded audio and video data. [0026] The encapsulation unit can receive PES packets for elementary streams of a representation from the audio and video encoder and form corresponding network abstraction layer (NAL) units from the PES packets. In the example of H.264/AVC (Advanced Video Coding), coded video segments are organized into NAL units, which provide a "network-friendly" video representation addressing applications such as video telephony, storage, broadcast, or streaming. 
NAL units can be categorized as Video Coding Layer (VCL) NAL units and non-VCL NAL units. VCL units may contain the core compression engine and may include block, macroblock, and/or slice level data. Non-VCL NAL units may include parameter set NAL units, among others. [0027] Parameter sets were introduced in H.264/AVC in response to the effects of a loss of the sequence header and picture header, if, e.g., a picture is partitioned into multiple segments (also referred to as slices) and those segments are transported in their own transport unit (e.g. RTP packet). The loss of the first packet of a picture, which carries not only the first picture segment data, but also the picture header, might lead to a completely incorrectly reconstructed picture (and sometimes also the following pictures), even if all other packets were not lost. Some decoder implementations would not even attempt to decode the received packets of a picture, if the packet with the picture header was lost. [0028] Parameter sets can be either part of the video bitstream or can be received by a decoder through other means (including out-of-band transmission using a reliable channel, hard coding in encoder and decoder, and so on). A parameter set contains an identification, which is referenced, directly or indirectly, from, e.g., a slice header corresponding to a slice of a picture included in a coded video sequence. The referencing process is known as "activation." Depending on the parameter set type, the activation can occur once per picture or once per sequence. The concept of activation through referencing was introduced, among other reasons, because implicit activation by virtue of the position of the information in the bitstream (as common for other syntax elements of a video codec) is not available in the case of out-of-band transmission. [0029] HEVC includes a number of different types of parameter sets that apply to different levels of granularity of the video data, e.g. 
picture, sequence, or layer, of a coded video sequence. The parameter sets included in HEVC are picture parameter sets (PPSs), sequence parameter sets (SPSs), and video parameter sets (VPSs). A VPS conveys information that is applicable to multiple layers as well as sub-layers. Examples of multi-layer video sequences include, e.g., multiple versions of the same video stream that include representations that differ by resolution, bit rate, frame rate, etc. Each layer of a given video sequence, regardless of whether such layers have the same or different SPSs, may generally refer to the same VPS. A VPS can convey information including (1) common syntax elements shared by multiple layers or operation points, in order to avoid unnecessary duplications; (2) information of operation points needed for session negotiation, including e.g., profile and level; and (3) other operation point specific information, which does not belong to one SPS. Examples of other operation point-specific information that does not belong to one SPS may include Hypothetical Reference Decoder (HRD) parameters for layers or sublayers. [0030] SPSs contain information which may apply to all slices of a coded video sequence. In HEVC, a coded video sequence starts from an instantaneous decoding refresh (IDR) picture, a clean random access (CRA) picture, or a broken link access (BLA) picture that is the first picture in the bitstream, and includes all subsequent pictures that are not an IDR or BLA picture. A bitstream consists of one or more coded video sequences.
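The activation-by-reference chain described in paragraphs [0028]-[0029] can be illustrated with a small sketch: a slice header names a PPS, which names an SPS, which in turn names a VPS. The stores, field names, and parameter values below are invented purely for illustration and do not reflect real HEVC syntax element names.

```python
# Hypothetical in-memory stores of parameter sets, keyed by their IDs.
vps_store = {0: {"max_layers": 1}}
sps_store = {0: {"vps_id": 0, "pic_width": 1920, "pic_height": 1080}}
pps_store = {0: {"sps_id": 0, "init_qp": 26}}

def activate(slice_header):
    """Resolve the reference chain slice header -> PPS -> SPS -> VPS.
    This is the "activation" step: nothing is positional; each level
    is found only by the ID the lower level carries."""
    pps = pps_store[slice_header["pps_id"]]
    sps = sps_store[pps["sps_id"]]
    vps = vps_store[sps["vps_id"]]
    return vps, sps, pps

vps, sps, pps = activate({"pps_id": 0})
print(sps["pic_width"], pps["init_qp"])  # 1920 26
```

Because resolution works purely by ID lookup, the stores can be populated from anywhere, including a sample description, sample data, or an out-of-band channel, which is exactly why the indication of where each type resides matters.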
The content of an SPS can be divided into a number of categories of information, including, e.g.: (1) a self-reference (its own ID); (2) decoder operation point related (profile, level, picture size, number of sub-layers, and so on); (3) enabling flags for certain tools within a profile, and associated coding tool parameters in case the tool is enabled; (4) information restricting the flexibility of structures and transform coefficient coding; (5) temporal scalability control; and (6) Video Usability Information (VUI), which includes Hypothetical Reference Decoder (HRD) information. [0031] PPSs contain information that may change from picture to picture in a coded video sequence. The content of a PPS can be divided into a number of categories of information, including, e.g.: (1) a self-reference; (2) initial picture control information such as initial quantization parameter (QP), a number of flags indicating the use of, or presence of, certain tools or control information in the slice (sequence) header; and (3) tiling information. [0032] The ISO Base Media File Format (ISOBMFF, ISO/IEC 14496-12) is designed to contain timed media information for a media presentation in a flexible, extensible format that facilitates interchange, management, editing, and presentation of the media. ISOBMFF is specified in MPEG-4 Part 12, which defines a general structure for time-based media files. The ISOBMFF is used as the basis for other file formats in the family, such as the AVC file format (ISO/IEC 14496-15), which defines support for H.264/MPEG-4 AVC video compression, the 3GPP file format, the SVC file format, and the MVC file format. The 3GPP file format and MVC file format are extensions of the AVC file format. The ISO base media file format contains the timing, structure, and media information for timed sequences of media data, such as audio-visual presentations. The file structure is object-oriented.
A file can be decomposed into basic objects and the structure of the objects is implied from their type. [0033] In the ISO base media file format, the overall presentation is called a movie, which is logically divided into tracks. Some tracks can represent a timed sequence of media (frames of video, for example). Additionally, tracks can contain other data such as media attributes/parameters, including, e.g., parameter sets by which coded video data can be decoded by a decoder device that receives the data encapsulated in the file. Within each track, each timed unit is called a sample, which could be, e.g., a frame of video or audio. Samples are implicitly numbered in sequence. Each track has one or more sample descriptions and each sample in the track is tied to a description by reference. The description defines how the sample may be decoded (e.g. the description identifies the compression algorithm used). [0034] Unlike some other multimedia file formats, the ISO base media file format separates several concepts that are sometimes linked. The physical structure of the file may not be tied to the physical structures of the media itself. For example, the physical structure of the file and the layout of the media need not be tied to the time ordering of the media. Frames of video need not be laid down in the file in time order (though they may be). However, file structures can be used to describe the placement and timing of the media. Such file structures can permit, but not require, time-ordered files. [0035] Data within a file can be encapsulated in boxes. Metadata, including that defining the placement and timing of the media, can be contained in structured boxes and the media data (frames of video, for example) can be referred to by this metadata. The media data can be in the same file (contained in one or more boxes), or can be in other files. For example, the metadata permits referring to other files by means of URLs.
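The box encapsulation just described follows a simple pattern: each box begins with a 32-bit big-endian size followed by a four-character type code. The sketch below walks top-level boxes of a tiny hand-built file; it omits the spec's extended 64-bit size form and nested-box recursion for brevity.

```python
import struct

def iter_boxes(data: bytes):
    """Yield (type, payload) for each top-level ISOBMFF box:
    4-byte big-endian size, then a 4-character type code."""
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii")
        yield box_type, data[offset + 8:offset + size]
        offset += size

# A tiny hand-built file: a 16-byte 'ftyp' box and an empty 'moov' box.
ftyp = struct.pack(">I", 16) + b"ftyp" + b"isom" + struct.pack(">I", 0)
moov = struct.pack(">I", 8) + b"moov"
print([t for t, _ in iter_boxes(ftyp + moov)])  # ['ftyp', 'moov']
```

A real parser would recurse into container boxes such as 'moov' to reach the track and sample description structures discussed below.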
The placement of the media data within these secondary files is entirely described by the metadata in the primary file. Such secondary files need not be formatted to this specification, though they may be; it is possible that there are no boxes, for example, in these secondary media files. [0036] Tracks can be of various kinds. Video tracks contain samples that are visual and audio tracks contain audio media. Files may also include hint tracks, which contain instructions for a streaming server regarding how to form packets for a streaming protocol, from the media tracks in a file. Hint tracks can be ignored when a file is read for local playback. The ISO base media file format also allows for other tracks. [0037] Extensions of the ISO base media file format have been formulated for a number of different coded video standards, including HEVC. In accordance with such extensions of the ISO base media file format, parameter sets, including the VPSs, SPSs, and PPSs can be associated with the video elementary stream, which is in the video track of the video. Additionally, parameter sets can also be stored in the sample description associated with a sample. It is also possible to have the parameter sets in another track, called a parameter set track, which includes a parameter set elementary stream containing the samples that are formed from one or more of the SPS, PPS, and/or VPS non-VCL parameter set NAL units. [0038] Sample descriptions associated with samples of video indicate the location of parameter sets. The sample description provides a syntax structure by which sample attribute information may be communicated to a device such as a video decoder. Previous HEVC file formats specified that either all parameter sets of all types are included in the sample description or all parameter sets of all types may be stored in the sample description and the samples. 
In some cases, however, it can be useful to distinguish whether parameter sets of a particular type are included in the sample description, e.g. to determine when to perform out-of-band transport of one or more of VPSs, SPSs, and PPSs. [0039] To facilitate determining whether all parameter sets of a particular type are included in a sample description or associated sample, or in some other location, e.g., a parameter set track, the techniques of this disclosure enable indications to be specified in the encapsulated file, e.g., in the sample description, which individually indicate whether each type of parameter sets are included in the sample description, in the sample data, or both, or in some other location. In one example, one indication for each type of parameter set is included in the decoder configuration record, which is a syntax structure that forms part of the sample description. [0040] FIG. 1 is a block diagram illustrating an example system 10 in which audio/video (A/V) source device 20 transports audio and video data to A/V destination device 40. System 10 of FIG. 1 may correspond to a video teleconference system, a server/client system, a broadcaster/receiver system, or any other system in which video data is sent from a source device, such as A/V source device 20, to a destination device, such as A/V destination device 40. In some examples, A/V source device 20 and A/V destination device 40 may perform bidirectional information exchange. That is, A/V source device 20 and A/V destination device 40 may be capable of both encoding and decoding (and transmitting and receiving) audio and video data. In some examples, audio encoder 26 may comprise a voice encoder, also referred to as a vocoder. [0041] A/V source device 20, in the example of FIG. 1, includes audio source 22, video source 24, audio encoder 26, video encoder 28, encapsulation unit 30, and output interface 32.
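On the reader side, such a per-type indication tells a parser where it must look for parameter sets of that type. The function below is an invented sketch of that decision, with illustrative location names; it is not API from any file-format library.

```python
def parameter_set_sources(all_in_sample_description: bool):
    """Given the per-type indication, return the locations a reader
    must search for parameter sets of that type (names illustrative)."""
    if all_in_sample_description:
        # Complete in the sample description: the sample data need
        # not be scanned for this type, and the type may safely be
        # delivered out of band ahead of the samples.
        return ["sample_description"]
    # Otherwise parameter sets of this type may be split between the
    # sample description and the sample data itself.
    return ["sample_description", "sample_data"]

print(parameter_set_sources(True))   # ['sample_description']
print(parameter_set_sources(False))  # ['sample_description', 'sample_data']
```

Because the indication is per type, a reader might take the short path for SPSs while still scanning sample data for PPSs, which is the flexibility the disclosure is after.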
Audio source 22 may include, for example, a microphone that produces electrical signals representative of captured audio data to be encoded by audio encoder 26. Alternatively, audio source 22 may comprise a storage medium storing previously recorded audio data, an audio data generator such as a computerized synthesizer, or any other source of audio data. Video source 24 may comprise a video camera that produces video data to be encoded by video encoder 28, a storage medium encoded with previously recorded video data, a video data generation unit, or any other source of video data. [0042] Raw audio and video data may comprise analog or digital data. Analog data may be digitized before being encoded by audio encoder 26 and/or video encoder 28. Audio source 22 may obtain audio data from a speaking participant while the speaking participant is speaking, and video source 24 may simultaneously obtain video data of the speaking participant. In this manner, the techniques described in this disclosure may be applied to live, streaming, real-time audio and video data or to archived, pre-recorded audio and video data. [0043] Video source 24 may provide a single or multiple simultaneous views of a scene. For example, video source 24 may correspond to one camera or a camera array, e.g., two or more cameras each separated by some amount of distance, such that each of the cameras in the array is directed to an approximately common focal point. In a multiple camera arrangement, each of the cameras may provide a slightly different perspective of the scene. [0044] Video source 24 may also provide multiple simultaneous views using other techniques. For example, video source 24 may provide one view and depth information for objects in a scene. The depth information may be used to generate a second view from a second, virtual camera perspective. 
Video source 24 may include a processor to generate the second view, or a preprocessing unit for video encoder 28 may generate the second view. In some examples, video source 24 may comprise a computer that generates computer graphics using two or more camera perspectives. [0045] Audio frames that correspond to video frames are generally audio frames containing audio data that was captured by audio source 22 contemporaneously with video data captured by video source 24 that is contained within the video frames. Hence, an audio frame may temporally correspond to one or more particular video frames. Accordingly, an audio frame corresponding to a video frame generally corresponds to a situation in which audio data and video data were captured at the same time and for which an audio frame and a video frame comprise, respectively, the audio data and the video data that was captured at the same time. [0046] In some examples, audio encoder 26 may encode a timestamp in each encoded audio frame that represents a time at which the audio data for the encoded audio frame was recorded, and similarly, video encoder 28 may encode a timestamp in each encoded video frame that represents a time at which the video data for encoded video frame was recorded. A/V source device 20 may include an internal clock from which audio encoder 26 and/or video encoder 28 may generate the timestamps, or that audio source 22 and video source 24 may use to associate audio and video data, respectively, with a timestamp. [0047] In some examples, audio source 22 may send data to audio encoder 26 corresponding to a time at which audio data was recorded, and video source 24 may send data to video encoder 28 corresponding to a time at which video data was recorded. 
In some examples, audio encoder 26 may encode a sequence identifier in encoded audio data to indicate a relative temporal ordering of encoded audio data but without necessarily indicating an absolute time at which the audio data was recorded, and similarly, video encoder 28 may also use sequence identifiers to indicate a relative temporal ordering of encoded video data. Similarly, in some examples, a sequence identifier may be mapped or otherwise correlated with a timestamp. [0048] To encode the video data received from video source 24, video encoder 28 performs intra and/or inter-prediction to generate one or more prediction blocks. Video encoder 28 subtracts the prediction blocks from the original video blocks to be encoded to generate residual blocks. Thus, the residual blocks can represent pixel-by-pixel differences between the blocks being coded and the prediction blocks. Video encoder 28 can perform a transform on the residual blocks to generate blocks of transform coefficients. Following intra- and/or inter-based predictive coding and transformation techniques, video encoder 28 can quantize the transform coefficients. Following quantization, entropy coding can be performed by encoder 28 according to an entropy coding methodology. [0049] A coded video block generated by video encoder 28 can be represented by prediction information that can be used to create or identify a predictive block, and a residual block of data that can be applied to the predictive block to recreate the original block. The prediction information can include motion vectors used to identify the predictive block of data. Using the motion vectors, video decoder 48 may be able to reconstruct the predictive blocks that were used by video encoder 28 to code the residual blocks. Thus, given a set of residual blocks and a set of motion vectors (and possibly some additional syntax), video decoder 48 can reconstruct a video frame or other block of data that was originally encoded.
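The residual relationship in paragraphs [0048]-[0049] reduces to simple per-pixel arithmetic, shown here on a tiny 2x2 block with made-up sample values; the transform, quantization, and entropy stages are deliberately omitted.

```python
# residual = original - prediction at the encoder;
# reconstruction = prediction + residual at the decoder
# (lossless here because quantization is skipped).
original   = [[52, 55], [61, 59]]
prediction = [[50, 50], [60, 60]]

residual = [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, prediction)]
reconstructed = [[p + r for p, r in zip(prow, rrow)]
                 for prow, rrow in zip(prediction, residual)]

print(residual)                   # [[2, 5], [1, -1]]
print(reconstructed == original)  # True
```

In a real codec the residual is transformed and quantized before entropy coding, so the decoder's reconstruction is an approximation rather than the exact original shown here.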
Inter-coding based on motion estimation and motion compensation can achieve relatively high amounts of compression without excessive data loss, because successive video frames or other types of coded units are often similar. An encoded video sequence may include blocks of residual data, motion vectors (when inter-prediction encoded), indications of intra-prediction modes for intra-prediction, and syntax elements. [0050] Video encoder 28 may also utilize intra-prediction techniques to encode video blocks relative to neighboring video blocks of a common frame or slice or other sub-portion of a frame. In this manner, video encoder 28 spatially predicts the blocks. Video encoder 28 may be configured with a variety of intra-prediction modes, which generally correspond to various spatial prediction directions. [0051] Video encoder 28 can apply transform, quantization, and entropy coding processes to further reduce the bit rate associated with communication of residual blocks resulting from encoding source video data provided by video source 24. Transform techniques can include, e.g., discrete cosine transforms (DCTs) or conceptually similar processes. Alternatively, wavelet transforms, integer transforms, or other types of transforms may be used. Video encoder 28 can also quantize the transform coefficients, which generally involves a process to possibly reduce the amount of data, e.g., bits used to represent the coefficients. Entropy coding can include processes that collectively compress data for output to a bitstream. The compressed data can include, e.g., a sequence of coding modes, motion information, coded block patterns, and quantized transform coefficients. Examples of entropy coding include context adaptive variable length coding (CAVLC) and context adaptive binary arithmetic coding (CABAC).
[0052] Video encoding and decoding by source device 20 and destination device 40 can support a number of different video coded block sizes for intra-prediction, such as 16 by 16, 8 by 8, or 4 by 4 for luma components, and 8x8 for chroma components. Additionally, source device 20 and destination device 40 can support a number of different video coded block sizes for inter-prediction, such as 16x16, 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4 for luma components and corresponding scaled sizes for chroma components. In this disclosure, "NxN" and "N by N" may be used interchangeably to refer to the pixel dimensions of the block in terms of vertical and horizontal dimensions, e.g., 16x16 pixels or 16 by 16 pixels. In general, a 16x16 block will have 16 pixels in a vertical direction (y = 16) and 16 pixels in a horizontal direction (x = 16). Likewise, an NxN block generally has N pixels in a vertical direction and N pixels in a horizontal direction, where N represents a nonnegative integer value. The pixels in a block may be arranged in rows and columns. Blocks may have different numbers of pixels in the horizontal and vertical dimensions. That is, blocks may include NxM pixels, where N is not necessarily equal to M. [0053] Block sizes that are less than 16 by 16 may be referred to as partitions of a 16 by 16 macroblock. Video blocks may comprise blocks of pixel data in the pixel domain, or blocks of transform coefficients in the transform domain, e.g., following application of a transform such as a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform to the residual video block data representing pixel differences between coded video blocks and predictive video blocks. In some cases, a video block may comprise blocks of quantized transform coefficients in the transform domain. [0054] Smaller video blocks can provide better resolution, and may be used for locations of a video frame that include high levels of detail. 
In general, macroblocks and the various partitions, sometimes referred to as sub-blocks, may be considered video blocks. In addition, a slice may be considered to be a plurality of video blocks, such as macroblocks and/or sub-blocks. Each slice may be an independently decodable unit of a video frame. Alternatively, frames themselves may be decodable units, or other portions of a frame may be defined as decodable units. The term "coded unit" or "coding unit" may refer to any independently decodable unit of a video frame such as an entire frame, a slice of a frame, a group of pictures (GOP) also referred to as a sequence, or another independently decodable unit defined according to applicable coding techniques. [0055] Referring again to FIG. 1 , video source 24 can provide one or more views of a scene to video encoder 28 or may provide the information directly to encapsulation unit 30. Encapsulation unit 30 can receive elementary streams including encoded video data from video encoder 28 and elementary streams including audio data from audio encoder 26. In some examples, video encoder 28 and audio encoder 26 may each include packetizers for forming PES packets from encoded data. In other examples, video encoder 28 and audio encoder 26 may each interface with respective packetizers for forming PES packets from encoded data. In still other examples, encapsulation unit 30 may include packetizers for forming PES packets from encoded audio and video data. [0056] Encapsulation unit 30 can receive PES packets for elementary streams of a representation from audio encoder 26 and video encoder 28 and form corresponding network abstraction layer (NAL) units from the PES packets. Within the same representation, a stream ID may be used to distinguish the PES-packets belonging to one elementary stream from the other. The basic unit of data of an elementary stream can be a PES packet. Thus, each view of MVC video data can correspond to respective elementary streams. 
Similarly, audio data corresponds to one or more respective elementary streams. In addition to media elementary streams, encapsulation unit 30 can receive other types of elementary streams, including parameter set streams corresponding to parameter sets by which the video data encoded by video encoder 28 can be decoded by a decoding device like video decoder 48 of A/V destination device 40. [0057] The techniques of this disclosure are generally directed to the storage and transport of encoded multimedia (e.g., audio and video) data, and reception and subsequent interpretation and decoding of the transported multimedia data. For example, the techniques of this disclosure enable indications to be specified in an encapsulated video file, which individually indicate whether each type of parameter sets, e.g., VPSs, SPSs, and PPSs, are included in a sample description associated with a sample, in sample data, in both the sample description and the sample, or in some other location. [0058] In one example, encapsulation unit 30 analyzes elementary streams received from video encoder 28 and determines whether all parameter sets of a particular type associated with a sample are stored in a sample description associated with the sample. Encapsulation unit 30 can then provide an indication in a file created from the elementary streams, which indicates whether all parameter sets of the particular type are stored in the sample description. Additional details regarding this and other functions of encapsulation unit 30 in accordance with this disclosure are provided below with reference to FIGS. 2-5. [0059] In one example, encapsulation unit 30 receives PES packets for elementary streams of a representation from audio encoder 26 and video encoder 28 and forms corresponding NAL units from the PES packets.
Organizing coded video segments into NAL units can provide a "network-friendly" video representation of the data to address applications such as video telephony, storage, broadcast, or streaming. NAL units can be categorized as Video Coding Layer (VCL) NAL units and non-VCL NAL units. VCL units may contain the core compression engine and may include block, macroblock, and/or slice level data. Other NAL units may be non-VCL NAL units. [0060] Non-VCL NAL units may include parameter set NAL units and Supplemental Enhancement Information (SEI) NAL units, among others. Parameter sets may contain different header information for different levels of granularity of video data, e.g., sequence and picture. Parameters encapsulated in parameter NAL units can include VPSs, SPSs, and PPSs. With parameter sets, infrequently changing information need not be repeated for each sequence or picture, hence coding and transmission efficiency may be improved. For example, the use of parameter sets may enable out-of-band transmission of the important header information, avoiding the need for redundant transmissions for error resilience. In out-of-band transmission examples, parameter set NAL units may be transmitted on a different channel than other NAL units, such as SEI NAL units. [0061] SEI may contain information that is not necessary for decoding the coded picture samples from VCL NAL units, but may assist in processes related to decoding, display, error resilience, and other purposes. SEI messages may be contained in non-VCL NAL units. SEI messages are a normative part of some standard specifications, but are not always mandatory for a standard-compliant decoder implementation. SEI messages may be sequence level SEI messages or picture level SEI messages. Some sequence level information may be contained in SEI messages, such as scalability information SEI messages in the example of SVC and view scalability information SEI messages in MVC.
These example SEI messages may convey information on, e.g., extraction of operation points and characteristics of the operation points. [0062] A NAL unit including video data in its payload may include various granularity levels of video data. For example, a NAL unit may include a block of video data, one or more macroblocks, a slice of video data, or an entire frame of video data. [0063] In one example, encapsulation unit 30 assembles access units from a number of NAL units. In general, an access unit can include one or more NAL units for representing a frame of video data, as well as audio data corresponding to the frame when such audio data is available. An access unit generally includes all NAL units for one output time instance, e.g., all audio and video data for one time instance. For example, if each view has a frame rate of 20 frames per second (fps), then each time instance may correspond to a time interval of 0.05 seconds. During this time interval, the specific frames for all views of the same access unit (the same time instance) may be rendered simultaneously. The decoding order of access units need not necessarily be the same as the output or display order. [0064] After encapsulation unit 30 has assembled NAL units and/or access units into a video file based on received data, encapsulation unit 30 passes the video file to output interface 32 for output. In some examples, encapsulation unit 30 may store the video file locally or send the video file to a remote server via output interface 32, rather than sending the video file directly to destination device 40. In one example, the video data can be transferred to input interface 36 of A/V destination device 40 via link 34. In some examples, source device 20 includes a modem that modulates video data transmitted to destination device 40 according to a communication standard, such as code division multiple access (CDMA) or another communication standard.
A modem may include various mixers, filters, amplifiers or other components designed for signal modulation. Output interface 32 may include circuits designed for transmitting data, including amplifiers, filters, and one or more antennas. In some examples, rather than transmitting over a communication channel, e.g., over link 34, source device 20 can store encoded video data onto a storage device, such as a digital video disc (DVD), Blu-ray disc, flash drive, or the like. [0065] A/V destination device 40, in the example of FIG. 1, includes audio output 42, video output 44, audio decoder 46, video decoder 48, decapsulation unit 38, and input interface 36. In destination device 40, video decoder 48 ultimately receives and decodes the encoded video data. For example, input interface 36 of destination device 40 receives information over link 34 or from a storage device, which is then decapsulated by decapsulation unit 38. Video decoder 48 receives decapsulated video data from decapsulation unit 38. In some examples, destination device 40 includes a modem that demodulates the information. Like output interface 32, input interface 36 may include circuits designed for receiving data, including amplifiers, filters, and one or more antennas. In some instances, output interface 32 and/or input interface 36 may be incorporated within a single transceiver component that includes both receive and transmit circuitry. A modem may include various mixers, filters, amplifiers or other components designed for signal demodulation. In some instances, a modem may include components for performing both modulation and demodulation. [0066] Decapsulation unit 38 may decapsulate elements of a video file into constituent PES streams, depacketize the PES streams to retrieve encoded data, and send the encoded data to either audio decoder 46 or video decoder 48, depending on whether the encoded data is part of an audio or video stream, e.g., as indicated by PES packet headers of the stream. 
Audio decoder 46 decodes encoded audio data and sends the decoded audio data to audio output 42, while video decoder 48 decodes encoded video data and sends the decoded video data, which may include a plurality of views of a stream, to video output 44. [0067] In one example, video decoder 48 entropy decodes the received encoded video data, such as a coded block, according to an entropy coding methodology, such as CAVLC or CABAC, to obtain the quantized coefficients. Video decoder 48 applies inverse quantization (de-quantization) and inverse transform functions to reconstruct the residual block in the pixel domain. Video decoder 48 also generates a prediction block based on control information or syntax information (e.g., coding mode, motion vectors, syntax that defines filter coefficients and the like) included in the encoded video data. Video decoder 48 calculates a sum of the prediction block and the reconstructed residual block to produce a reconstructed video block for display. [0068] In one example, video output 44 includes one or more display devices, which are configured to display the decoded video data to a user including, e.g., multi-view video including destination view(s) synthesized based on depth information included in a reference view or views. Display devices forming part or all of video output 44 can include any of a variety of display devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device. In some examples, video output 44 includes a display device capable of three-dimensional playback. For example, video output 44 can include a stereoscopic display, which is used in conjunction with eyewear worn by a viewer.
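The reconstruction described in [0067], summing the prediction block and the decoded residual block, might be sketched as follows. The clipping to an 8-bit pixel range is an assumption for illustration (bit depth varies by profile), and the function name is hypothetical:

```python
def reconstruct_block(prediction, residual):
    # Element-wise sum of the prediction block and the decoded residual,
    # clipped to the assumed 8-bit pixel range [0, 255].
    return [[max(0, min(255, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(prediction, residual)]

pred = [[128, 130], [126, 129]]
resid = [[-3, 2], [0, 140]]
recon = reconstruct_block(pred, resid)  # clipping caps 126 + 140 at 255
```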
[0069] Video encoder 28, video decoder 48, audio encoder 26, audio decoder 46, encapsulation unit 30, and decapsulation unit 38 each may be implemented as any of a variety of suitable processing circuitry, as applicable, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic circuitry, software, hardware, firmware, or any combination thereof. Each of video encoder 28 and video decoder 48 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined video encoder/decoder (CODEC). Likewise, each of audio encoder 26 and audio decoder 46 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined CODEC. An apparatus including video encoder 28, video decoder 48, audio encoder 26, audio decoder 46, encapsulation unit 30, and/or decapsulation unit 38 may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone. [0070] FIG. 2 is a block diagram illustrating components of an example encapsulation unit 30. In the example of FIG. 2, encapsulation unit 30 includes video input interface 80, audio input interface 82, video file creation unit 60, and video file output interface 84. Video file creation unit 60, in this example, includes network abstraction layer (NAL) unit constructor 62, parameter sets extraction unit 64, and sample description creation unit 66. [0071] Video input interface 80 and audio input interface 82 receive encoded video and audio data, respectively. Video input interface 80 and audio input interface 82 may receive encoded video and audio data as the data is encoded, or may retrieve encoded video and audio data from a computer-readable medium.
Upon receiving encoded video and audio data, video input interface 80 and audio input interface 82 pass the encoded video and audio data to video file creation unit 60 for assembly into a video file. [0072] Video file creation unit 60 may correspond to a control unit including hardware, software, and/or firmware configured to perform the functions and procedures attributed thereto. The control unit may further perform the functions attributed to encapsulation unit 30 generally. For examples in which video file creation unit 60 is embodied in software and/or firmware, encapsulation unit 30 may include a computer-readable medium comprising instructions for video file creation unit 60 and a processing unit to execute the instructions. Each of the sub-units of video file creation unit 60 (NAL unit constructor 62, parameter sets extraction unit 64, and sample description creation unit 66, in this example) may be implemented as individual hardware units and/or software modules, and may be functionally integrated or further separated into additional sub-units. Video file creation unit 60 may correspond to any suitable processing unit or processing circuitry, such as, for example, one or more microprocessors, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), or any combination thereof. Video file creation unit 60 may further include a non-transitory computer-readable medium storing instructions for any or all of NAL unit constructor 62, parameter sets extraction unit 64, and sample description creation unit 66, as well as a processor for executing the instructions. [0073] In general, video file creation unit 60 may create a video file including the received audio and video data. NAL unit constructor 62 may form NAL units including encoded video and audio samples. Video file creation unit 60 may further be configured to assemble access units including all NAL units for a particular time instance.
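The access-unit assembly just described, grouping all NAL units that share an output time instance, can be sketched as follows. Modeling each NAL unit as a (time, payload) pair is an assumption for illustration, since a real encapsulated file carries timing in track metadata:

```python
from collections import defaultdict

def assemble_access_units(nal_units):
    # Group NAL units sharing an output time instance into access units.
    by_time = defaultdict(list)
    for time_instance, payload in nal_units:
        by_time[time_instance].append(payload)
    # Listed here in output (display) order; as noted in [0063], the
    # decoding order need not match it.
    return [by_time[t] for t in sorted(by_time)]

fps = 20
interval = 1 / fps   # 0.05 s per time instance, as in the example in [0063]
nals = [(0.00, "video-frame-0"), (0.00, "audio-0"), (0.05, "video-frame-1")]
access_units = assemble_access_units(nals)
```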
Furthermore, video file creation unit 60 may be configured to decouple sequence level SEI messages from encoded video pictures described by the sequence level SEI messages, and store the sequence level SEI messages in the video file separately from the encoded video pictures described by the sequence level SEI messages. [0074] Video encoder 28 (FIG. 1) may include data other than video data with samples of video data. Encoded video data received by video input interface 80 of encapsulation unit 30 from video encoder 28 can include, e.g., data representing parameter sets such as VPSs, SPSs, and PPSs, as well as SEI messages for samples of encoded video. In the context of an encapsulated video file, samples may refer to samples of encoded video data as well as samples of other data, including samples containing data forming portions of parameter sets that can be used by a video decoder, e.g., video decoder 48 of destination device 40, to decode encoded video data also included in the encapsulated video file created by encapsulation unit 30. [0075] In examples according to this disclosure, video file creation unit 60 of encapsulation unit 30 is configured to store parameter sets data received as part of the encoded video data in particular locations and provide indications in the encapsulated file indicating where the parameter sets are located. For example, video file creation unit 60 of encapsulation unit 30 is configured to store parameter sets data in a sample description associated with a video sample, in sample data, in both the sample description and the sample, or in some other location. [0076] As noted above, video file creation unit 60 includes parameter sets extraction unit 64 and sample description creation unit 66. In one example, parameter sets extraction unit 64 is configured to extract parameter set data from the encoded video data received by video input interface 80 of encapsulation unit 30.
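The separation performed by parameter sets extraction unit 64, distinguishing parameter set data from coded video data, can be sketched using the HEVC NAL unit type codes (in the HEVC specification, VPS, SPS, and PPS are non-VCL NAL unit types 32, 33, and 34); the helper names below are hypothetical:

```python
VPS_NUT, SPS_NUT, PPS_NUT = 32, 33, 34
PARAMETER_SET_TYPES = {VPS_NUT: "VPS", SPS_NUT: "SPS", PPS_NUT: "PPS"}

def split_parameter_sets(nal_unit_types):
    # Separate parameter set NAL units from the remaining (e.g., VCL) units,
    # so the parameter sets can be stored or transmitted independently.
    params = [t for t in nal_unit_types if t in PARAMETER_SET_TYPES]
    others = [t for t in nal_unit_types if t not in PARAMETER_SET_TYPES]
    return params, others

# A stream prefixed by VPS, SPS, and PPS, followed by two coded-slice units.
params, others = split_parameter_sets([32, 33, 34, 1, 1])
```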
Parameter sets extraction unit 64 can, in one example, identify parameter sets data and thereby distinguish this data from encoded video data. Additionally, parameter sets extraction unit 64 can separate parameter sets data from encoded video data. [0077] Parameter sets extraction unit 64 of video file creation unit 60 can also be configured to store the parameter sets data in a number of different locations in the encapsulated video file. In one example, parameter sets extraction unit 64 is configured to store part or all of the parameter sets data in one or more sample descriptions associated with samples of video data. In another example, parameter sets extraction unit 64 is configured to store the parameter sets data in locations other than sample descriptions, including with the video samples in a video track, or in a separate track of the encapsulated video file such as a parameter sets track. If parameter sets data is stored separately from the sample descriptions and the video samples, in some examples, parameter sets extraction unit 64 can create a file separate from the encapsulated video file in which to store and by which to transmit some or all of the parameter sets data. [0078] Sample description creation unit 66 of video file creation unit 60 is configured to generate sample descriptions associated with samples of video. As noted above, in a file formatted in accordance with the ISO base media file format, the overall media presentation is referred to as a movie. The movie is logically divided into tracks. Some tracks can represent a timed sequence of media (frames of video, for example). Additionally, tracks can contain other data such as media attributes/parameters, including, e.g., parameter sets by which coded video data can be decoded by a decoder device that receives the data encapsulated in the file. Within each track, each timed unit is called a sample, which could be, e.g., a frame of video or audio.
Each track has one or more sample descriptions and each sample in the track is tied to a description by reference. The sample description provides a syntax structure by which sample attribute information may be communicated to a device such as a video decoder. The sample description defines how the sample may be decoded (e.g. the description identifies the compression algorithm used). Sample description creation unit 66 is configured to generate sample descriptions associated with samples of video included in the encoded video data received by video input interface 80 of encapsulation unit 30. [0079] Among other information, in one example, sample descriptions generated by sample description creation unit 66 indicate the location of parameter sets. Previous HEVC file formats specified that either all parameter sets of all types are included in the sample description or all parameter sets of all types may be stored in the sample description and the samples. In some cases, however, it can be useful to distinguish whether a particular type of parameter sets are included in the sample description, e.g., to determine when to perform out-of-band transport of one or more of VPSs, SPSs, and PPSs. [0080] To facilitate determining whether all parameter sets of a particular type are included in a sample description or in some other location, e.g., a parameter set track, the techniques of this disclosure enable indications to be specified by sample description creation unit 66 in a sample description, which individually indicate where each type of parameter sets are stored. In one example, sample description creation unit 66 provides one indication for each type of parameter sets, e.g., each of VPSs, SPSs, and PPSs, in the decoder configuration record. The decoder configuration record is a syntax structure that forms part of the sample description. FIGS.
3 and 4 illustrate examples of files created by encapsulation unit 30, which include indications of the location of parameter sets associated with samples of video stored in the files. [0081] FIG. 3 is a conceptual diagram illustrating example video file 100 encapsulated by encapsulation unit 30. Video file 100 includes moov box 102, which includes video data track 104 and parameter sets track 106. Video file 100 or other encapsulated video files in accordance with this disclosure can include many more than two tracks, including multiple video and audio data tracks as well as multiple parameter set tracks. In FIG. 3, video data track 104 includes sample description 108 and an associated sequence of video samples including video samples 110 and 111. Video data track 104 can include more video samples and additional sample descriptions. [0082] Moov box 102 forms the basic storage container for video data included in the ISO base media file format video file 100. As noted above, in practice, moov box 102 can include a number of different tracks, including video data, audio data, and, in some cases, parameter sets tracks. In example video file 100 of FIG. 3, moov box 102 includes video data track 104 and parameter sets track 106. Each of video data track 104 and parameter sets track 106 can represent a timed sequence of media or other information (frames of video, for example). Within each track, each timed unit is called a sample, which could be, e.g., a frame of video or audio, or a sample of data representing parameter sets by which samples of video are decoded. [0083] In one example, sample description 108 is generated by sample description creation unit 66 based at least in part on where in video file 100 parameter sets associated with video samples 110 and 111 are stored. In the example of FIG. 3, parameter sets associated with video samples 110 and 111 include a number of different types of parameter sets, including VPSs 120, SPSs 122, and PPSs 124.
VPSs 120 are stored in parameter sets track 106, while SPSs 122 and PPSs 124 are stored either in sample description 108, with video samples 110 and 111, or in both. [0084] Sample description creation unit 66 can generate sample description 108 by determining where parameter sets are stored in video file 100, e.g., by parameter sets extraction unit 64. In one example, sample description creation unit 66 determines that VPSs 120 are stored in parameter sets track 106 of video file 100, while SPSs 122 and PPSs 124 are stored in sample description 108 associated with video samples 110 and 111. In such a case, sample description creation unit 66 can provide indications of the parameter sets locations in video file 100 in decoder configuration record 126, which is a syntax structure included in sample description 108. [0085] An example implementation is provided below. In particular, the syntax for decoder configuration record 126 included in sample description 108 associated with video samples 110 and 111 in encapsulated video file 100 may be as follows in the example HEVC decoder configuration record shown below.

aligned(8) class HEVCDecoderConfigurationRecord {
    unsigned int(8) configurationVersion = 1;
    unsigned int(8) ProfileIndication;
    unsigned int(8) profileCompatibility;
    unsigned int(8) LevelIndication;
    bit(5) reserved = '11111'b;
    bit(1) allSpsIncluded;
    bit(1) allPpsIncluded;
    bit(1) allVpsIncluded;
}

[0086] In the foregoing example, the allSpsIncluded indication is equal to 1, which can indicate that all SPSs for the video samples to which configuration record 126 applies, e.g., video samples 110 and 111, are included in decoder configuration record 126. The allPpsIncluded indication is equal to 1, which can indicate that all PPSs for the video samples to which configuration record 126 applies, e.g., video samples 110 and 111, are included in decoder configuration record 126.
The allVpsIncluded indication, however, is equal to 0, which indicates that not all VPSs for the video samples to which configuration record 126 applies, e.g., video samples 110 and 111, are included in decoder configuration record 126. In the example of FIG. 3, VPSs 120 are included in parameter sets track 106. [0087] A parameter set to be used in a picture or other portion of coded video data may need to be sent prior to the sample containing that picture or in the sample for that picture. However, depending on the nature of the information included in the parameter sets as well as the video samples with which the parameter sets are associated, it may be possible to transmit some of the parameter sets separately from the video data, e.g., some of the parameter sets may be transmitted out-of-band, as described above. Thus, it may be advantageous to individually indicate the locations of different types of parameter sets and, as illustrated in the example of FIG. 3, specify that, while SPSs 122 and PPSs 124 are included in decoder configuration record 126 of sample description 108, VPSs 120 are stored in parameter sets track 106 separate from video data such as video samples 110 and 111 with which VPSs 120 are associated. [0088] FIG. 4 is a conceptual diagram illustrating another example video file 140 encapsulated by encapsulation unit 30. Video file 140 includes moov box 142, which includes video data track 144. In the example of FIG. 4, encapsulation unit 30 generates a separate parameter file 146, which includes parameter sets track 148. Video file 140 or other encapsulated video files in accordance with this disclosure can include many more than two tracks, including multiple video and audio data tracks as well as multiple parameter set tracks. In FIG. 4, video data track 144 includes sample description 150 and an associated sequence of video samples including video samples 152 and 153.
Video data track 144 can include more video samples and additional sample descriptions. [0089] In the example of FIG. 4, sample description creation unit 66 generates sample description 150, including decoder configuration record 152. Additionally, decoder configuration record 152 includes flags allVpsIncluded, allSpsIncluded, and allPpsIncluded, individually indicating whether or not VPSs 154, SPSs 156, and PPSs 158 are stored in sample description 150. In the example of FIG. 4, VPSs 154 are stored in parameter sets track 148 of parameter file 146, while SPSs 156 and PPSs 158 are stored in sample description 150 of video data track 144 of video file 140. Thus, in this example, it may be possible to transmit VPSs 154 separately from video file 140, e.g., transmit VPSs 154 out-of-band, as described above. [0090] Sample descriptions associated with video samples in an encapsulated video file may include a name, which can be set to a number of different values. In some examples according to this disclosure, the name of a sample description may indicate the location of one or more parameter sets, e.g., may indicate whether or not one or more parameter sets of particular types are stored in the sample description. In one example, sample descriptions may include a name of either 'hvc1' or 'hev1'. In one example, for a sequence of video samples to which a particular sample description applies, the VPSs, SPSs, and PPSs are stored only in the sample description when the sample description name is 'hvc1', and are stored in both the sample description and the samples when the sample description name is 'hev1'. In this manner, the name of the sample description, e.g., 'hvc1' or 'hev1', indicates where parameter sets are stored in the sample description or in the samples. [0091] Storing parameter sets in the sample descriptions of a video stream provides a simple and static way to supply parameter sets.
Storing parameters in samples, on the other hand, while possibly more complex, may allow for more flexibility, e.g., in the case of parameter set updates and in the case of adding additional parameter sets. A decoder initializes with the parameter sets in the sample description, and then updates using the parameter sets as they occur in the stream. Such updating may replace parameter sets with a new definition using the same identifier. Each time the sample description changes, the decoder re-initializes with the parameter sets included in the sample description. [0092] In the foregoing implementation examples, the allSpsIncluded flag (or, alternatively, bit), when equal to 1, may indicate that all SPSs for the stream to which this configuration record applies are included in the sample description. When the sample description name is 'hvc1', the allSpsIncluded flag is typically set to 1. The allPpsIncluded flag, when equal to 1, likewise may indicate that all PPSs for the stream to which this configuration record applies are included in the sample description. Again, when the sample description name is 'hvc1', the allPpsIncluded flag is also typically set to 1. The allVpsIncluded flag, when equal to 1, may indicate that all VPSs for the stream to which this configuration record applies are included in the sample description. When the sample description name is 'hvc1', the allVpsIncluded flag is typically set to 1. [0093] As an alternative to having both sample description names 'hvc1' and 'hev1', one of the two sample description names 'hvc1' and 'hev1' may be removed as a possibility for sample description names such that the remaining sample description name does not indicate where the parameter sets are stored. In such an example, the location of the parameters can be indicated independent of the sample description name by the three flags allSpsIncluded, allPpsIncluded and allVpsIncluded.
Consequently, in this alternative, the semantics of the three flags can be as follows:
• allSpsIncluded equal to 1 indicates that all SPSs for the stream to which this configuration record applies are included in the sample description, independent of the name of the sample description.
• allPpsIncluded equal to 1 indicates that all PPSs for the stream to which this configuration record applies are included in the sample description, independent of the name of the sample description.
• allVpsIncluded equal to 1 indicates that all VPSs for the stream to which this configuration record applies are included in the sample description, independent of the name of the sample description.
[0094] Alternatively (to any of the above listed alternatives), some aspects of the techniques may provide that, when the allSpsIncluded flag is equal to 0, at least one SPS for the stream to which this configuration record applies is not included in the sample description. Likewise, some aspects of the techniques may provide that, when the allPpsIncluded flag is equal to 0, at least one PPS for the stream to which this configuration record applies is not included in the sample description. Moreover, some aspects of the techniques may provide that, when the allVpsIncluded flag is equal to 0, at least one VPS for the stream to which this configuration record applies is not included in the sample description. [0095] FIG. 5 is a flowchart illustrating an example method of storing coded video data in an electronic file. The method of FIG. 5 includes determining whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample (200) and providing, in the electronic file, an indication indicating whether the sample description includes all parameter sets of the particular type based on the determination (202).
The at least one sample includes at least a portion of a plurality of sequences of coded video pictures in the electronic file. The particular type is one of a plurality of different particular types of parameter sets. The functions of the example method of FIG. 5 are described in more detail below with reference to the example method of FIG. 6, which illustrates an example method of processing coded video data in accordance with this disclosure.

[0096] FIG. 6 is a flowchart illustrating an example method of processing coded video data. Although described with respect to the components of source device 20 and destination device 40 (FIG. 1) for purposes of example and explanation, it should be understood that any suitable device may implement the techniques of FIG. 6.

[0097] Initially, encapsulation unit 30 may receive a sequence of encoded video pictures (210). An encoder, such as video encoder 28, may have included parameter sets of different types with the coded video samples, including, e.g., VPSs, SPSs, and PPSs. Additionally or alternatively, encapsulation unit 30 may create parameter sets separately from video encoder 28. In any case, encapsulation unit 30 may separate parameter set data from the coded video pictures with which the parameter sets are associated (212). For example, parameter sets extraction unit 64 of video file creation unit 60 of encapsulation unit 30 can separate the parameter set data from the coded video pictures with which the parameter sets are associated.

[0098] That is, encapsulation unit 30 may create a video file including parameter sets and coded video pictures with which the parameter sets are associated (214). In doing so, however, encapsulation unit 30 may store one or more of the parameter sets separately from the coded video pictures with which the parameter sets are associated. In this manner, the parameter sets may be transmitted and processed separately from the coded video pictures.
For example, in accordance with the techniques of this disclosure, encapsulation unit 30 may store one or more parameter sets in a parameter set track of the created video file or of another file separate from the video file. In another example, encapsulation unit 30 may store one or more of the parameter sets in one or more sample descriptions associated with coded video pictures.

[0099] Encapsulation unit 30, e.g., sample description creation unit 66 of encapsulation unit 30, can be configured to generate one or more sample descriptions associated with the coded video pictures included in the encapsulated video file (216). As part of this process, sample description creation unit 66 can be configured to determine the location of different types of parameter sets and provide indications in a sample description regarding whether all parameter sets of a particular type are stored in the sample description, as described above with reference to the examples of video files 100 and 140 of FIGS. 3 and 4, respectively.

[0100] Encapsulation unit 30 may then output the video file (218). For example, encapsulation unit 30 may cause source device 20 to write the video file to a storage medium, such as, for example, an optical disc, a floppy disk, a flash drive, a hard drive, a solid state drive, or other storage medium. Such storage media may be physically transported to destination device 40. Alternatively, source device 20 may transmit the video file to destination device 40, e.g., via broadcast, network transmission, or other transmission techniques. In any case, destination device 40 may ultimately receive the video file (220).

[0101] In some examples, source device 20 may provide distinct portions of the video file to destination device 40, e.g., in response to one or more HTTP-Get or partial-Get requests issued by destination device 40 to source device 20.
Destination device 40 may issue a first HTTP-Get or partial-Get request to source device 20 to retrieve a sequence data set, e.g., all or a portion of a parameter set track including sequence level SEI messages, and a second (or more) HTTP-Get or partial-Get request(s) to retrieve coded video pictures described by the sequence data set.

[0102] After receiving the video file, destination device 40 may decode the video file based on the parameter sets (222). That is, video decoder 48 may use data of the parameter sets, including one or more of VPSs, SPSs, and PPSs, to assist in the decoding process. In one example, video decoder 48 analyzes sample descriptions associated with one or more sets of coded video pictures included in the video file received from source device 20. For example, video decoder 48 can receive a sample description including flags, e.g., the allSpsIncluded, allPpsIncluded, and allVpsIncluded flags, individually indicating whether all SPSs, PPSs, and VPSs are included in the sample description. Depending on the indications provided in the sample description, video decoder 48 can retrieve or otherwise reference the parameter sets to decode the video included in the video file received from source device 20.

[0103] In one example, encapsulation unit 30 of source device 20 stores all VPSs in a parameter file separate from the video file and transmits the parameter file to destination device 40 before transmitting the video file. Video decoder 48 can reference sample descriptions, including referencing the decoder configuration record with respect to different sets of video samples, and determine, based on indications provided in the decoder configuration record, that not all VPSs are stored in the sample description. In such an example, video decoder 48 can retrieve or otherwise reference the VPSs included in the parameter file provided by source device 20 separate from the video file.
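The decoder-side selection logic described in paragraphs [0102] and [0103] can be sketched as follows. This is a minimal illustration under assumed data structures: the dictionary layout, field names, and the helper function are hypothetical for this sketch and are not part of any actual decoder such as video decoder 48.

```python
# Minimal sketch of the decoder-side logic of [0102]-[0103]: for each
# parameter-set type, the flag in the sample description tells the decoder
# whether the sample description alone is sufficient or whether parameter
# sets delivered elsewhere (e.g., a separate parameter file) must also be
# consulted. All names here are illustrative assumptions.

def resolve_parameter_sets(sample_description, external_file):
    """Return the parameter sets the decoder should use, keyed by type."""
    resolved = {}
    for ps_type, flag in (("sps", "allSpsIncluded"),
                          ("pps", "allPpsIncluded"),
                          ("vps", "allVpsIncluded")):
        in_band = list(sample_description["parameter_sets"][ps_type])
        if sample_description["flags"][flag] == 1:
            # All parameter sets of this type are in the sample description.
            resolved[ps_type] = in_band
        else:
            # At least one parameter set of this type is stored elsewhere:
            # merge the sample-description sets with the external ones.
            resolved[ps_type] = in_band + list(external_file.get(ps_type, []))
    return resolved

# Example mirroring [0103]: all VPSs are shipped in a separate parameter file.
desc = {
    "flags": {"allSpsIncluded": 1, "allPpsIncluded": 1, "allVpsIncluded": 0},
    "parameter_sets": {"sps": ["sps0"], "pps": ["pps0"], "vps": []},
}
param_file = {"vps": ["vps0"]}
print(resolve_parameter_sets(desc, param_file))
```

Because the allVpsIncluded flag is 0 in the example call, the sketch pulls the VPSs from the separately delivered parameter file while taking the SPSs and PPSs from the sample description itself.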
[0104] In one or more examples, the functions, methods, and techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.

[0105] By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0106] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.

[0107] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set).
Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

[0108] Various examples have been described. These and other examples are within the scope of the following claims.
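As a closing illustration of the signaling described in paragraphs [0092] through [0094], the three inclusion flags can be packed into, and recovered from, a single byte of a configuration record. The bit positions chosen here are assumptions made for this sketch only; they do not reflect the normative layout of any actual configuration record.

```python
# Illustrative packing/unpacking of the allSpsIncluded, allPpsIncluded, and
# allVpsIncluded flags into one byte of a hypothetical configuration record.
# The bit positions are assumptions for this sketch, not a normative layout.

FLAG_BITS = {"allSpsIncluded": 0, "allPpsIncluded": 1, "allVpsIncluded": 2}

def pack_flags(flags):
    """Pack the three inclusion flags (each 0 or 1) into a single byte."""
    byte = 0
    for name, bit in FLAG_BITS.items():
        if flags.get(name, 0):
            byte |= 1 << bit
    return byte

def unpack_flags(byte):
    """Recover the three inclusion flags from a packed byte."""
    return {name: (byte >> bit) & 1 for name, bit in FLAG_BITS.items()}

# Example: SPSs and PPSs complete in the sample description, VPSs elsewhere.
packed = pack_flags({"allSpsIncluded": 1, "allPpsIncluded": 1,
                     "allVpsIncluded": 0})
print(unpack_flags(packed))
```

A flag recovered as 0 corresponds to the semantics of paragraph [0094]: at least one parameter set of that type is not included in the sample description, so a reader must look for it elsewhere in the stream or in a separate parameter file.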
The present invention discloses a semiconductor device and semiconductor dice including electrically conductive interconnects between die rings. The semiconductor device includes a semiconductor die comprising integrated circuitry over a substrate of a semiconductor material. A first die ring comprises one or more electrically conductive materials at least partially surrounding the integrated circuitry, the one or more electrically conductive materials comprising an electrically conductive path from proximate a surface of the substrate to an exposed surface of the semiconductor die. A second die ring comprises an electrically conductive material and is disposed around the first die ring. A first electrically conductive interconnect electrically connects the first die ring to the second die ring. Related semiconductor devices and semiconductor dice are disclosed.
1. A semiconductor device comprising: a semiconductor die including an integrated circuit; a first die ring including one or more electrically conductive materials at least partially surrounding the integrated circuit, the one or more electrically conductive materials including an electrically conductive path extending from a surface of the semiconductor die into the semiconductor die; a second die ring including a conductive material disposed about the first die ring; and a first conductive interconnect electrically connecting the first die ring to the second die ring.

2. The semiconductor device of claim 1, wherein the first conductive interconnect extends from a surface adjacent the substrate to the surface of the semiconductor die.

3. The semiconductor device of claim 1, wherein the first die ring and the second die ring each comprise conductive pads and conductive vias forming a conductive path from the surface adjacent the substrate to the surface of the semiconductor die.

4. The semiconductor device of claim 1, wherein the first conductive interconnect, the first die ring, and the second die ring comprise one or more of the same materials.

5. The semiconductor device of claim 1, wherein the first die ring exhibits a reduced width at a location adjacent the first conductive interconnect relative to its width at a location remote from the first conductive interconnect.

6. The semiconductor device of claim 5, wherein the reduced width is between about 50% and about 80% of a width of the first die ring at a location of the first die ring remote from the first conductive interconnect.

7. The semiconductor device of claim 1, further comprising a third die ring disposed about the second die ring, wherein the third die ring is in electrical communication with the second die ring.

8. The semiconductor device of claim 7, wherein the third die ring is in electrical communication with the second die ring via the first conductive interconnect.

9. The semiconductor device of claim 7, wherein the third die ring comprises a discontinuous, segmented structure, different portions of the third die ring being electrically coupled to the second die ring via at least one conductive interconnect.

10. The semiconductor device of claim 7, further comprising a fourth die ring disposed about the third die ring and in electrical communication with the third die ring.

11. The semiconductor device of claim 1, further comprising a third die ring disposed about the second die ring and a fourth die ring disposed about the third die ring, wherein the third die ring is in electrical communication with the second die ring and includes a continuous structure surrounding the second die ring, and wherein the fourth die ring is in electrical communication with the third die ring and includes a continuous structure.

12. The semiconductor device of claim 1, wherein at least one of the first die ring and the second die ring comprises four edges, each edge of the at least one of the first die ring and the second die ring being electrically connected to at least four conductive interconnects.

13. The semiconductor device of claim 1, wherein a vertical edge of at least one of the first die ring and the second die ring is electrically connected to a greater number of conductive interconnects than a horizontal edge thereof.

14. A semiconductor die comprising: a first die ring in a peripheral region of the semiconductor die, the first die ring including a continuous conductive structure extending from an upper surface of the semiconductor die into the semiconductor die and comprising a conductive material; a second die ring surrounding the first die ring, the second die ring including a conductive material; and a first conductive interconnect electrically connecting the first die ring to the second die ring.

15. The semiconductor die of claim 14, wherein the second die ring is electrically connected to the first die ring via a plurality of conductive interconnects.

16. The semiconductor die of claim 14, wherein the first die ring is electrically coupled to the second die ring via a plurality of first conductive interconnects, a vertical edge of the first die ring being electrically coupled to more conductive interconnects than a horizontal edge thereof.

17. The semiconductor die of claim 14, wherein the second die ring comprises a discontinuous, segmented structure extending around the first die ring, a first portion of the second die ring and a second portion of the second die ring each being electrically connected to the first die ring.

18. The semiconductor die of claim 14, further comprising a third die ring disposed about the second die ring, wherein the third die ring is in electrical communication with the first die ring and the second die ring via at least one second conductive interconnect.

19. The semiconductor die of claim 18, wherein the first die ring, the second die ring, and the third die ring each comprise a continuous conductive structure.

20. The semiconductor die of claim 18, wherein the first die ring and the second die ring each comprise a continuous conductive structure and the third die ring comprises a discontinuous, segmented conductive structure.

21. The semiconductor die of claim 18, further comprising a fourth die ring disposed about the third die ring, wherein the fourth die ring is in electrical communication with the first die ring, the second die ring, and the third die ring via the at least one second conductive interconnect.

22. The semiconductor die of claim 21, wherein the first conductive interconnects electrically connect each of the first die ring, the second die ring, the third die ring, and the fourth die ring to one another.

23. The semiconductor die of claim 21, wherein each of the first die ring, the second die ring, the third die ring, and the fourth die ring comprises a continuous conductive structure.

24. A semiconductor device comprising: a first die ring extending around an integrated circuit of a semiconductor die, wherein the first die ring includes a continuous conductive structure extending around the integrated circuit; a second die ring including a conductive material surrounding the first die ring; and a conductive interconnect electrically coupling the first die ring to the second die ring.

25. The semiconductor device of claim 24, wherein the second die ring comprises a discontinuous, segmented structure extending around the first die ring.

26. The semiconductor device of claim 25, further comprising a third die ring extending around the second die ring.

27. The semiconductor device of claim 24, wherein the second die ring comprises a continuous conductive structure extending around the first die ring.

28. The semiconductor device of claim 24, further comprising a third die ring extending around the second die ring and a fourth die ring extending around the third die ring, wherein the second die ring, the third die ring, and the fourth die ring each comprise a continuous conductive structure.

29. The semiconductor device of claim 24, wherein the conductive interconnects electrically coupling the first die ring to the second die ring include a greater number of conductive interconnects connecting a vertically extending edge of the first die ring to a vertically extending edge of the second die ring than conductive interconnects connecting a horizontally extending edge of the first die ring to a horizontally extending edge of the second die ring.

30. The semiconductor device of claim 24, wherein the conductive interconnect comprises at least about four conductive interconnects electrically coupled to each side of the first die ring and to each side of the second die ring.
Semiconductor Devices and Semiconductor Dice Including Electrically Conductive Interconnects Between Die Rings

Priority Claim

This application claims the benefit of the filing date of U.S. Patent Application Serial No. 15/691,303, filed August 30, 2017, entitled "SEMICONDUCTOR DEVICES AND SEMICONDUCTOR DICE INCLUDING ELECTRICALLY CONDUCTIVE INTERCONNECTS BETWEEN DIE RINGS."

Technical Field

Embodiments disclosed herein relate to semiconductor devices and semiconductor dice that include conductive interconnects between die rings. More specifically, embodiments of the present disclosure relate to semiconductor devices and semiconductor dice that include die rings that extend around an integrated circuit of a semiconductor die and are electrically connected to each other through one or more conductive interconnects, and to related methods.

Background

A large number of semiconductor dice are fabricated on a single wafer or other bulk semiconductor substrate during semiconductor die fabrication. After fabricating the components and circuitry associated with each die, a so-called slicing operation is performed on the wafer to separate individual dice from the wafer (e.g., to singulate the dice) and to separate the dice from one another. After slicing, individual dice may be packaged or mounted directly to, for example, a printed circuit board to form a semiconductor device.

Slicing involves sawing, using a mechanical saw having, for example, a diamond saw blade, along scribe lines (referred to as "streets") passing through portions of the wafer between the dice. Unfortunately, the slicing operation typically imposes significant stress on the semiconductor wafer and may damage the dice during singulation. Current practice of singulating dice from extremely thin wafers having a thickness of, for example, 50 μm or less exacerbates the possibility of damage.
For example, the sawing may initiate cracks at the edges of the individual dice, such as near the regions of the scribe lines. If a crack is severe enough, it can propagate through the die and interrupt the integrated circuit of the die. Cracking can also cause delamination of the material within the die and can expose the integrated circuit of the die to the surrounding environment and contaminants (e.g., moisture and ionic contaminants), which can cause corrosion and undesired oxidation of such materials. In some cases, the die, or a package associated with the die, may fail due to one or more of the cracking, moisture, or contaminants to which the die is exposed.

To compensate for die cracking, in some cases the die may be formed with a so-called "die ring" (sometimes referred to in the art as a "seal ring" or "guard ring") around a peripheral portion of the die that surrounds the integrated circuit of the die. The die ring may comprise a material that is less susceptible to cracking or delamination when subjected to a slicing operation than adjacent material near the periphery of the die. Thus, the die ring can help reduce crack propagation from the periphery of the die to the integrated circuit region of the die during or after the slicing operation.

Summary

Embodiments disclosed herein relate to semiconductor devices and semiconductor dice that include conductive interconnects between die rings.
For example, in accordance with some embodiments, a semiconductor device includes a semiconductor die including an integrated circuit; a first die ring including one or more electrically conductive materials at least partially surrounding the integrated circuit, the one or more electrically conductive materials including a conductive path extending from a surface of the semiconductor die into the semiconductor die; a second die ring including a conductive material disposed about the first die ring; and a first conductive interconnect that electrically connects the first die ring to the second die ring.

In additional embodiments, a semiconductor die includes a first die ring in a peripheral region of the semiconductor die, the first die ring including a continuous conductive structure extending from an upper surface of the semiconductor die into the semiconductor die and including a conductive material; a second die ring surrounding the first die ring, the second die ring including a conductive material; and a first conductive interconnect that electrically connects the first die ring to the second die ring.

In further embodiments, a semiconductor device includes a first die ring extending around an integrated circuit of a semiconductor die, wherein the first die ring includes a continuous conductive structure extending around the integrated circuit; a second die ring including a conductive material surrounding the first die ring; and a conductive interconnect electrically coupling the first die ring to the second die ring.

Drawings

FIG. 1 is a top plan view of a wafer including a plurality of semiconductor dice;

FIG. 2A is a top plan view of a semiconductor die including die rings electrically connected through conductive interconnects, in accordance with an embodiment of the present disclosure;

FIG. 2B is a top plan view of the interconnection between the first die ring and the second die ring taken from the dashed-line box B in FIG. 2A;

FIG. 2C is a cross-sectional view of the semiconductor die taken along section line C-C of FIG. 2A;

FIG. 2D is a cross-sectional view of the semiconductor die taken along section line D-D of FIG. 2A;

FIG. 2E is a cross-sectional view of a semiconductor die during its manufacture;

FIG. 3 is a top plan view of a semiconductor die including die rings electrically connected through conductive interconnects, in accordance with an embodiment of the present disclosure;

FIG. 4 is a top plan view of a semiconductor die including die rings electrically connected by conductive interconnects, in accordance with other embodiments of the present disclosure;

FIG. 5 is a top plan view of a semiconductor die including a die ring structure including die rings electrically connected by conductive interconnects, in accordance with still other embodiments of the present disclosure;

FIG. 6 is a top plan view of a semiconductor die including die rings electrically connected by conductive interconnects, in accordance with additional embodiments of the present disclosure.

Detailed Description

The drawings are not intended to be actual views of any particular system or semiconductor structure, but are merely idealized representations employed to describe the embodiments herein. Elements and features common between the figures may retain the same numerical designation; for ease of description, most reference numerals begin with the number of the figure on which the corresponding elements are introduced or most fully described.

The following description provides specific details, such as material types, material thicknesses, and processing conditions, in order to provide a sufficient description of the embodiments described herein. However, those skilled in the art will appreciate that the embodiments disclosed herein may be practiced without these specific details. Indeed, the embodiments can be practiced in conjunction with conventional fabrication techniques employed in the semiconductor industry.
Additionally, the description provided herein does not form a complete description of a semiconductor die, a semiconductor device, or a process flow for fabricating such a semiconductor die or semiconductor device. The structures described below do not form a complete semiconductor die or semiconductor device. Only those process acts and structures necessary to understand the embodiments described herein are described in detail below. Additional acts to form a complete semiconductor die or semiconductor device comprising the structures described herein can be performed by conventional techniques.

In accordance with embodiments disclosed herein, a die ring structure can be disposed in a peripheral region of a semiconductor die and can include a plurality of die rings disposed around an integrated circuit in an integrated circuit region of the semiconductor die. The die rings can form a continuous conductive path from the semiconductor material of the substrate, at a level below the active circuitry, to the upper surface of the semiconductor die. In some embodiments, the die rings can include, for example, conductive pads and conductive interconnects extending from a surface of the die substrate material to an upper surface of the semiconductor die. The die ring structure can include a first die ring (e.g., an inner die ring), a second die ring disposed about the first die ring, a third die ring disposed about the second die ring, and a fourth die ring disposed about the third die ring. The fourth die ring may circumferentially surround the third die ring, the third die ring may circumferentially surround the second die ring, and the second die ring may circumferentially surround the first die ring. The die rings can reduce or prevent one or more of cracking of the semiconductor die during the slicing operation, crack propagation after the slicing operation, and contamination of the integrated circuit of the semiconductor die.
For example, the die rings can form a barrier to moisture and contaminants (e.g., ionic contaminants) that would otherwise diffuse into the integrated circuit region of the semiconductor die.

A conductive interconnect can electrically couple at least one of the die rings to at least one other of the die rings. In some embodiments, at least the first die ring can be electrically coupled to at least one adjacent die ring via one or more conductive interconnects. The first die ring may include a continuous conductive structure that extends around the integrated circuit region of the semiconductor die when viewed from the top of the semiconductor die. In other words, the first die ring can include an uninterrupted conductive path that circumferentially surrounds the integrated circuit region of the semiconductor die.

In some embodiments, each of the die rings is electrically connected to the other die rings of the die ring structure (e.g., electrically connected to each of the other die rings) and exhibits the same potential as the other die rings. Each of the die rings can include a continuous conductive path that extends around the integrated circuit region of the semiconductor die. In some embodiments, one or more of the die rings may exhibit a staggered (e.g., discontinuous) structure in which the die ring does not form a continuous structure around the perimeter of the semiconductor die when viewed from the top of the semiconductor die. Electrically coupling the die rings can reduce or prevent arc breakdown during material removal and patterning processes (e.g., etching, such as reactive ion etching, plasma etching, etc.) during fabrication of the semiconductor die. In some embodiments, the electrically connected die rings can exhibit less capacitive coupling than conventional die rings that are electrically isolated from each other.
By comparison, a die containing conventional die rings that are electrically isolated from each other does not exhibit equal potentials among the die rings and can exhibit arc breakdown during the patterning process. Such arc breakdown can damage or even destroy semiconductor dice and their associated integrated circuits. In some embodiments, staggering the die rings can reduce the amount of capacitive coupling between the die rings.

FIG. 1 is a top plan view of a semiconductor wafer 101 that can include a plurality of semiconductor dice 100. After front-end and back-end fabrication processes are complete, the wafer 101 can be divided into individual semiconductor dice 100 that are physically delineated from one another by scribe lines 102. The wafer 101 may be diced at the scribe lines 102 in a "slicing" operation to singulate the semiconductor dice 100 from one another.

FIG. 2A is a top plan view of a semiconductor die 200 including a die ring structure 201 including a plurality of die rings 210, 212, 214, 216 located in a peripheral region 206 of the semiconductor die 200. The die ring structure 201 can be in the form of a ring structure surrounding the integrated circuit region 204 of the die 200. In some embodiments, the die ring structure 201 can have a rectangular shape. In other embodiments, the die ring structure 201 can have a square shape, a circular shape, an elliptical shape, or another shape.

Integrated circuit region 204 can include active circuitry associated with, for example, a 3D NAND semiconductor device. In some such embodiments, integrated circuit region 204 can comprise alternating levels of an electrically conductive material (e.g., polysilicon) and an insulating material (e.g., silicon dioxide).
However, the present disclosure is not so limited, and integrated circuit region 204 can include other types of semiconductor devices.

In some embodiments, the die rings 210, 212, 214, 216 can extend from the surface of the semiconductor material of the die substrate to the upper exposed surface of the die 200. In some such embodiments, the die rings 210, 212, 214, 216 form a barrier structure (e.g., a "wall") that can reduce or prevent material from diffusing from the peripheral region 206 to the integrated circuit region 204 of the semiconductor die 200. The die rings 210, 212, 214, 216 may reduce or prevent penetration of moisture and ionic contaminants into the integrated circuit region 204. The die ring structure 201 can also reduce or prevent delamination of material (e.g., dielectric material) of the semiconductor die 200 during a slicing (e.g., sawing) operation. In some embodiments, the die rings 210, 212, 214, 216 provide mechanical support for the die 200.

The die rings 210, 212, 214, 216 can include conductive structures such as conductive traces, conductive pads, conductive vias, and combinations thereof. The die rings 210, 212, 214, 216 can comprise one or more electrically conductive materials. By way of non-limiting example, the die rings 210, 212, 214, 216 may comprise tungsten, aluminum, silver, polysilicon, titanium, titanium nitride, copper, tantalum, cobalt, niobium, tantalum nitride, another electrically conductive material, and combinations thereof.

In some embodiments, each die ring 210, 212, 214, 216 can include a continuous conductive structure disposed about the integrated circuit region 204. The continuous conductive structure can be continuous when viewed from the top of the semiconductor die 200.
In some such embodiments, each die ring 210, 212, 214, 216 may not include any interruptions therein, such that the potential at a first portion of each die ring 210, 212, 214, 216 may be substantially the same as the potential at an opposite side of the corresponding die ring.
A conductive interconnect 220 can electrically connect one or more of the first die ring 210, the second die ring 212, the third die ring 214, and the fourth die ring 216 to another of the first die ring 210, the second die ring 212, the third die ring 214, and the fourth die ring 216. Conductive interconnects 220 may extend laterally between at least one die ring 210, 212, 214, 216 and at least one other die ring 210, 212, 214, 216. The conductive interconnect 220 can comprise a conductive material. By way of non-limiting example, the conductive interconnects 220 can include tungsten, aluminum, silver, polysilicon, titanium, titanium nitride, copper, tantalum, cobalt, niobium, tantalum nitride, another conductive material, and combinations thereof. In some embodiments, the conductive interconnects 220 comprise the same material as the die rings 210, 212, 214, 216. In some such embodiments, the conductive interconnects 220 can include tungsten.
In some embodiments, the first die ring 210 can be electrically coupled to the second die ring 212 via one or more conductive interconnects 220, the second die ring 212 can be electrically coupled to the third die ring 214 via one or more conductive interconnects 220, and the third die ring 214 can be electrically coupled to the fourth die ring 216 via one or more conductive interconnects 220. In some embodiments, each die ring 210, 212, 214, 216 can be electrically connected to at least one other die ring 210, 212, 214, 216 on each side of the die 200.
In other words, each side of the die 200 can include at least one conductive interconnect 220 electrically connecting the first die ring 210 to the second die ring 212, at least one conductive interconnect 220 electrically connecting the second die ring 212 to the third die ring 214, and at least one conductive interconnect 220 electrically connecting the third die ring 214 to the fourth die ring 216.
Although FIG. 2A illustrates only four conductive interconnects 220 between the first die ring 210 and the second die ring 212, four conductive interconnects 220 between the second die ring 212 and the third die ring 214, and four conductive interconnects 220 between the third die ring 214 and the fourth die ring 216, the disclosure is not so limited. In some embodiments, each side of the semiconductor die 200 can include between about five and about twenty conductive interconnects 220 between each of the first die ring 210 and the second die ring 212, the second die ring 212 and the third die ring 214, and the third die ring 214 and the fourth die ring 216, such as between five and about ten conductive interconnects 220, between ten and fifteen conductive interconnects 220, or between fifteen and twenty conductive interconnects 220.
In some embodiments, each side of each of the die rings 210, 212, 214, 216 can be coupled to at least about four conductive interconnects 220, at least about eight conductive interconnects 220, at least about twelve conductive interconnects 220, at least about sixteen conductive interconnects 220, at least about twenty conductive interconnects 220, or at least about twenty-five conductive interconnects 220.
In some embodiments, the vertical sides of the die rings 210, 212, 214, 216 (e.g., the edges of the die rings 210, 212, 214, 216 that extend up and down in the view illustrated in FIG. 2A) can be electrically coupled to more conductive interconnects 220 than the horizontal sides (e.g., the edges of the die rings 210, 212, 214, 216 that extend perpendicular to their vertical edges, from left to right in the view illustrated in FIG. 2A). The vertical and horizontal sides of the die rings 210, 212, 214, 216 may extend in directions parallel to the major surface of the semiconductor die 200.
In some embodiments, the vertical sides of the die rings 210, 212, 214, 216 can be electrically coupled to between about fifteen and about twenty-five conductive interconnects 220, such as between about fifteen and about seventeen, between about seventeen and about nineteen, between about nineteen and about twenty-one, between about twenty-one and about twenty-three, or between about twenty-three and about twenty-five conductive interconnects 220. In some embodiments, each vertical side of the die rings 210, 212, 214, 216 can be electrically coupled to nineteen or twenty conductive interconnects 220. The horizontal sides of the die rings 210, 212, 214, 216 can be electrically coupled to between about ten and about twenty conductive interconnects 220, such as between about ten and about twelve, between about twelve and about fourteen, between about fourteen and about sixteen, between about sixteen and about eighteen, or between about eighteen and about twenty conductive interconnects 220. In some embodiments, the horizontal sides of the die rings 210, 212, 214, 216 can be electrically coupled to fourteen or fifteen conductive interconnects 220.
FIG. 2B is a top plan view of the interconnect structure 222 of a conductive interconnect 220 between the first die ring 210 and the second die ring 212, taken from dashed box B of FIG. 2A. Although FIG.
2B illustrates a conductive interconnect 220 only between the first die ring 210 and the second die ring 212, it should be understood that the conductive interconnects 220 between other die rings may be similar to the illustrated conductive interconnect 220.
The conductive interconnect 220 can extend from the first die ring 210 to the second die ring 212. The first die ring 210 and the second die ring 212 may include recessed portions 230 in the width of the first die ring 210 and the second die ring 212 at locations adjacent to the conductive interconnect 220. The first die ring 210 and the second die ring 212 may have a width W1 at regions contacting and adjacent to the conductive interconnect 220 and may have a width W2 at locations remote from the conductive interconnect 220.
In some embodiments, the width W1 can be less than the width W2. The width W1 may be between about 40% and about 80% of the width W2, such as between about 40% and about 50% of the width W2, between about 50% and about 60% of the width W2, between about 60% and about 70% of the width W2, or between about 70% and about 80% of the width W2. In some embodiments, the width W1 can be equal to about 75% of the width W2.
The conductive interconnect 220 can have a width W3 that can be equal to the width W1. Thus, in some embodiments, the width W3 of the conductive interconnect 220 can be equal to the width W1 of the die rings 210, 212 at the locations where the conductive interconnect 220 intersects the die rings 210, 212. In some embodiments, the sidewalls of the conductive interconnect 220, having the width W3, can be longitudinally offset from the sidewalls of the die rings 210, 212 having the width W2. In some embodiments, the interconnect structure 222 can be formed using optical proximity correction (OPC) to facilitate formation of the conductive interconnects 220 electrically connecting each of the first die ring 210 and the second die ring 212, as depicted and described with respect to FIG. 2B.
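The width relationship described above can be illustrated with a short numerical sketch. This is an illustrative aid, not part of the disclosed embodiments; the helper names and the example values of W2 are hypothetical, while the 40%–80% range and the "about 75%" figure come from the description.

```python
# Illustrative sketch: widths W1, W3 of the interconnect structure 222
# relative to the full die-ring width W2, per the ranges in the description.
# Units are arbitrary; only the ratios matter.

def recessed_width(w2: float, ratio: float = 0.75) -> float:
    """Width W1 at the recessed portions 230, as a fraction of W2."""
    assert 0.40 <= ratio <= 0.80, "description gives W1 as about 40%-80% of W2"
    return ratio * w2

def interconnect_width(w1: float) -> float:
    """Width W3 of the conductive interconnect 220, equal to W1 per the text."""
    return w1

w2 = 2.0                       # hypothetical full-width value
w1 = recessed_width(w2)        # about 75% of W2 in one described embodiment
w3 = interconnect_width(w1)
print(w1, w3)                  # 1.5 1.5
```

Because the etch rate can be proportional to the exposed area, the reduced width W1 at the intersections corresponds directly to a smaller exposed area there.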
In some embodiments, the reduced width W1 of the first die ring 210 and the second die ring 212 at the regions contacting the conductive interconnects 220 may reduce the area of the interconnect structure 222. Reducing the area of the interconnect structure 222 may reduce the area exposed to an etchant during patterning of the interconnect structure 222. Since the etch rate can be proportional to the area exposed to the etchant, reducing the exposed area of the interconnect structure 222 can reduce its etch rate relative to other portions of the die rings 210, 212, 214, 216. In other words, forming the interconnect structure 222 with the recessed portions 230 at the intersections of the conductive interconnect 220 and each of the first die ring 210 and the second die ring 212 can reduce the etch rate of the interconnect structure 222 relative to the portions of the first die ring 210 and the second die ring 212 that are outside the interconnect structure 222 (e.g., having the width W2).
FIG. 2C is a cross-sectional view of the die 200 taken along section line C-C of FIG. 2A. As described above, each of the die rings 210, 212, 214, 216 can define a conductive path between the surface of the material of the semiconductor die substrate 202 and the exposed surface of the semiconductor die 200. The die ring structure 201 can be located adjacent to the integrated circuit region 204 of the semiconductor die 200 and can include the first die ring 210, the second die ring 212, the third die ring 214, and the fourth die ring 216. In embodiments where the integrated circuit region 204 includes active circuitry associated with, for example, a 3D NAND semiconductor device, the integrated circuit region 204 can comprise alternating regions of a conductive material 240 (e.g., polysilicon) and an insulating material 242 (e.g., silicon dioxide).
Each die ring 210, 212, 214, 216 can define a conductive path from the surface of the material of the semiconductor die substrate 202 to the upper surface of the die 200. By way of non-limiting example, each of the first die ring 210, the second die ring 212, the third die ring 214, and the fourth die ring 216 can include interconnected conductive pads 208 and conductive vias 209 extending from the surface of the material of the semiconductor die substrate 202 to the upper surface of the semiconductor die 200. In some embodiments, the conductive pads 208 can include a continuous structure that forms a ring around the perimeter of the die 200.
The conductive pads 208 and the conductive vias 209 may comprise tungsten, aluminum, silver, polysilicon, titanium, titanium nitride, copper, tantalum, cobalt, niobium, tantalum nitride, another conductive material, and combinations thereof. In some embodiments, the conductive pads 208 and the conductive vias 209 comprise tungsten.
The insulating material 242 can surround the conductive pads 208 and the conductive vias 209. The insulating material 242 may comprise, by way of non-limiting example, silicon dioxide, silicon nitride, a spin-on dielectric material, or another dielectric material.
FIG. 2D is a cross-sectional view of the die 200 taken along section line D-D of FIG. 2A, illustrating a conductive interconnect 220 between the first die ring 210 and the second die ring 212. The conductive interconnect 220 can electrically couple the first die ring 210 to the second die ring 212 and can extend from the upper surface of the die 200 to the surface of the material of the semiconductor die substrate 202.
The conductive interconnects 220 between the first die ring 210 and the second die ring 212 may reduce the possibility of arc breakdown between the first die ring 210 and the second die ring 212 during a patterning process (e.g., during etching of material in the die, such as material in the integrated circuit region 204).
By way of non-limiting example, the conductive interconnects 220 may reduce or even prevent arc breakdown during a plasma etch process, such as a reactive ion etch process.
Without wishing to be bound by any particular theory, it is believed that electrically connecting the first die ring 210 and the second die ring 212 via the conductive interconnects 220 may result in the potential of the first die ring 210 being equal to the potential of the second die ring 212, for example, during fabrication of the die ring structure 201. In other words, the first die ring 210 and the second die ring 212 can assume an equal potential. Thus, since the first die ring 210 and the second die ring 212 are electrically coupled, the first die ring 210 and the second die ring 212 may not be capacitively coupled.
Referring to FIG. 2E, a semiconductor die 200' during fabrication of the die ring structure 201 (FIG. 2A) is illustrated. The semiconductor die 200' can include trenches 250, 252, 254, 256 formed at locations corresponding to the first die ring 210, the second die ring 212, the third die ring 214, and the fourth die ring 216, respectively. The trenches 250, 252, 254, 256 can extend through the alternating conductive material 240 and insulating material 242, for example, in a NAND semiconductor device. During fabrication of the die ring structure 201 (FIG. 2A), the dry etch process used to form the trenches 250, 252, 254, 256 can introduce electrons, ions, or both that can charge the conductive material 240. Different portions of the semiconductor die 200' can exhibit differences in stored charge. The magnitude of the difference in stored charge may increase as the depth of the trenches 250, 252, 254, 256 increases. It is believed that since the first die ring 210 and the second die ring 212 are electrically connected, the potentials of the first die ring 210 and the second die ring 212 can equilibrate before any arc breakdown can occur.
Thus, during an etching process, such as a dry etch process including plasma etching (e.g., reactive ion etching), since the first die ring 210 and the second die ring 212 exhibit an equipotential, charge undesirably accumulated on the first die ring 210 and the second die ring 212 may not trigger arc breakdown between them. By way of comparison, in embodiments where the first die ring 210 and the second die ring 212 are not electrically connected, separate and distinct charges may accumulate in each of the first die ring 210 and the second die ring 212, and an arc can form between the first die ring 210 and the second die ring 212. It is believed that, where the first die ring 210 and the second die ring 212 are not in electrical communication, each die ring 210, 212 can act as a capacitor plate and can store charge during such an etching process. After significant charge has been stored, the charge can discharge, which can create an arc between the die rings 210, 212. The arc can be an explosive event that can damage the die 200 and its integrated circuit.
Referring back to FIG. 2A, since each of the first die ring 210, the second die ring 212, the third die ring 214, and the fourth die ring 216 is electrically connected via the conductive interconnects 220, each of the die rings 210, 212, 214, 216 can exhibit substantially the same electrical potential. Thus, since the die rings 210, 212, 214, 216 may not be capacitively coupled to each other, the conductive interconnects 220 may reduce or even prevent arc breakdown between any of the die rings 210, 212, 214, 216 during the patterning process.
Although the die ring structure 201 has been described as including four die rings, each of which includes a continuous conductive structure surrounding the integrated circuit region, the present disclosure is not so limited.
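The capacitor-plate behavior described above can be made concrete with a simple numerical sketch. This is an illustrative parallel-plate model, not part of the disclosed embodiments; the geometry and voltage values are hypothetical. The point is only that the energy available for an arc scales with the potential difference between adjacent rings, which a conductive interconnect forces to (near) zero.

```python
# Illustrative model: two adjacent die rings treated as parallel capacitor
# plates separated by a dielectric (eps_r ~ 3.9 for silicon dioxide).

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2: float, gap_m: float, eps_r: float = 3.9) -> float:
    """Parallel-plate estimate of ring-to-ring capacitance."""
    return EPS0 * eps_r * area_m2 / gap_m

def stored_energy(c_farads: float, v_diff: float) -> float:
    """Energy available for an arc: E = C * V^2 / 2."""
    return 0.5 * c_farads * v_diff ** 2

c = plate_capacitance(area_m2=1e-8, gap_m=1e-6)   # hypothetical geometry
isolated = stored_energy(c, v_diff=100.0)          # rings charge independently
tied = stored_energy(c, v_diff=0.0)                # rings tied by an interconnect
assert tied == 0.0 and isolated > 0.0
```

With the rings electrically tied, the potential difference, and hence the stored energy that could discharge as an arc, collapses to zero regardless of how much total charge the etch process deposits.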
In other embodiments, the die ring structure 201 can include fewer or more die rings, such as two die rings, three die rings, five die rings, six die rings, and so forth. In some such embodiments, the die rings can be in electrical communication with each other via one or more conductive interconnects.
Although FIGS. 2A through 2D have been described as including conductive interconnects 220 that each extend only between adjacent die rings 210, 212, 214, 216, the disclosure is not so limited. In other embodiments, the conductive interconnects 220 can be electrically connected to more than two die rings.
FIG. 3 is a top plan view of a semiconductor die 300 including conductive interconnects 320 that electrically connect different portions of a die ring structure 301. The die ring structure 301 can be located in a peripheral region 306 and disposed about an integrated circuit region 304 of the semiconductor die 300. The die ring structure 301 can include a first die ring 310 electrically connected to each of a second die ring 312, a third die ring 314, and a fourth die ring 316, each of which can be electrically connected to the others via one or more conductive interconnects 320. Thus, in some embodiments, each die ring can be in electrical communication with another die ring via one or more conductive interconnects 320. For example, the first die ring 310 can be electrically connected via one or more conductive interconnects 320 to each of the second die ring 312, the third die ring 314, and the fourth die ring 316; the second die ring 312 can be electrically connected to the first die ring 310, the third die ring 314, and the fourth die ring 316; the third die ring 314 can be electrically connected to the first die ring 310, the second die ring 312, and the fourth die ring 316; and the fourth die ring 316 can be electrically connected to the first die ring 310, the second die ring 312, and the third die ring 314.
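The connectivity property underlying both arrangements (daisy-chained interconnects as in FIG. 2A, or interconnects joining many rings as in FIG. 3) can be sketched as a small graph check: the rings sit at one common potential exactly when every ring is reachable from every other through interconnects. This is an illustrative sketch, not part of the disclosed embodiments; the ring labels follow the reference numerals of FIG. 3.

```python
# Illustrative sketch: die rings as graph nodes, conductive interconnects as
# edges. All rings share an equipotential iff the graph is connected.
from collections import defaultdict

def equipotential(rings, interconnects):
    """True if every ring is reachable from every other via interconnects."""
    graph = defaultdict(set)
    for a, b in interconnects:
        graph[a].add(b)
        graph[b].add(a)
    seen, stack = set(), [rings[0]]
    while stack:                      # depth-first traversal
        ring = stack.pop()
        if ring not in seen:
            seen.add(ring)
            stack.extend(graph[ring] - seen)
    return seen == set(rings)

rings = ["310", "312", "314", "316"]
# Daisy-chained between adjacent rings, as in FIG. 2A: still equipotential
assert equipotential(rings, [("310", "312"), ("312", "314"), ("314", "316")])
# Electrically isolated rings, as in a conventional die: capacitive coupling
assert not equipotential(rings, [])
```

Either topology suffices; the FIG. 3 arrangement simply adds redundant edges to the same connected graph.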
In some embodiments, each of the conductive interconnects 320 can electrically couple all of the die rings.
Although FIG. 3 illustrates only four conductive interconnects 320, the present disclosure is not so limited. In other embodiments, each side of the die can include between two and twenty conductive interconnects 320 that electrically connect the die rings 310, 312, 314, 316 to each other, such as between two and five conductive interconnects 320, between five and ten conductive interconnects 320, between ten and fifteen conductive interconnects 320, or between fifteen and twenty conductive interconnects 320.
In some embodiments, the vertical sides (e.g., edges) of the first die ring 310, the second die ring 312, the third die ring 314, and the fourth die ring 316 can be electrically coupled to more conductive interconnects 320 than the horizontal sides (e.g., edges) thereof. In some embodiments, the vertical sides of the die rings 310, 312, 314, 316 can be electrically coupled to between about fifteen and about twenty-five conductive interconnects 320, such as between about fifteen and about seventeen, between about seventeen and about nineteen, between about nineteen and about twenty-one, between about twenty-one and about twenty-three, or between about twenty-three and about twenty-five conductive interconnects 320. In some embodiments, each vertical side of the die rings 310, 312, 314, 316 can be electrically coupled to nineteen or twenty conductive interconnects 320. The horizontal sides of the die rings 310, 312, 314, 316 can be electrically coupled to between about ten and about twenty conductive interconnects 320, such as between about ten and about twelve, between about twelve and about fourteen, between about fourteen and about sixteen, between about sixteen and about eighteen, or between about eighteen and about twenty conductive interconnects 320.
In some embodiments, the horizontal sides of the die rings 310, 312, 314, 316 can be electrically coupled to fourteen or fifteen conductive interconnects 320.
FIG. 4 is a top plan view of another semiconductor die 400 including a die ring structure 401, in accordance with another embodiment of the present disclosure. The semiconductor die 400 can include an integrated circuit region 404 that includes active circuitry, and a plurality of die rings disposed in a peripheral region 406 surrounding the integrated circuit region 404.
The die rings can include a first die ring 410, a second die ring 412, a third die ring 414, and a fourth die ring 416. The first die ring 410 can include a continuous structure that extends around the integrated circuit region 404 of the semiconductor die 400 (when viewed from the top of the semiconductor die 400). The first die ring 410 can be substantially identical to the first die ring 210 described above with respect to FIGS. 2A and 2C.
The second die ring 412 can include a staggered conductive structure that includes discontinuous segments disposed about the first die ring 410 when viewed from the top of the semiconductor die 400. In other words, a first portion of the second die ring 412 may not be in direct electrical communication with other portions of the second die ring 412. The third die ring 414 can include a staggered conductive structure that includes discontinuous segments disposed about the second die ring 412. The fourth die ring 416 can include a staggered conductive structure that includes discontinuous segments disposed about the third die ring 414.
Conductive interconnects 420 can electrically connect the continuous first die ring 410 to different portions of the staggered second die ring 412. Although FIG. 4 illustrates four conductive interconnects 420 that electrically connect the first die ring 410 to the second die ring 412, the present disclosure is not so limited.
In other embodiments, the semiconductor die 400 can include more than one conductive interconnect 420 between the first die ring 410 and the second die ring 412 on each side of the semiconductor die 400. In some embodiments, each side of the first die ring 410 can be electrically connected to the second die ring 412 via between about two and about twenty-five conductive interconnects 420, such as between about two and about five, between about five and about ten, between about ten and about fifteen, between about fifteen and about twenty, or between about twenty and about twenty-five conductive interconnects 420.
In some embodiments, the vertical sides (e.g., edges) of the die rings 410, 412, 414, 416 can be electrically coupled to more conductive interconnects 420 than their horizontal sides (e.g., edges). In some embodiments, the vertical sides of the die rings 410, 412, 414, 416 can be electrically coupled to between about fifteen and about twenty-five conductive interconnects 420, such as between about fifteen and about seventeen, between about seventeen and about nineteen, between about nineteen and about twenty-one, between about twenty-one and about twenty-three, or between about twenty-three and about twenty-five conductive interconnects 420. In some embodiments, each vertical side of the die rings 410, 412, 414, 416 can be electrically coupled to nineteen or twenty conductive interconnects 420. The horizontal sides of the die rings 410, 412, 414, 416 can be electrically coupled to between about ten and about twenty conductive interconnects 420, such as between about ten and about twelve, between about twelve and about fourteen, between about fourteen and about sixteen, between about sixteen and about eighteen, or between about eighteen and about twenty conductive interconnects 420. In some embodiments, the horizontal sides of the die rings 410, 412, 414, 416 can be electrically coupled to fourteen or fifteen conductive interconnects 420.
Although FIG.
4 illustrates conductive interconnects 420 electrically coupling the first die ring 410 only to the second die ring 412, the present disclosure is not so limited. In other embodiments, the first die ring 410 can be in electrical communication with one or both of the third die ring 414 and the fourth die ring 416 via one or more conductive interconnects 420.
Without wishing to be bound by any particular theory, it is believed that a die ring structure 401 comprising only one continuous die ring (e.g., the continuous first die ring 410) may reduce the likelihood of forming a capacitor-type structure between adjacent die rings and, therefore, may reduce the possibility of capacitor charging between adjacent die rings. Additionally, the discontinuous segments of the second die ring 412, the third die ring 414, and the fourth die ring 416 can reduce the amount of charge that can accumulate on any particular portion of such die rings. In other words, the discontinuous segments can reduce the amount of capacitive coupling between the conductive structures as compared to conventional semiconductor devices. Moreover, electrically coupling the first die ring 410 to the second die ring 412 via the conductive interconnects 420 can form an equipotential between the first die ring 410 and the second die ring 412. Thus, arc breakdown between the die rings during an etching operation can be reduced or prevented.
Although FIG. 4 illustrates a continuous die ring comprising the first die ring 410, the present disclosure is not so limited. In other embodiments, at least one of the second die ring 412, the third die ring 414, and the fourth die ring 416 can include a continuous conductive structure surrounding the integrated circuit region 404, and the first die ring 410 can include a staggered structure that includes discontinuous segments. In some such embodiments, the conductive interconnect 420 can extend between the continuous die ring and at least one adjacent die ring.
In other words, in some embodiments, the first die ring 410 can include discontinuous segments, and at least one of the second die ring 412, the third die ring 414, and the fourth die ring 416 can include a continuous conductive structure.
FIG. 5 is a top plan view of a semiconductor die 500 including a die ring structure 501, in accordance with another embodiment of the present disclosure. The die ring structure 501 can include a first die ring 510 disposed about an integrated circuit region 504, a second die ring 512 disposed about the first die ring 510, a third die ring 514 disposed about the second die ring 512, and a fourth die ring 516 disposed about the third die ring 514. The die rings 510, 512, 514, 516 can be disposed in a peripheral region 506 of the semiconductor die 500.
The first die ring 510 and the second die ring 512 can each comprise a continuous conductive structure that extends around the integrated circuit region 504. The first die ring 510 and the second die ring 512 can be substantially identical to the first die ring 210 described above with respect to FIGS. 2A and 2C. The third die ring 514 can include a staggered conductive structure that includes discontinuous segments disposed about the second die ring 512. The fourth die ring 516 can include a staggered conductive structure that includes discontinuous segments surrounding the third die ring 514.
Conductive interconnects 520 can electrically connect the continuous first die ring 510 to the continuous second die ring 512. Although FIG. 5 illustrates four conductive interconnects 520 that electrically connect the first die ring 510 to the second die ring 512, the present disclosure is not so limited. In other embodiments, the semiconductor die 500 can include more than one conductive interconnect 520 between the first die ring 510 and the second die ring 512 on each side of the semiconductor die 500.
In some embodiments, each side of the first die ring 510 can be electrically connected to the second die ring 512 via between about two and about twenty-five conductive interconnects 520, such as between about two and about five, between about five and about ten, between about ten and about fifteen, between about fifteen and about twenty, or between about twenty and about twenty-five conductive interconnects 520.
As described above with respect to FIGS. 2 through 4, the vertical sides of the die rings 510, 512, 514, 516 can be electrically coupled to more conductive interconnects than their horizontal sides.
Although FIG. 5 illustrates conductive interconnects 520 electrically coupling the first die ring 510 only to the second die ring 512, the present disclosure is not so limited. In other embodiments, the first die ring 510 can be in electrical communication with one or both of the third die ring 514 and the fourth die ring 516 through one or more conductive interconnects 520. Similarly, the second die ring 512 can be in electrical communication with one or both of the third die ring 514 and the fourth die ring 516 through one or more conductive interconnects 520.
Although FIG. 5 has been described as including a first die ring 510 and a second die ring 512 comprising continuous conductive structures and a third die ring 514 and a fourth die ring 516 comprising staggered conductive structures, the present disclosure is not so limited. In other embodiments, the die ring structure 501 can include two continuous die rings, such as the second die ring 512 and the third die ring 514, or the third die ring 514 and the fourth die ring 516, and two staggered die rings, such as the first die ring 510 and the fourth die ring 516, or the first die ring 510 and the second die ring 512.
In still other embodiments, the first die ring 510 and the third die ring 514 can comprise continuous conductive structures, and the second die ring 512 and the fourth die ring 516 can comprise staggered conductive structures. In other embodiments, the first die ring 510 and the third die ring 514 can comprise staggered conductive structures, and the second die ring 512 and the fourth die ring 516 can comprise continuous conductive structures.
FIG. 6 is a top plan view of a semiconductor die 600 including a die ring structure 601, in accordance with another embodiment of the present disclosure. The semiconductor die 600 can include an integrated circuit region 604 and a peripheral region 606 disposed about the integrated circuit region 604. The die ring structure 601 can include a first die ring 610 that can be disposed in the peripheral region 606 and surrounding the integrated circuit region 604. A second die ring 612 can be disposed about the first die ring 610.
The first die ring 610 and the second die ring 612 can include continuous conductive structures that extend around the integrated circuit region 604. The first die ring 610 and the second die ring 612 can be substantially identical to the first die ring 210 described above with respect to FIGS. 2A and 2C.
Conductive interconnects 620 can electrically connect the continuous first die ring 610 to the continuous second die ring 612. Although FIG. 6 illustrates four conductive interconnects 620 that electrically connect the first die ring 610 to the second die ring 612, the present disclosure is not so limited. In other embodiments, the semiconductor die 600 can include more than one conductive interconnect 620 between the first die ring 610 and the second die ring 612 on each side of the semiconductor die 600.
In some embodiments, each side of the first die ring 610 can be electrically coupled to the second die ring 612 via between about two and about twenty-five conductive interconnects 620, such as between about two and about five, between about five and about ten, between about ten and about fifteen, between about fifteen and about twenty, or between about twenty and about twenty-five conductive interconnects 620.
As described above with reference to FIGS. 2 through 5, the vertical sides of the first die ring 610 and the second die ring 612 can be electrically coupled to more conductive interconnects than their horizontal sides.
Although the conductive interconnects described above with respect to FIGS. 2A through 6 have been described herein as extending from the upper surface of the die to the die substrate, the present disclosure is not so limited. In some embodiments, the interconnects may not extend completely to the surface of the substrate.
By way of non-limiting example, the interconnects may extend between the associated die rings to a depth below the upper surface sufficient to be in electrical communication with the die rings, without extending to the surface of the die substrate.
Accordingly, in some embodiments, a semiconductor device includes: a semiconductor die including an integrated circuit; a first die ring including one or more electrically conductive materials at least partially surrounding the integrated circuit, the one or more electrically conductive materials comprising a conductive path from a surface of the semiconductor die into the semiconductor die; a second die ring including a conductive material disposed about the first die ring; and a first conductive interconnect that electrically connects the first die ring to the second die ring.
Accordingly, in other embodiments, a semiconductor die includes a first die ring in a peripheral region of the semiconductor die, the first die ring comprising a continuous conductive structure extending from an upper surface of the semiconductor die into the semiconductor die and comprising a conductive material; a second die ring surrounding the first die ring, the second die ring comprising a conductive material; and a first conductive interconnect electrically connecting the first die ring to the second die ring.
Accordingly, in some embodiments, a semiconductor device includes a first die ring extending around an integrated circuit of a semiconductor die, wherein the first die ring includes a continuous conductive structure extending around the integrated circuit; a second die ring including a conductive material surrounding the first die ring; and a conductive interconnect electrically coupling the first die ring to the second die ring.
While certain illustrative embodiments have been described in connection with the drawings, those of ordinary skill in the art will recognize and appreciate that the
embodiments of the present disclosure are not limited to those embodiments that are specifically shown and described herein. Rather, various embodiments of the embodiments described herein can be made without departing from the scope of the embodiments of the present disclosure, such as those claimed herein, including legal equivalents. Add, delete, and modify. Additionally, the features of one disclosed embodiment can be combined with the features of another disclosed embodiment, and still fall within the scope of the disclosure.
In one embodiment, a pipelined processor is described that includes an execution pipeline having a plurality of stages and a multi-cycle instruction (MCI) controller adapted to assert a stall signal to stall the multi-cycle instruction within one of the stages of the execution pipeline. The MCI controller is adapted to issue a plurality of instructions to subsequent stages in the pipeline while the multi-cycle instruction is stalled.
What is claimed is:

1. A method comprising: receiving a multi-cycle instruction in a pipelined processor; stalling the multi-cycle instruction in a stage within a pipeline of the pipelined processor; and issuing a plurality of instructions to subsequent stages in the pipeline while the multi-cycle instruction is stalled.

2. The method of claim 1, further comprising detecting a stall condition prior to decoding the multi-cycle instruction.

3. The method of claim 2, wherein detecting a stall condition comprises: determining whether the received instruction comprises a multi-cycle instruction; and determining a number of registers specified by the multi-cycle instruction.

4. The method of claim 1, further comprising: deasserting a pre-stall signal at least two cycles prior to completion of the multi-cycle instruction; and deasserting a stall signal at least one cycle prior to completion of the multi-cycle instruction.

5. The method of claim 1, wherein stalling the multi-cycle instruction includes stalling the multi-cycle instruction in a decode stage of the pipeline.

6. The method of claim 1, wherein issuing a plurality of instructions includes issuing the same instruction a number of times while the multi-cycle instruction is stalled.

7. The method of claim 6, wherein issuing the same instruction a number of times includes issuing a push instruction.

8. The method of claim 1, wherein issuing a plurality of instructions includes issuing a number of different instructions while the multi-cycle instruction is stalled.

9. The method of claim 8, wherein issuing a number of different instructions includes issuing instructions to push a return address for a subroutine on a stack, push a frame pointer on the stack, move a stack pointer to the frame pointer, and update the stack pointer based on a frame size for the subroutine.

10. The method of claim 1, wherein issuing a plurality of instructions includes issuing instructions according to a state machine.

11.
The method of claim 1, wherein stalling the multi-cycle instruction comprises asserting a stall signal when the multi-cycle instruction specifies at least one of popping and pushing more than one register.

12. A method comprising: receiving a multi-cycle instruction directing a pipelined processor to pop one or more registers from a stack; stalling the multi-cycle instruction in a stage within a pipeline of the pipelined processor; and issuing a plurality of instructions to subsequent stages in the pipeline according to a state machine.

13. The method of claim 12, wherein, when the multi-cycle instruction specifies popping two registers, issuing a plurality of instructions comprises: transitioning from a first state to a second state and issuing a first pop instruction; and transitioning from the second state back to the first state and issuing a second pop instruction.

14. The method of claim 12, wherein, when the multi-cycle instruction specifies popping three or more registers from a stack, issuing a plurality of instructions comprises: transitioning from a first state to a second state and issuing a first pop instruction; issuing a number of pop instructions until two registers remain to be popped; transitioning from the second state to a third state and issuing another pop instruction to pop the second-to-last register; and transitioning from the third state back to the first state and issuing a pop instruction to pop the last register.

15.
The method of claim 12, wherein, when the multi-cycle instruction specifies popping multiple data registers and multiple pointer registers from a stack, issuing a plurality of instructions comprises: transitioning from a first state to a second state and issuing a first instruction to pop a pointer register; issuing a number of pop instructions until one pointer register remains to be popped; transitioning from the second state to a third state and issuing another pop instruction to pop the final pointer register; issuing a number of pop instructions until two data registers remain to be popped; transitioning from the third state to a fourth state and issuing a pop instruction to pop the second-to-last data register; and transitioning from the fourth state back to the first state and issuing a pop instruction to pop the last data register.

16. The method of claim 12, wherein stalling the multi-cycle instruction comprises asserting a stall signal when the multi-cycle instruction specifies popping more than one register.

17. A method comprising: receiving a multi-cycle instruction directing a pipelined processor to push one or more registers on a stack; stalling the multi-cycle instruction in a stage within a pipeline of the pipelined processor; and issuing a plurality of push instructions to subsequent stages in the pipeline according to a state machine.

18. The method of claim 17, wherein, when the multi-cycle instruction specifies pushing two registers, issuing a plurality of instructions comprises: transitioning from a first state to a second state and issuing a first push instruction; and transitioning from the second state back to the first state and issuing a second push instruction.

19.
The method of claim 17, wherein, when the multi-cycle instruction specifies pushing three or more registers on a stack, issuing a plurality of instructions comprises: transitioning from a first state to a second state and issuing a first push instruction; issuing a number of push instructions until two registers remain to be pushed; transitioning from the second state to a third state and issuing another push instruction to push the second-to-last register; and transitioning from the third state back to the first state and issuing a push instruction to push the last register.

20. The method of claim 17, wherein, when the multi-cycle instruction specifies pushing multiple data registers and multiple pointer registers on a stack, issuing a plurality of instructions comprises: transitioning from a first state to a second state and issuing a first instruction to push a data register; issuing a number of push instructions until one data register remains to be pushed; transitioning from the second state to a third state and issuing another push instruction to push the final data register; issuing a number of push instructions until two pointer registers remain to be pushed; transitioning from the third state to a fourth state and issuing a push instruction to push the second-to-last pointer register; and transitioning from the fourth state back to the first state and issuing a push instruction to push the last pointer register.

21. The method of claim 17, wherein stalling the multi-cycle instruction comprises asserting a stall signal when the multi-cycle instruction specifies pushing more than one register.

22.
An apparatus comprising: an execution pipeline having a plurality of stages; and a multi-cycle instruction (MCI) controller adapted to assert a stall signal to stall a multi-cycle instruction entering one of the stages of the execution pipeline, wherein the MCI controller issues a plurality of instructions to subsequent stages in the pipeline while the multi-cycle instruction is stalled.

23. The apparatus of claim 22, further comprising a stall controller receiving the stall signal from the MCI controller and generating a plurality of stall signals to stall the stage holding the MCI instruction and prior stages in the pipeline.

24. The apparatus of claim 22, wherein the MCI controller is adapted to issue the same instruction a number of times while the multi-cycle instruction is stalled.

25. The apparatus of claim 22, wherein the MCI controller is adapted to issue a push instruction to direct the pipeline to push a plurality of registers.

26. The apparatus of claim 22, wherein the MCI controller is adapted to issue a number of different instructions while the multi-cycle instruction is stalled.

27. The apparatus of claim 22, wherein the MCI controller is adapted to issue instructions to push a return address for a subroutine on a stack, push a frame pointer on the stack, move a stack pointer to the frame pointer, and update the stack pointer based on a frame size for the subroutine.

28. The apparatus of claim 22, wherein the MCI controller asserts the stall signal when the multi-cycle instruction specifies either popping or pushing more than one register.

29.
A system comprising: a Flash memory device; and a processor coupled to the Flash memory device, wherein the processor includes an execution pipeline having a plurality of stages and a multi-cycle instruction (MCI) controller adapted to assert a stall signal to stall a multi-cycle instruction within one of the stages of the execution pipeline, wherein the MCI controller is adapted to issue a plurality of instructions to subsequent stages in the pipeline while the multi-cycle instruction is stalled.

30. The system of claim 29, wherein the processor further comprises a stall controller receiving the stall signal from the MCI controller and generating a plurality of stall signals to stall the stage holding the MCI instruction and prior stages in the pipeline.

31. The system of claim 29, wherein the MCI controller is adapted to issue the same instruction a number of times while the multi-cycle instruction is stalled.

32. The system of claim 29, wherein the MCI controller is adapted to issue a number of different instructions while the multi-cycle instruction is stalled.

33. The system of claim 29, wherein the MCI controller asserts the stall signal when the multi-cycle instruction specifies at least one of popping and pushing more than one register.

34. A method comprising: receiving a link machine instruction; stalling the link instruction in a stage within a pipeline of a pipelined processor; and executing the link instruction by issuing a plurality of instructions to subsequent stages in the pipeline according to a state machine.

35.
The method of claim 34, wherein issuing a plurality of instructions comprises: transitioning from a first state to a second state and issuing an instruction to push a return address on a stack; transitioning from the second state to a third state and issuing an instruction to push a frame pointer on the stack; transitioning from the third state to a fourth state and issuing an instruction to move a stack pointer to the frame pointer; and transitioning from the fourth state back to the first state and issuing an instruction to update the stack pointer based on a frame size specified by the link machine instruction.

36. A method comprising: receiving an unlink machine instruction; stalling the unlink instruction in a stage within a pipeline of a pipelined processor; and executing the unlink instruction by issuing a plurality of instructions to subsequent stages in the pipeline according to a state machine.

37. The method of claim 36, wherein issuing a plurality of instructions comprises: transitioning from a first state to a second state and issuing an instruction to restore a return address from a stack; transitioning from the second state to a third state and issuing an instruction to restore a stack pointer; and transitioning from the third state back to the first state and issuing an instruction to restore a frame pointer from the stack.

38. An apparatus comprising: state machine logic to control the issuing of a plurality of sub-operations in a pipelined processor in response to a multi-cycle machine instruction; and an address generation unit adapted to generate register addresses for use during execution of the sub-operations.

39. The apparatus of claim 38, wherein the address generation unit comprises a counter for incrementing or decrementing a current register address.

40. The apparatus of claim 38, wherein the address generation unit comprises a clocked storage circuit to store a current register address.
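The Link and Unlink sub-operation sequences recited in claims 35 and 37 can be read as ordered lists of instructions issued from the decode stage while the multi-cycle instruction remains stalled there. The following is an illustrative Python sketch, not the patent's hardware; the function names and operation strings are hypothetical.

```python
# Hypothetical sketch of the sub-operation sequences that claims 35 and 37
# attribute to the Link and Unlink multi-cycle instructions. Each list entry
# stands for one instruction issued on one state-machine transition.

def expand_link(frame_size):
    """Sub-operations for Link, one per state transition (claim 35)."""
    return [
        "push return address on stack",                       # first -> second
        "push frame pointer on stack",                        # second -> third
        "move stack pointer to frame pointer",                # third -> fourth
        f"update stack pointer by frame size {frame_size}",   # fourth -> first
    ]

def expand_unlink():
    """Sub-operations for Unlink, one per state transition (claim 37)."""
    return [
        "restore return address from stack",                  # first -> second
        "restore stack pointer",                              # second -> third
        "restore frame pointer from stack",                   # third -> first
    ]
```

Note that Link issues four sub-operations and Unlink three, matching the number of state transitions each claim enumerates.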
MULTI-CYCLE INSTRUCTIONS

BACKGROUND

This invention relates to executing multi-cycle instructions in a programmable processor. A programmable processor, such as a microprocessor for a computer or a digital signal processing system, may support one or more "multi-cycle" machine instructions in which a single machine instruction directs the processor to perform multiple operations. For example, a typical multi-cycle machine instruction is a Load Multiple, in which the processor performs a series of load operations in response to a single machine instruction. Another example is a "Push-Pop Multiple" instruction that directs the processor to push or pop multiple registers to or from a stack. Because multi-cycle instructions pack multiple operations into a single machine instruction, they typically reduce code size and improve operational efficiency of the programmable processor.

DESCRIPTION OF DRAWINGS

Figure 1 is a block diagram illustrating an example of a pipelined programmable processor according to an embodiment of the invention. Figure 2 is a schematic illustrating an example execution pipeline according to an embodiment of the invention. Figure 3 illustrates an example state diagram for pushing multiple registers onto a stack. Figure 4 illustrates an example state diagram for popping multiple registers off a stack. Figure 5 illustrates an example state diagram for execution of a Link instruction. Figure 6 illustrates an example state diagram for execution of an Unlink instruction. Figure 7 is a schematic diagram illustrating an example embodiment of a stall controller. Figure 8 is a timing diagram for a stall generator. Figure 9 is a schematic diagram of an example circuit for generating a stall signal for multi-cycle instructions. Figures 10 and 11 are schematic diagrams of example address generation circuits.

DESCRIPTION

Figure 1 is a block diagram illustrating a programmable processor 2 that supports a number of multi-cycle machine instructions.
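To illustrate the code-size point in the background, a single Load Multiple style instruction stands in for a run of individual loads. The sketch below is a behavioral Python model under assumed semantics (consecutive memory locations, one register per load); it is not the processor's actual encoding.

```python
# Behavioral sketch (assumed semantics, not the patent's encoding): a single
# "Load Multiple" macro-operation performs one load per named register from
# consecutive memory locations, replacing N separate load instructions.

def load_multiple(memory, base_addr, regs):
    """Perform the series of loads implied by one Load Multiple."""
    regfile = {}
    for offset, reg in enumerate(regs):
        regfile[reg] = memory[base_addr + offset]  # one load operation each
    return regfile

mem = {100: 7, 101: 8, 102: 9}
loaded = load_multiple(mem, 100, ["r0", "r1", "r2"])  # three loads, one opcode
```

One opcode thus encodes three load operations, which is where the code-size saving comes from.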
Processor 2 includes an execution pipeline 4 and a control unit 6. Control unit 6 controls the flow of instructions and data through pipeline 4 in accordance with a system clock. During the processing of an instruction, control unit 6 may direct the various components of the pipeline to decode the instruction and correctly perform the corresponding operation including, for example, writing the results back to memory. Instructions may be loaded into a first stage of pipeline 4 and processed through the subsequent stages. Each stage typically processes concurrently with the other stages. Data passes between the stages in pipeline 4 in accordance with the system clock signal. The results of the instructions emerge at the end of the pipeline 4 in rapid succession. As described in detail below, processor 2 supports a number of multi-cycle instructions. In response to a multi-cycle instruction, stall controller 8 may stall one or more stages of pipeline 4 by asserting stall signals 9 in order to prevent pipeline 4 from fetching and decoding additional instructions. After stalling a portion of pipeline 4, multi-cycle instruction (MCI) controller 5 may assert MCI signals 7 and direct pipeline 4 to perform additional operations defined by the current multi-cycle instruction. Figure 2 illustrates an example pipeline 4 according to the invention. Pipeline 4 may have, for example, five stages: instruction fetch (IF), instruction decode (DEC), address calculation (AC), execute (EX) and write back (WB). Instructions may be fetched from a memory device such as, for example, main memory or an instruction cache during the first stage (IF) by fetch unit 11 and decoded during the second stage (DEC) by instruction decode unit 12. At the next clock cycle, the results are passed to the third stage (AC), where data address generators 13 calculate any memory addresses needed to perform the operation.
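The stage-by-stage advance and the effect of stall signals 9 can be sketched behaviorally. The Python model below is a deliberately simplified assumption (a stalled stage register holds its contents, and the stage after a stalled stage receives a bubble); it is not the patent's circuit.

```python
# Minimal behavioral sketch of the five-stage pipeline of Figure 2 and the
# effect of stall signals 9. Assumptions (not from the patent's circuit):
# a stalled stage register holds its contents, and the stage downstream of
# a stalled stage receives a bubble (None) instead of a new result.

STAGES = ["IF", "DEC", "AC", "EX", "WB"]

def clock(pipe, next_instr, stalled):
    """Advance one system-clock cycle. `pipe` maps stage name to the
    instruction it holds; `stalled` names stages whose registers do not latch."""
    new = {}
    for i in range(len(STAGES) - 1, 0, -1):
        dst, src = STAGES[i], STAGES[i - 1]
        if dst in stalled:
            new[dst] = pipe[dst]      # stage register holds its result
        elif src in stalled:
            new[dst] = None           # stalled source injects a bubble
        else:
            new[dst] = pipe[src]      # normal latch from the prior stage
    new["IF"] = pipe["IF"] if "IF" in stalled else next_instr
    return new

pipe = {"IF": "i1", "DEC": "i0", "AC": None, "EX": None, "WB": None}
flowing = clock(pipe, "i2", set())        # instructions advance one stage
held = clock(pipe, "i2", {"IF", "DEC"})   # fetch and decode are stalled
```

With fetch and decode stalled, `held` keeps i0 in DEC and i1 in IF while AC receives a bubble, mirroring how stall controller 8 freezes the front of pipeline 4 while later stages drain.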
During the execution stage (EX), execution unit 15 performs a specified operation such as, for example, adding or multiplying two numbers. Execution unit 15 may contain specialized hardware for performing the operations including, for example, one or more arithmetic logic units (ALUs), floating-point units (FPUs) and barrel shifters. A variety of data may be applied to execution unit 15, such as the addresses generated by data address generators 13, data retrieved from memory or data retrieved from data registers 14. During the final stage (WB), the results are written back to data memory or to data registers 14. A multi-cycle instruction behaves as multiple instructions being issued from the decode stage of pipeline 4 over several clock cycles. When an MCI is executing, it remains stalled in the decode stage of pipeline 4 while multiple "sub-instructions" are sent down pipeline 4 under control of MCI controller 5. MCI controller 5 operates according to a number of internal state machines in order to direct instruction decode unit 12 to dispatch a number of operations over a number of clock cycles during the execution of the MCI. Stall controller 8 may stall one or more stages of pipeline 4 by asserting stall signals 9 in order to prevent pipeline 4 from fetching and decoding additional instructions. More specifically, the stages of pipeline 4 include storage circuits, such as stage registers 19, for storing the results of the current stage. Stage registers 19 typically latch the results according to the system clock. Stage registers 19 receive the stall signals 9, which control whether or not stage registers 19 latch the results from the previous stage. In this manner, stall controller 8 may stall one or more stages of pipeline 4 in response to a multi-cycle instruction. Examples of multi-cycle instructions supported by processor 2 include a PushPopMultiple machine instruction, a Link instruction and an Unlink instruction.
The PushPopMultiple instruction directs processor 2 to push or pop from 1 to N data registers and/or pointer registers. The PushPopMultiple remains stalled in the decode stage for a number of clock cycles equal to the number of registers being accessed. The following illustrates an example push multiple machine instruction:

[--sp] = (r7-r4, p5-p0)

In this example, a single machine instruction directs processor 2 to push four data registers (r4 through r7) and six pointer registers (p0 through p5). Generally, a single machine instruction may specify zero or more data registers and zero or more pointer registers, as long as at least one register is specified. Figure 3 illustrates an example state diagram 30 for a state machine within MCI controller 5 for pushing multiple registers onto a stack. As described below, MCI controller 5 operates according to state diagram 30 in response to a push multiple instruction in order to push one or more registers. While operating according to state diagram 30, MCI controller 5 may assert one or more MCI signals 7 including a PUSH_DREG signal, which directs decoder 12 to generate pipeline control signals for dispatching a push of a data register, and a PUSH_PREG signal, which directs decoder 12 to generate pipeline control signals for dispatching a push of a pointer register. In addition, MCI controller 5 may assert a D_REG_PRESELECT signal that initializes a counter whose count indicates which data register to push, or a P_REG_PRESELECT signal that initializes a counter whose count indicates which pointer register to push. MCI controller 5 may also assert an MCI_PRESTALL signal that directs stall controller 8 to stall pipeline 4 on the following clock cycle.
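The example instruction above can be read as a sequence of single-register pushes, one per clock, with the instruction stalled in decode for as many cycles as there are registers. The sketch below assumes data registers are pushed before pointer registers, as the description states; the highest-number-first order within each group is an illustrative assumption.

```python
# Sketch of expanding "[--sp] = (r7-r4, p5-p0)" into one push per clock.
# Data registers precede pointer registers per the description; pushing the
# highest-numbered register first within each group is an assumption here.

def expand_push_multiple(data_regs, ptr_regs):
    ops = [f"push r{n}" for n in sorted(data_regs, reverse=True)]
    ops += [f"push p{n}" for n in sorted(ptr_regs, reverse=True)]
    return ops

ops = expand_push_multiple(range(4, 8), range(0, 6))  # r4-r7, p0-p5
```

The instruction therefore occupies the decode stage for ten cycles, one per register accessed.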
The following table summarizes the conditions that cause the Push Multiple state machine in MCI controller 5 to transition from one state to another and the corresponding output signals that are asserted, where D is an instruction bit that indicates a data register is to be pushed, P is an instruction bit that indicates a pointer register is to be pushed, DR is an instruction field that indicates a starting data register to push, PR is an instruction field that indicates a starting pointer register to push, D_TAG represents the current data register being pushed, P_TAG represents the current pointer register being pushed, DMAX represents the maximum data register in the range of available data registers, and PMAX represents the maximum pointer register in the range of available pointer registers:

PATH | CONDITIONS                                 | OUTPUT
34A  | Not a push multiple instruction            | None
34B  | D & !P & DR=DMAX                           | assert PUSH_DREG
34C  | !D & P & PR=PMAX                           | assert PUSH_PREG
34D  | D & !P & DR=DMAX-1                         | assert PUSH_DREG; assert D_REG_PRESELECT
34E  | None                                       | assert PUSH_DREG
34F  | !D & P & PR=PMAX-1                         | assert PUSH_PREG; assert P_REG_PRESELECT
34F' | D & P & DR=DMAX & PR=PMAX                  | assert PUSH_DREG
34G  | None                                       | assert PUSH_PREG
34H  | D & !P & DR<DMAX-1                         | assert PUSH_DREG; assert D_REG_PRESELECT; assert MCI_PRESTALL
34H' | D & P & DR<DMAX                            | assert PUSH_DREG; assert D_REG_PRESELECT; assert MCI_PRESTALL
34I  | ((!P & D_TAG<DMAX-1) OR (P & D_TAG<DMAX))  | assert PUSH_DREG; assert D_REG_PRESELECT; assert MCI_PRESTALL
34J  | !D & P & PR<PMAX-1                         | assert PUSH_PREG; assert P_REG_PRESELECT; assert MCI_PRESTALL
34J' | D & P & DR=DMAX & PR<PMAX                  | assert PUSH_DREG; assert D_REG_PRESELECT; assert MCI_PRESTALL
34K  | PR=PMAX-1                                  | assert PUSH_PREG; assert P_REG_PRESELECT
34L  | P & D_TAG=DMAX & P_TAG<PMAX                | assert PUSH_DREG; assert D_REG_PRESELECT; assert MCI_PRESTALL
34M  | P & D_TAG=DMAX & P_TAG=PMAX                | assert PUSH_DREG; assert D_REG_PRESELECT
34N  | P_TAG<PMAX-1                               | assert PUSH_PREG; assert P_REG_PRESELECT; assert MCI_PRESTALL
34O  | D_TAG=DMAX-1 & !P                          | assert PUSH_DREG; assert D_REG_PRESELECT

Table 1

Initially, MCI controller 5 starts in the WAIT state until an instruction is fetched by fetch unit 11 and decoded by decode unit 12. If the instruction is not a PushPopMultiple instruction, MCI controller 5 returns to the WAIT state as indicated by path 34A. If the instruction is a PushPopMultiple instruction but only instructs processor 2 to push a single data register, the state machine asserts the PUSH_DREG signal and returns to the WAIT state via path 34B. If the instruction is a PushPopMultiple instruction that instructs processor 2 to push a single pointer register, the state machine asserts the PUSH_PREG signal and returns to the WAIT state via path 34C. If the instruction specifies pushing two data registers or two pointer registers, the state machine changes state to state 32A or state 32C, respectively.
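Abstracting away Table 1's per-register counters and named states, the per-cycle outputs of the push state machine can be modeled behaviorally. The Python sketch below is an interpretation, not a transcription: MCI_PRESTALL stays asserted while more than two pushes remain after the current one, matching the paths that deassert it two transitions before the instruction completes.

```python
# Simplified behavioral model of the per-cycle outputs of state diagram 30
# (an interpretation of Table 1, not a transcription: the per-register
# counters and named states are abstracted into remaining-push counts).

def push_multiple_cycles(n_data, n_ptr):
    """Return (signal, mci_prestall) per clock, data registers first."""
    total = n_data + n_ptr
    out = []
    for i in range(total):
        signal = "PUSH_DREG" if i < n_data else "PUSH_PREG"
        remaining_after = total - i - 1
        # MCI_PRESTALL deasserts once only two pushes remain, letting the
        # pipeline resume on the cycle after the final push.
        out.append((signal, remaining_after >= 2))
    return out

cycles = push_multiple_cycles(3, 0)  # three data registers, as via path 34H
```

For three data registers this yields one prestalled push followed by two pushes with MCI_PRESTALL deasserted, the same shape as the 34H, 34O, 34E traversal described in the text.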
The state machine transitions to these states via path 34D or 34F, and asserts the PUSH_DREG signal while transitioning to state 32A or the PUSH_PREG signal while transitioning to state 32C. In addition, while transitioning along path 34D, the state machine asserts the D_REG_PRESELECT signal, initializing the counter that indicates which data registers to push. Similarly, while transitioning along path 34F, the state machine asserts the P_REG_PRESELECT signal, initializing the counter that indicates which pointer registers to push. The state machine returns to the WAIT state from state 32A via path 34E. During this transition, MCI controller 5 again asserts PUSH_DREG and deasserts D_REG_PRESELECT, causing decode unit 12 to dispatch the push of another data register. Similarly, the state machine returns to the WAIT state from state 32C via path 34G. During this transition, MCI controller 5 asserts PUSH_PREG and deasserts P_REG_PRESELECT, causing execution unit 15 to push another pointer register. For a PushPopMultiple instruction that requires instruction decode unit 12 to dispatch the push of three or more data registers, the state machine transitions from the WAIT state to state 32B via path 34H. During the transition, MCI controller 5 asserts the PUSH_DREG signal and asserts D_REG_PRESELECT, causing execution unit 15 to push a first data register. In addition, MCI controller 5 asserts the MCI_PRESTALL signal, causing stall controller 8 to stall one or more stages of pipeline 4 on the following clock.
For example, in one embodiment, stall controller 8 asserts STALL_DEC to stall the decode stage of pipeline 4. Once in state 32B, MCI controller 5 continues to push data registers until two registers remain to be pushed. For example, if the instruction called for six data registers to be pushed, MCI controller 5 traverses path 34I three times, pushing a data register each time, until the current data register to be pushed equals the maximum available data register (DMAX) minus one, i.e., when two data registers remain to be pushed. While traversing path 34I, MCI controller 5 asserts the PUSH_DREG signal, the D_REG_PRESELECT signal and the MCI_PRESTALL signal. When two data registers remain to be pushed, MCI controller 5 transitions to state 32A via path 34O while pushing one of the remaining data registers. During this transition, MCI controller 5 deasserts MCI_PRESTALL. Instruction decoder 12 receives a new instruction on the cycle after MCI controller 5 has traversed path 34E and has pushed the remaining data register. Similarly, for a PushPopMultiple instruction that requires instruction decode unit 12 to dispatch the push of three or more pointer registers, the state machine transitions from the WAIT state to state 32D via path 34J. During the transition, MCI controller 5 asserts the PUSH_PREG signal and asserts P_REG_PRESELECT, causing execution unit 15 to push a first pointer register. In addition, MCI controller 5 asserts the MCI_PRESTALL signal, causing stall controller 8 to stall one or more stages of pipeline 4. In state 32D, MCI controller 5 pushes pointer registers by traversing path 34N until two pointer registers remain to be pushed. While traversing path 34N, MCI controller 5 asserts the PUSH_PREG signal, the P_REG_PRESELECT signal and the MCI_PRESTALL signal. Once two pointer registers remain to be pushed, MCI controller 5 transitions to state 32C via path 34K while pushing a pointer register.
During this transition, MCI controller 5 deasserts MCI_PRESTALL. In this manner, pipeline 4 resumes operation on the next clock cycle after MCI controller 5 has transitioned to the WAIT state via path 34G and has pushed the remaining pointer register. In addition to the above functionality, a PushPopMultiple instruction may specify multiple data registers and multiple pointer registers. Generally, state machine 30 is designed to first push the data registers, followed by the pointer registers, although the invention is not limited as such. For a Push Multiple instruction that specifies pushing a single data register and a single pointer register, MCI controller 5 transitions to state 32C via path 34F' and asserts the PUSH_DREG signal to push the data register. Next, MCI controller 5 transitions back to the WAIT state via path 34G and pushes the pointer register. For a Push Multiple that specifies pushing one data register and more than one pointer register, MCI controller 5 transitions to state 32D via path 34J' and asserts the PUSH_DREG signal, the D_REG_PRESELECT signal and the MCI_PRESTALL signal to push the data register. Next, MCI controller 5 pushes all but two of the pointer registers by traversing path 34N, pushes a pointer register by traversing path 34K, and pushes the last pointer register and returns to the WAIT state by traversing path 34G. Finally, for a PushPopMultiple instruction that specifies pushing multiple data registers and at least one pointer register, MCI controller 5 transitions to state 32B via path 34H' and asserts the PUSH_DREG signal, the D_REG_PRESELECT signal and the MCI_PRESTALL signal to push a first data register. Next, MCI controller 5 pushes all but one of the data registers by traversing path 34I. If the instruction specifies a single pointer register to be pushed, MCI controller 5 pushes the final data register by traversing path 34M and pushes the single pointer register by traversing path 34G.
Otherwise, MCI controller 5 pushes the final data register by traversing path 34L and pushes the multiple pointer registers by traversing path 34N if necessary, followed by paths 34K and 34G. Figure 4 illustrates an example state diagram 40 of a state machine within MCI controller 5 for popping multiple registers from a stack. MCI controller 5 operates according to state diagram 40 in response to a PushPopMultiple instruction that specifies one or more registers to be popped from a stack in memory. While operating according to state diagram 40, MCI controller 5 may assert one or more MCI signals 7 including a POP_DREG signal, which directs pipeline 4 to pop a data register, and a POP_PREG signal, which directs pipeline 4 to pop a pointer register. In addition, MCI controller 5 may assert a D_REG_PRESELECT signal initializing a counter that indicates which data register to pop, or a P_REG_PRESELECT signal initializing a counter that indicates which pointer register to pop. MCI controller 5 may also assert the MCI_PRESTALL signal to stall pipeline 4 on the following clock cycle.
The following table summarizes the conditions that cause MCI controller 5 to transition between states of state diagram 40 and the corresponding output signals that are asserted, where D is an instruction bit that indicates a data register is to be popped, P is an instruction bit that indicates a pointer register is to be popped, DR is an instruction field indicating the last data register to pop from the stack, PR is an instruction field indicating the last pointer register to pop from the stack, D_TAG represents the current data register being popped, P_TAG represents the current pointer register being popped, DMAX represents the maximum data register in the range of available data registers, and PMAX represents the maximum pointer register in the range of available pointer registers:

PATH | CONDITIONS                              | OUTPUT
44A  | Not a pop multiple instruction          | None
44B  | P & !D & PR=PMAX                        | assert POP_PREG
44C  | !P & D & DR=DMAX                        | assert POP_DREG
44D  | P & !D & PR=PMAX-1                      | assert POP_PREG; assert P_REG_PRESELECT
44E  | None                                    | assert POP_PREG
44F  | !P & D & DR=DMAX-1                      | assert POP_DREG; assert D_REG_PRESELECT
44F' | D & P & DR=DMAX & PR=PMAX               | assert POP_PREG
44G  | None                                    | assert POP_DREG
44H  | P & !D & PR<PMAX-1                      | assert POP_PREG; assert P_REG_PRESELECT; assert MCI_PRESTALL
44H' | D & P & PR<PMAX                         | assert POP_PREG; assert P_REG_PRESELECT; assert MCI_PRESTALL
44I  | ((!D & P_TAG>PR+1) OR (D & P_TAG>PR))   | assert POP_PREG; assert P_REG_PRESELECT; assert MCI_PRESTALL
44J  | !P & D & DR<DMAX-1                      | assert POP_DREG; assert D_REG_PRESELECT; assert MCI_PRESTALL
44J' | D & P & PR=PMAX & DR<DMAX               | assert POP_PREG; assert P_REG_PRESELECT; assert MCI_PRESTALL
44K  | D_TAG=DR+1                              | assert POP_DREG; assert D_REG_PRESELECT
44L  | D & P_TAG=PR & DR<DMAX                  | assert POP_PREG; assert P_REG_PRESELECT; assert MCI_PRESTALL
44M  | D & P_TAG=PR & DR=DMAX                  | assert POP_PREG; assert P_REG_PRESELECT
44N  | D_TAG>DR+1                              | assert POP_DREG; assert D_REG_PRESELECT; assert MCI_PRESTALL
44O  | P_TAG=PR+1 & !D                         | assert POP_PREG; assert P_REG_PRESELECT

Table 2

Initially, MCI controller 5 starts in the WAIT state until an instruction is fetched by fetch unit 11 and decoded by decode unit 12. If the instruction is not a PushPopMultiple instruction, MCI controller 5 returns to the WAIT state as indicated by path 44A. If the instruction is a PushPopMultiple instruction that directs processor 2 to pop a single pointer register and no data registers, MCI controller 5 asserts the POP_PREG signal and returns to the WAIT state via path 44B. If the instruction is a Pop Multiple command that instructs processor 2 to pop a single data register, MCI controller 5 asserts the POP_DREG signal and returns to the WAIT state via path 44C.
If the instruction specifies popping two pointer registers or two data registers, MCI controller 5 transitions to the state 42A or the state 42C, respectively, via path 44D or 44F. MCI controller 5 asserts the POP_PREG signal while transitioning to the state 42A, or the POP_DREG signal while transitioning to the state 42C. In addition, while transitioning along path 44D, MCI controller 5 asserts the PREG_PRESELECT signal, initializing a counter whose count indicates which pointer registers to pop. Similarly, while transitioning along path 44F, MCI controller 5 asserts the DREG_PRESELECT signal, initializing a counter whose count indicates which data registers to pop.

After popping the first of the two pointer registers, MCI controller 5 returns to the WAIT state from the state 42A via path 44E. During this transition, MCI controller 5 again asserts POP_PREG and deasserts PREG_PRESELECT, causing execution unit 15 to pop another pointer register. Similarly, after popping the first of the two data registers, MCI controller 5 returns to the WAIT state from the state 42C via path 44G. During this transition, MCI controller 5 asserts POP_DREG and deasserts DREG_PRESELECT, causing execution unit 15 to pop another data register.

For a PushPopMultiple instruction that requires instruction decode unit 12 to dispatch the pop of three or more pointer registers, MCI controller 5 transitions from the WAIT state to the state 42B via path 44H. During the transition, MCI controller 5 asserts the POP_PREG signal and asserts PREG_PRESELECT, causing execution unit 15 to pop a first pointer register. In addition, MCI controller 5 asserts the MCI_PRESTALL signal, causing stall controller 8 to stall one or more stages of pipeline 4 on the following clock. Once in the state 42B, MCI controller 5 continues to pop pointer registers until only two pointer registers remain to be popped.
For example, if the instruction called for six pointer registers to be popped, MCI controller 5 traverses path 44I three times, popping a pointer register each time, until the current pointer register to be popped equals the maximum available pointer register (PMAX) minus one, i.e., when two pointer registers remain to be popped. While traversing path 44I, MCI controller 5 asserts the POP_PREG signal, the PREG_PRESELECT signal and the MCI_PRESTALL signal. When two pointer registers remain to be popped, MCI controller 5 transitions to the state 42A via path 44O while popping one of the remaining pointer registers. During this transition, MCI controller 5 deasserts MCI_PRESTALL. Instruction decoder 12 receives a new instruction on the cycle after MCI controller 5 has traversed path 44E and has popped the remaining pointer register.

Similarly, for a PushPopMultiple instruction that requires instruction decode unit 12 to dispatch the pop of three or more data registers and no pointer registers, MCI controller 5 transitions from the WAIT state to the state 42D via path 44J. During the transition, MCI controller 5 asserts the POP_DREG signal and asserts DREG_PRESELECT, causing execution unit 15 to pop a first data register. In addition, MCI controller 5 asserts the MCI_PRESTALL signal, causing stall controller 8 to stall one or more stages of pipeline 4. In the state 42D, MCI controller 5 pops data registers by traversing path 44N until two data registers remain to be popped. While traversing path 44N, MCI controller 5 asserts the POP_DREG signal, the DREG_PRESELECT signal and the MCI_PRESTALL signal. Once two data registers remain to be popped, MCI controller 5 transitions to the state 42C via path 44K while popping a data register. During this transition, MCI controller 5 deasserts MCI_PRESTALL. MCI controller 5 then transitions to the WAIT state via path 44G and pops the remaining data register.
In addition to the above functionality, a PushPopMultiple instruction may specify multiple data registers and multiple pointer registers to be popped. Generally, state machine 40 is designed to first pop the pointer registers, followed by the data registers, although the invention is not limited as such. For a PushPopMultiple instruction that specifies popping a single pointer register and a single data register, MCI controller 5 transitions to the state 42C via path 44F' and asserts the POP_PREG signal to pop the pointer register. Next, MCI controller 5 transitions back to the WAIT state via path 44G, asserts the POP_DREG signal and pops the data register. For a PushPopMultiple instruction that specifies popping one pointer register and more than one data register, MCI controller 5 transitions to the state 42D via path 44J' and asserts the POP_PREG signal, the PREG_PRESELECT signal and the MCI_PRESTALL signal to pop the pointer register. Next, MCI controller 5 pops all but two of the data registers by traversing path 44N, pops a data register by traversing path 44K, and pops the last data register and returns to the WAIT state by traversing path 44G. Finally, for a PushPopMultiple instruction that specifies popping multiple pointer registers and at least one data register, MCI controller 5 transitions to the state 42B via path 44H' and asserts the POP_PREG signal and the PREG_PRESELECT signal to pop a first pointer register. Next, MCI controller 5 pops all but one of the pointer registers by traversing path 44I. If the instruction specifies a single data register to be popped, MCI controller 5 pops the final pointer register by traversing path 44M and pops the single data register by traversing path 44G. Otherwise, MCI controller 5 pops the final pointer register by traversing path 44L and pops the multiple data registers by traversing path 44N if necessary, followed by 44K and 44G.
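The register ordering described above — pointer registers first, counting down from the highest available register to the last register named in the instruction, then data registers likewise — can be sketched in Python. This is an illustrative model only: the register names and the PMAX/DMAX values are assumptions, and the sketch ignores the cycle-level state transitions and stall behavior.

```python
# Hypothetical sketch of the register ordering a PushPopMultiple pop dispatches.
# PMAX/DMAX and the P#/D# register names are illustrative assumptions.

PMAX = 5   # assumed highest pointer register (P5)
DMAX = 7   # assumed highest data register (D7)

def pop_multiple_order(pop_ptrs, pop_data, pr=0, dr=0):
    """Return the sequence of registers popped, one per cycle:
    pointer registers first (PMAX down to PR), then data registers
    (DMAX down to DR)."""
    order = []
    if pop_ptrs:                           # pointer registers are popped first ...
        for tag in range(PMAX, pr - 1, -1):
            order.append(f"P{tag}")
    if pop_data:                           # ... followed by the data registers
        for tag in range(DMAX, dr - 1, -1):
            order.append(f"D{tag}")
    return order

# One pointer register (P5 only) and three data registers (D7..D5):
print(pop_multiple_order(True, True, pr=5, dr=5))   # ['P5', 'D7', 'D6', 'D5']
```

The sketch makes the pointer-before-data convention of state machine 40 explicit without modeling the WAIT/42A-42D states themselves.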
Additional examples of instructions that direct pipelined processor 2 to perform multiple operations according to the invention are the Link instruction and the Unlink instruction. The Link instruction is typically used when invoking a subroutine and causes pipelined processor 2 to push a return address for a subroutine on a stack, push a frame pointer on the stack, move the stack pointer to the frame pointer and update the stack pointer based on a frame size for the subroutine, as specified by the instruction. The Unlink instruction is used when exiting the subroutine and causes pipelined processor 2 to restore the return address from the stack, restore the stack pointer and restore the frame pointer from the stack. The following examples illustrate the Link and Unlink instructions:

link 1234;
unlink;

Figure 5 illustrates an example state diagram 50 for a state machine within MCI controller 5 for carrying out the operations of the Link command. While operating according to state diagram 50, MCI controller 5 may assert one or more MCI signals 7 directing pipeline 4 to perform a corresponding operation. In addition, MCI controller 5 may assert the MCI_PRESTALL signal to stall pipeline 4. The following table summarizes the output signals that are asserted while MCI controller 5 transitions through state machine 50:

PATH  OUTPUT SIGNALS
54A   None
54B   PUSH_RTS, MCI_PRESTALL
54C   PUSH_FP, MCI_PRESTALL
54D   MOVE_SP_TO_FP
54E   UPDATE_SP

Table 3

If the present instruction is not a Link command, then state machine 50 directs MCI controller 5 to return to the WAIT state via path 54A. If the instruction is a Link instruction, MCI controller 5 transitions to state 52B via path 54B and asserts the PUSH_RTS signal, causing decode unit 12 to dispatch a push of the return address on the stack. In addition, MCI controller 5 asserts MCI_PRESTALL to stall pipeline 4 on the following cycle.
Next, MCI controller 5 transitions to state 52C via path 54C, asserts PUSH_FP, causing decode unit 12 to dispatch a push of the frame pointer register onto the stack, and asserts MCI_PRESTALL to stall pipeline 4 on the following cycle. MCI controller 5 then transitions to state 52D via path 54D and asserts MOVE_SP_TO_FP, causing instruction decode unit 12 to dispatch a move of the contents of the stack pointer register to the frame pointer register. Finally, MCI controller 5 transitions to the WAIT state via path 54E and asserts UPDATE_SP, causing instruction decode unit 12 to dispatch a subtract of the frame size from the stack pointer, as specified by the instruction.

Figure 6 illustrates an example state diagram 60 for a state machine within MCI controller 5 for carrying out the operations of the Unlink command. The following table summarizes the output signals that are asserted while MCI controller 5 transitions through state machine 60:

PATH  OUTPUT SIGNALS
64A   None
64B   LOAD_RTS, MCI_PRESTALL
64C   LOAD_SP
64D   UPDATE_FP

Table 4

If the present instruction is not an Unlink command, then state machine 60 directs MCI controller 5 to return to the WAIT state via path 64A. If the instruction is an Unlink instruction, MCI controller 5 transitions to state 62B via path 64B and asserts the LOAD_RTS signal, causing instruction decode unit 12 to assert control signals that cause a return address to be retrieved from the stack as follows: RETS = [FP + 4]. In addition, MCI controller 5 asserts MCI_PRESTALL to stall pipeline 4 on the following cycle. Next, MCI controller 5 transitions to state 62C via path 64C and asserts LOAD_SP, causing instruction decode unit 12 to assert control signals that cause the setting of the stack pointer as follows: SP = FP + 8.
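Taken together, the Link and Unlink operations implement a conventional stack-frame protocol. The net effect on registers and memory can be modeled in a few lines of Python — a minimal sketch, assuming a downward-growing stack with 4-byte slots; the address arithmetic mirrors the RETS = [FP + 4], SP = FP + 8 and FP = [FP] operations of the Unlink sequence.

```python
# Minimal model of the Link/Unlink stack discipline, assuming a
# downward-growing stack with 4-byte slots (an illustrative assumption;
# this is not the processor's actual microarchitecture).

def link(mem, regs, frame_size):
    regs["SP"] -= 4; mem[regs["SP"]] = regs["RETS"]   # push return address
    regs["SP"] -= 4; mem[regs["SP"]] = regs["FP"]     # push frame pointer
    regs["FP"] = regs["SP"]                           # move SP to FP
    regs["SP"] -= frame_size                          # allocate the frame

def unlink(mem, regs):
    regs["RETS"] = mem[regs["FP"] + 4]                # restore return address
    regs["SP"] = regs["FP"] + 8                       # restore stack pointer
    regs["FP"] = mem[regs["FP"]]                      # restore frame pointer

mem = {}
regs = {"SP": 0x1000, "FP": 0x2000, "RETS": 0xDEAD}
link(mem, regs, 1234)
unlink(mem, regs)
assert regs == {"SP": 0x1000, "FP": 0x2000, "RETS": 0xDEAD}  # round trip
```

The round-trip assertion shows why Unlink's three loads exactly invert Link's two pushes and the frame allocation.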
Finally, MCI controller 5 transitions back to the WAIT state via path 64D and asserts UPDATE_FP, causing instruction decode unit 12 to assert control signals that cause the frame pointer to be loaded from the stack as follows: FP = [FP].

Figure 7 is a schematic diagram illustrating an example embodiment of a portion of stall controller 8. Stall controller 8 may receive a number of input signals, such as stall_condition_1 through stall_condition_8, which may be asserted when a respective stall condition has been detected. The input signals are for exemplary purposes only; for example, stall controller 8 may receive any number of different stall conditions for the various stages of pipeline 4. In response to the input stall condition signals, stall controller 8 may generate stall signals 9 to stall pipeline 4. Stall controller 8 may produce a plurality of stall signals 9, which correspond to the stages of pipeline 4. For example, when either stall_condition_1 or stall_condition_2 is asserted, and processor 2 is not in reset, stall controller 8 may assert the stall_wb output signal, resulting in a stall of the WB stage of pipeline 4. Notably, the stall_wb output signal is used to generate stall output signals for earlier stages of pipeline 4, such as the stall_ex output signal. More specifically, stall controller 8 asserts the stall_ex output signal when stall_condition_3, stall_condition_4 or stall_wb is asserted and processor 2 is not in reset. In this manner, a stall in the WB stage forces a stall in the EX stage. Stall controller 8 similarly generates the stall_ac and stall_dec signals based on independent hazard conditions as well as stalls in later stages of pipeline 4. When conditions arise that cause the decode stage to stall, stall controller 8 asserts the stall_dec_mci signal, which causes MCI controller 5 to stall. More specifically, MCI controller 5 does not transition from its current state when stall_dec_mci is asserted.
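The cascading behavior — a stall in a later stage forcing a stall in every earlier stage — can be sketched as combinational logic in Python. The assignment of conditions 1-2 to the WB stage and 3-4 to the EX stage follows the text; the conditions shown feeding the AC and DEC stages are illustrative assumptions.

```python
# Sketch of the cascading stall logic of stall controller 8: each stage's
# stall output ORs its own hazard conditions with the stall of the next
# later stage. Conditions 5 and 6 below are assumed groupings for the AC
# and DEC stages, added only for illustration.

def stall_signals(conds, reset=False):
    """Map a dict of asserted stall conditions to per-stage stall outputs."""
    if reset:   # no stage stalls while processor 2 is in reset
        return {"stall_wb": False, "stall_ex": False,
                "stall_ac": False, "stall_dec": False}
    c = lambda name: conds.get(name, False)
    s = {}
    s["stall_wb"] = c("stall_condition_1") or c("stall_condition_2")
    # a WB stall forces an EX stall, and so on up the pipeline
    s["stall_ex"] = c("stall_condition_3") or c("stall_condition_4") or s["stall_wb"]
    s["stall_ac"] = c("stall_condition_5") or s["stall_ex"]
    s["stall_dec"] = c("stall_condition_6") or s["stall_ac"]
    return s

# A hazard detected in the WB stage ripples back through every earlier stage:
assert all(stall_signals({"stall_condition_1": True}).values())
```

Note the asymmetry the text emphasizes: an EX-stage condition stalls EX, AC and DEC, but never the later WB stage.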
Stall timing circuit 72 of stall controller 8 receives the MCI_PRESTALL signal from MCI controller 5 and, in response, asserts the MCI_STALL signal. OR gate 70 receives the MCI_STALL signal provided by stall timing circuit 72 and asserts the stall_dec signal, thereby stalling the decode stage and the earlier stages of pipeline 4.

Figure 8 is a timing diagram illustrating that stall controller 8 may take advantage of detecting an MCI instruction prior to the decode stage of pipeline 4 in order to increase the performance of pipeline 4. In one embodiment, a pre-decoder in the IF stage of pipeline 4 decodes an MCI instruction one stage earlier than the decode stage. If an MCI instruction is pre-decoded, the pre-decoder asserts the MCI_PREDECODE signal. On the following clock, when the MCI instruction moves to the decode stage, the MCI_STALL_FIRST_CYCLE signal is asserted, which is a flopped version of the MCI_PREDECODE signal. Stall controller 8 provides the MCI_STALL signal based on the ORing of the MCI_STALL_FIRST_CYCLE signal and the MCI_STALL_REMAINDER signal. The MCI_STALL_REMAINDER signal is a flopped version of MCI_PRESTALL that is controlled by the state logic of MCI controller 5 as described above.

Figure 9 is a schematic diagram of an example stall timing circuit 72 for generating MCI_STALL from the MCI_PREDECODE signal received from the pre-decoder and the MCI_PRESTALL signal received from MCI controller 5. During the first cycle that an MCI instruction is in the decode stage, OR gate 92 asserts MCI_STALL when the flopped version of MCI_PREDECODE is asserted and there is no current MCI being executed. For MCI instructions that require more than one stall cycle, stall timing circuit 72 generates the remaining stall cycles based upon the MCI_PRESTALL signal received from MCI controller 5.
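This timing can be illustrated with a cycle-level sketch: the first stall cycle comes from a one-cycle-delayed (flopped) pre-decode signal, and the remaining cycles come from a flopped prestall signal. The sketch is a simplified model that omits the "no current MCI being executed" qualification described for OR gate 92.

```python
# Cycle-level sketch of the MCI_STALL timing: the output each cycle is the
# OR of the flopped MCI_PREDECODE (first stall cycle) and the flopped
# MCI_PRESTALL (remaining stall cycles). A simplified model, not the
# actual circuit of Figure 9.

def mci_stall_trace(predecode, prestall):
    """Given per-cycle input signal lists, return the per-cycle MCI_STALL."""
    out = []
    first_cycle = False    # flopped version of MCI_PREDECODE
    remainder = False      # flopped version of MCI_PRESTALL
    for pd, ps in zip(predecode, prestall):
        out.append(first_cycle or remainder)   # OR of the two flopped signals
        first_cycle = bool(pd)                 # values latched for next cycle
        remainder = bool(ps)
    return out

# An MCI pre-decoded in cycle 0 that needs one extra stall cycle (prestall
# asserted in cycle 1) stalls the decode stage in cycles 1 and 2:
trace = mci_stall_trace([1, 0, 0, 0], [0, 1, 0, 0])
assert trace == [False, True, True, False]
```

The one-cycle offset between each input and its effect on MCI_STALL is exactly the flop delay the timing diagram of Figure 8 depicts.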
Figure 10 is a schematic of an example data register address generation circuit 100 for generating a data register value (D_TAG) representing the current data register to be pushed or popped. D_TAG may include a plurality of data lines, such as three data lines, capable of indicating a range of data registers. For push operations, circuit 100 counts up from a starting data register to the maximum data register. For pop operations, circuit 100 counts down through the range of registers to the last data register. More specifically, on the first cycle, multiplexer 102 selects between a maximum data register (DMAX) and a starting data register from an instruction field (DR), such as data register five, based on PPM_SELECT, which is asserted for push operations and deasserted for pop operations. The DREG_PRESELECT output signal provided by MCI controller 5 enables multiplexer 104 to select the output of multiplexer 102 for the first cycle of an MCI instruction and the output of storage circuit 108 for the remaining cycles. The output of multiplexer 104, D_TAG, is incremented or decremented by adder 106, depending on PPM_SELECT, and fed back to storage circuit 108. The output signal, D_TAG, is fed to instruction decode unit 12 of pipeline 4 for pushing or popping registers and is also fed back as an input to the state control logic of MCI controller 5.

Figure 11 is a schematic of an example pointer register address generation circuit 110 for outputting a pointer register value (P_TAG) representing the current pointer register to be pushed or popped. Similar to circuit 100 of Figure 10, circuit 110 counts down through the range of registers for pop operations and counts up from a starting pointer register for push operations.

Various embodiments of the invention have been described. For example, a pipelined processor has been described that includes a reset unit that provides an output reset signal to at least one stage of an execution pipeline.
The reset unit handles reset requests, such as hard resets, soft resets and emulation resets, as a reset event having an assigned priority. The processor can be implemented in a variety of systems including general purpose computing systems, digital processing systems, laptop computers, personal digital assistants (PDA's) and cellular phones. In such a system, the processor may be coupled to a memory device, such as a Flash memory device or a static random access memory (SRAM), that stores an operating system or other software applications. These and other embodiments are within the scope of the following claims.
A technique is provided that enables the formation of metal silicide individually for N-channel transistors and P-channel transistors, while at the same time a strain-inducing mechanism is also provided individually for each transistor type. In this way, a cobalt silicide (130, 230) having a reduced distance to the channel region of an NMOS transistor (120, 220) may be provided, while a P-channel transistor (140, 240) may receive a highly conductive nickel silicide (150, 250), without unduly affecting or compromising the characteristics of the N-channel transistor (120, 220).
CLAIMS WHAT IS CLAIMED: 1. A method, comprising: forming a first transistor element (120, 220) comprising a first gate electrode structure (121, 221) including a first sidewall spacer structure (122, 260) having a first width (122A, 222A); forming a second transistor element (140, 240) comprising a second gate electrode structure (141, 241) including a second sidewall spacer structure (142, 270) having a second width (142A, 242A) other than said first width (122A, 222A); forming a first metal silicide (130, 230) in said first transistor element (120, 220); forming a second metal silicide (150, 250) in said second transistor element (140, 240), said first (130, 230) and second (150, 250) metal silicides differing in at least one of a material composition, a thickness, and a process condition used during formation; forming a first contact liner layer (131, 231) above said first transistor element (120, 220); and forming a second contact liner layer (151, 251) above said second transistor element (140, 240), said first (131, 231) and second (151, 251) contact liner layers differing in at least one of material composition and internal stress. 2. The method of claim 1, wherein forming said first (120, 220) and second (140, 240) transistor elements comprises: forming said first (121, 221) and second (141, 241) gate electrode structures each comprising at least an inner (124, 144) and an outer spacer element (146); selectively removing the outer spacer element (146) of said first gate electrode structure (121, 221); and removing said outer spacer element (146) of said second sidewall spacer structure (141, 241) after the formation of said second metal silicide (150, 250). 3. 
The method of claim 1, wherein forming said first metal silicide (130, 230) comprises depositing a cobalt layer and initiating a chemical reaction with silicon (127) prior to forming said second metal silicide, and wherein forming said second metal silicide (150, 250) comprises forming a nickel silicide after the formation of said first metal silicide (130, 230). 4. The method of claim 1, wherein forming said first (130, 230) and second (150, 250) metal silicides comprises selecting at least one of a layer thickness of refractory metal, a heat treatment temperature and a heat treatment duration differently for said first (130, 230) and second (150, 250) metal silicides. 5. The method of claim 1, wherein forming said first (131, 231) and second (151, 251) contact liner layers comprises forming said first contact liner layer (131, 231) above said first (120, 220) and second (140, 240) transistor elements, selectively removing said first contact liner layer (131, 231) above said second transistor element (140, 240), and forming said second contact liner layer (151, 251) above said first (120, 220) and second (140, 240) transistor elements. 6. The method of claim 5, further comprising: forming a hard mask (107A) to expose said first transistor element (120, 220) and cover said second transistor element (140, 240); forming said first metal silicide (130, 230) and forming said first contact liner layer (131, 231); selectively removing said hard mask (107A) and said first contact liner layer (131, 231) above said second transistor element (140, 240); forming said second metal silicide (150, 250); depositing said second contact liner layer (151, 251); and selectively removing said second contact liner layer (151, 251) above said first transistor element (120, 220). 7. The method of claim 1, further comprising forming an embedded compound semiconductor region (274) in a drain and source region of at least one of the first and second transistor elements (220, 240). 8. 
A semiconductor device (100, 200), comprising: a first transistor element (120, 220) having a first gate electrode structure (121, 221) including a first spacer structure (122, 222) having a first width (122A, 222A); a second transistor element (140, 240) having a second gate electrode structure (141, 241) including a second spacer structure (142, 242) having a second width (142A, 242A) other than said first width (122A, 222A); a first metal silicide (130, 230) formed in said first transistor element (120, 220) and having a first characteristic; a second metal silicide (150, 250) formed in said second transistor element (140, 240) and having a second characteristic other than said first characteristic; a first contact liner layer (131, 231) having a first internal stress and formed above said first transistor element (120, 220); and a second contact liner layer (151, 251) formed above said second transistor element (140, 240) and having a second internal stress other than said first internal stress. 9. The semiconductor device (100, 200) of claim 8, wherein said first transistor element (120, 220) represents an N-channel transistor and said second transistor (140, 240) represents a P-channel transistor. 10. The semiconductor device (100, 200) of claim 8, further comprising an embedded semiconductor compound (274) in a drain and source region of one of said first and second transistor elements (220, 240).
TECHNIQUE FOR FORMING CONTACT INSULATION LAYERS AND SILICIDE REGIONS WITH DIFFERENT CHARACTERISTICS

BACKGROUND OF THE INVENTION

1. TECHNICAL FIELD

Generally, the present invention relates to the formation of integrated circuits and, more particularly, to an integration scheme for individually enhanced performance characteristics of NMOS transistors and PMOS transistors.

2. BACKGROUND ART

The fabrication of integrated circuits requires the formation of a large number of circuit elements on a given chip area according to a specified circuit layout. Generally, a plurality of process technologies are currently practiced, wherein, for complex circuitry, such as microprocessors, storage chips and the like, CMOS technology is currently the most promising approach due to its superior characteristics in view of operating speed and/or power consumption and/or cost efficiency. During the fabrication of complex integrated circuits using CMOS technology, millions of complementary transistors, i.e., N-channel transistors and P-channel transistors, are formed on a substrate including a crystalline semiconductor layer. A MOS transistor, irrespective of whether an N-channel transistor or a P-channel transistor is considered, comprises so-called PN junctions that are formed by an interface of highly doped drain and source regions with an inversely doped channel region disposed between the drain region and the source region. The conductivity of the channel region, i.e., the drive current capability of the conductive channel, is controlled by a gate electrode formed above the channel region and separated therefrom by a thin insulating layer. 
The conductivity of the channel region upon formation of a conductive channel, due to the application of an appropriate control voltage to the gate electrode, depends on the dopant concentration, the mobility of the charge carriers, and, for a given extension of the channel region in the transistor width direction, on the distance between the source and drain regions, which is also referred to as channel length. Hence, in combination with the capability of rapidly creating a conductive channel below the insulating layer upon application of the control voltage to the gate electrode, the conductivity of the channel region substantially determines the performance of the MOS transistors. Thus, the reduction of the channel length, and associated therewith the reduction of the channel resistivity, renders the channel length a dominant design criterion for accomplishing an increase in the operating speed of the integrated circuits.

The reduction of the transistor dimensions, however, creates a plurality of issues that have to be addressed so as to not unduly offset the advantages obtained by steadily decreasing the channel length of MOS transistors. One major problem in this respect is the development of enhanced photolithography and etch strategies to reliably and reproducibly create circuit elements of critical dimensions, such as the gate electrode of the transistors, for a new device generation having reduced feature sizes. Moreover, highly sophisticated dopant profiles, in the vertical direction as well as in the lateral direction, are required in the drain and source regions to provide low sheet and contact resistivity in combination with a desired channel controllability. In addition, the vertical location of the PN junctions with respect to the gate insulation layer also represents a critical design criterion in view of leakage current control. 
Hence, reducing the channel length also requires reducing the depth of the drain and source regions with respect to the interface formed by the gate insulation layer and the channel region, thereby requiring sophisticated implantation techniques.

Irrespective of the technological approach used, sophisticated spacer techniques are necessary to create the highly complex dopant profile and to serve as a mask in forming metal silicide regions in the gate electrode and the drain and source regions in a self-aligned fashion. The metal silicide regions are provided to improve the contact resistance of the drain and source regions as well as the conductivity of the gate electrode, when formed from polysilicon, since some metal silicides exhibit an increased conductivity compared to even highly doped silicon. It turns out that different metal silicides, as well as their position, have different influences on the performance of NMOS transistors and PMOS transistors, respectively. For instance, locating the metal silicide region more closely to the channel region of an NMOS transistor enhances the performance thereof, while the performance of a PMOS transistor may be improved by using nickel silicide instead of cobalt silicide, which is a frequently used material. However, nickel silicide tends to form so-called "piping" defects, that is, silicide "stingers," which may extend into the channel region, thereby possibly not allowing the nickel silicide to be located as closely to the channel region as desired without unduly affecting the transistor behavior. Since the continuous size reduction of the critical dimensions, i.e., the gate length of the transistors, necessitates the adaptation and possibly the new development of process techniques concerning the above-identified process steps, it has been proposed to enhance device performance of the transistor elements by increasing the charge carrier mobility in the channel region for a given channel length. 
In principle, at least two mechanisms may be used, in combination or separately, to increase the mobility of the charge carriers in the channel region. First, the dopant concentration within the channel region may be reduced, thereby reducing scattering events for the charge carriers and thus increasing the conductivity. However, reducing the dopant concentration in the channel region significantly affects the threshold voltage of the transistor device, thereby making a reduction of the dopant concentration a less attractive approach unless other mechanisms are developed to adjust a desired threshold voltage. Second, the lattice structure in the channel region may be modified, for instance by creating tensile or compressive strain, which results in a modified mobility for electrons and holes. For example, creating tensile strain in the channel region increases the mobility of electrons, wherein, depending on the magnitude of the tensile strain, an increase in mobility of up to 20% or more may be obtained, which in turn directly translates into a corresponding increase in the conductivity. On the other hand, compressive stress in the channel region may increase the mobility of holes, thereby providing the potential for enhancing the performance of P-type transistors. Consequently, it has been proposed to introduce, for instance, a silicon/germanium layer or a silicon/carbon layer in or below the channel region to create tensile or compressive stress.

Another promising approach is the creation of stress in the insulating layer, which is formed after the formation of the transistor elements to embed the transistors and which receives metal contacts to provide the electrical connection to the drain/source regions and the gate electrode of the transistors. Typically, this insulation layer comprises at least one etch stop layer or liner and a further dielectric layer that may selectively be etched with respect to the etch stop layer or liner. 
In the following, this insulation layer will be referred to as the contact layer, and the corresponding etch stop layer will be denoted the contact liner layer. In order to obtain an efficient stress transfer mechanism to the channel region of the transistor for creating strain therein, the contact liner layer that is located in the vicinity of the channel region has to be positioned closely to the channel region. In advanced transistor architectures requiring a triple spacer approach for achieving the highly complex lateral dopant profile, a significant amount of the stress of the contact liner layer is, however, "absorbed" by the spacers, thereby making conventional triple spacer approaches, despite their advantages with respect to process complexity compared to epitaxially grown stress layers, less attractive for creating strain in channel regions of advanced transistors. For this reason, in some approaches, one or more of the spacers is removed prior to the formation of metal silicides, wherein the removal process may be performed differently for PMOS and NMOS transistors, depending on the device requirements. Consequently, a plurality of mechanisms are known that individually may improve the performance of transistor elements but that may not be compatible with currently used integration schemes, as NMOS transistors and PMOS transistors may typically require a different treatment with respect to, for instance, strained channel regions, type and location of metal silicide regions and the like.

In view of the above-described situation, there exists a need for an improved technique that enables an enhanced integration scheme to address some or all of the above-identified performance improving mechanisms.

DISCLOSURE OF INVENTION

The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an exhaustive overview of the invention. 
It is not intended to identify key or critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is discussed later.

Generally, the present invention is directed to a technique that enables the formation of different types of transistor elements, such as P-channel transistors and N-channel transistors, in which an enhanced performance characteristic is obtained by combining strain-creating mechanisms and silicide formation mechanisms that are individually adapted to the specific transistor element in order to obtain an overall synergetic effect.

According to one illustrative embodiment of the present invention, a method comprises forming a first transistor element comprising a first gate electrode structure including a first sidewall spacer structure having a first width. The method further comprises forming a second transistor element comprising a second gate electrode structure including a second sidewall spacer structure having a second width that differs from the first width. Moreover, a first metal silicide is formed in the first transistor element and a second metal silicide is formed in the second transistor element, wherein the first and second metal silicides differ in at least one of a material composition, a thickness and a process condition.
Furthermore, a first contact liner layer is formed above the first transistor element and a second contact liner layer is formed above the second transistor element, wherein the first and second contact liner layers differ in at least one of material composition and internal stress.

According to another illustrative embodiment of the present invention, a semiconductor device comprises a first transistor element having a first gate electrode structure including a first spacer structure having a first width, and a second transistor element having a second gate electrode structure including a second spacer structure having a second width that differs from the first width. The semiconductor device further comprises a first metal silicide formed in the first transistor element, wherein the first metal silicide has a first characteristic. Furthermore, a second metal silicide is formed in the second transistor element and has a second characteristic that differs from the first characteristic. The semiconductor device further comprises a first contact liner layer having a first internal stress which is formed above the first transistor element, and also comprises a second contact liner layer that is formed above the second transistor element and has a second internal stress that differs from the first internal stress.

BRIEF DESCRIPTION OF DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:

Figures 1a-1k schematically show cross-sectional views of a semiconductor device including two different transistor types during various manufacturing stages in accordance with illustrative embodiments of the present invention; and

Figures 2a-2c schematically show cross-sectional views of a semiconductor device during various manufacturing stages, wherein an embedded semiconductor compound for creating internal stress is formed in addition
to other strain-creating mechanisms and silicide formation techniques according to further illustrative embodiments of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

DETAILED DESCRIPTION OF THE INVENTION

Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.

The present invention will now be described with reference to the attached figures. Various structures, systems and devices are schematically depicted in the drawings for purposes of explanation only and so as to not obscure the present invention with details that are well known to those skilled in the art. Nevertheless, the attached drawings are included to describe and explain illustrative examples of the present invention.
The words and phrases used herein should be understood and interpreted to have a meaning consistent with the understanding of those words and phrases by those skilled in the relevant art. No special definition of a term or phrase, i.e., a definition that is different from the ordinary and customary meaning as understood by those skilled in the art, is intended to be implied by consistent usage of the term or phrase herein. To the extent that a term or phrase is intended to have a special meaning, i.e., a meaning other than that understood by skilled artisans, such a special definition will be expressly set forth in the specification in a definitional manner that directly and unequivocally provides the special definition for the term or phrase.

Generally, the present invention addresses the problem of efficiently transferring stress from a contact liner layer, i.e., from an etch stop layer that is used in combination with a contact dielectric layer, to the channel region of respective transistor elements, while an enhanced process flexibility in forming appropriate metal silicide regions in the respective transistor elements is provided. For this purpose, the location of a respective metal silicide region with respect to its distance from the channel region and/or the material composition or other characteristics of the metal silicide, which may be determined by the process conditions during the formation of the metal silicide, may appropriately be tailored for the respective transistor element, substantially without adversely affecting the corresponding formation of the metal silicide in the other transistor type.
Thus, a different strain in the respective channel regions may be created, such as tensile strain in the channel region of an N-channel transistor and compressive strain in the channel region of a P-channel transistor, while nevertheless the respective metal silicides may be formed such that the overall performance of each transistor type may be increased even more. With reference to the accompanying drawings, further illustrative embodiments of the present invention will now be described in more detail.

Figure 1a schematically shows a semiconductor device 100 comprising a substrate 101, which may represent any appropriate semiconductor substrate for forming silicon-based transistor elements. Thus, the substrate 101 may represent a silicon bulk substrate or a silicon-on-insulator (SOI) substrate having formed thereon an appropriate silicon-based crystalline layer for forming respective transistor devices. In the embodiment shown in Figure 1a, the substrate 101 represents an SOI substrate having formed thereon a first transistor element 120 and a second transistor element 140, which may be separated by an isolation structure 102, which may be provided in the form of a shallow trench isolation. In the first transistor element 120, which may represent, in one illustrative embodiment, an N-channel transistor, a gate electrode structure 121 is formed on a gate insulation layer 129, wherein the gate electrode structure 121 may be comprised of highly doped polysilicon, which is to receive a metal silicide region, as will be described later on. It should be appreciated that, in highly sophisticated applications, the gate electrode structure 121 may have a gate length, i.e., the horizontal dimension of the gate electrode structure 121 in Figure 1a, of 100 nm and even less, or even 50 nm and less for devices corresponding to the 90 nm technology.
Formed on sidewalls of the gate electrode structure 121 is a sidewall spacer structure 122, which may be comprised, in the manufacturing stage as shown in Figure 1a, of at least one etch stop layer 123 and a spacer element 124. For example, the etch stop layer 123 may be comprised of silicon dioxide, while the spacer element 124 may be comprised of silicon nitride. However, other configurations may be used in which, for example, the etch stop layer 123 is comprised of silicon oxynitride or silicon nitride and the spacer element 124 is comprised of silicon oxynitride, silicon dioxide and the like. Moreover, a width 122a of the spacer structure 122 is substantially defined by the lateral extension at the foot of the spacer element 124 and is selected to specifically determine a lateral distance of a metal silicide to be formed within drain and source regions 127 with respect to a channel region 128 located between the drain and source regions 127.

Similarly, the second transistor element 140 may comprise a gate electrode structure 141 comprised of highly doped polysilicon, which is formed on a gate insulation layer 149. A sidewall spacer structure 142 is formed at the sidewalls of the gate electrode structure 141, wherein the spacer structure 142 may comprise at least an inner spacer element 144 formed on a corresponding etch stop layer 143 and an outer spacer element 146 formed on a respective etch stop layer 145. With respect to the material composition of the etch stop layers 143, 145 and the spacer elements 144, 146, the same criteria apply as explained above for the spacer element 124 and the etch stop layer 123 of the first transistor element 120.
Moreover, a width 142a of the spacer structure 142, i.e., its lateral extension at the foot of the spacer structure 142, differs from the corresponding width 122a, since the lateral distance of a metal silicide region to be formed in the second transistor element 140 may require a different value for enhanced performance of the transistor element 140, as is previously explained with respect to the different performance of NMOS and PMOS transistors in view of metal silicide.

Furthermore, the semiconductor device 100 comprises, at this stage of manufacturing, an etch mask 104 to cover the second transistor element 140 and to expose the first transistor element 120 to an etch ambient 105.

The semiconductor device 100 as shown in Figure 1a may be formed in accordance with the following processes. After the formation of the trench isolation structure 102 on the basis of well-established photolithography, etch, deposition and polish techniques, a layer of gate insulation material may be formed, for instance, by advanced oxidation and/or deposition processes, to provide the required material composition and thickness as are necessary in highly advanced transistor elements. For example, a silicon dioxide based layer may be formed with a thickness of 1.5-5.0 nm in advanced applications. Thereafter, a layer of gate electrode material, such as pre-doped polysilicon, may be deposited by established process recipes, for instance involving low pressure chemical vapor deposition (CVD) and the like.
Subsequently, advanced photolithography techniques in accordance with well-established recipes may be performed, followed by sophisticated etch processes in order to form the gate electrode structures 121 and 141 having the required gate lengths.

Thereafter, the spacer structures 122 and 142 may be formed in accordance with well-established processes, such as depositing corresponding etch stop layers and conformally depositing a spacer material, which is then anisotropically etched to obtain the respective spacer elements. During and after the process sequence for forming the gate electrode structures 121, 141, implantation processes may be performed to form the corresponding dopant profile for the drain and source regions 127, 147, wherein the spacer structures 122, 142 act in their corresponding manufacturing stage as respective implantation masks. It should be appreciated that, depending on the complexity of the lateral dopant profile in the drain and source regions 127, 147, one, two, three or more individual spacer formation steps may be used. For instance, in currently advanced process strategies, a so-called triple spacer approach is frequently used. The process for forming the spacer structures 122, 142 may in some embodiments be performed substantially identically for the first and the second transistor elements 120, 140, wherein the spacer width 142a of the second transistor element is selected so as to substantially meet the requirements for a subsequent formation of metal silicide in the drain and source regions 147. For example, experimental data seem to indicate that the transistor performance of P-channel transistors may be enhanced by providing a highly conductive metal silicide, such as nickel silicide, rather than forming a cobalt silicide, even if the spacer width 142a would be reduced for cobalt silicide.
A small value for the width 142a which could be used with cobalt silicide may, however, be inappropriate with nickel silicide due to the previously explained piping effect of nickel silicide. On the other hand, a reduced lateral distance of metal silicide from the channel region of an N-channel transistor may provide enhanced performance even at the cost of a reduced conductivity of the respective metal silicide so that, for instance, cobalt silicide may advantageously be used in combination with N-channel transistors, since the formation of nickel silicide may not allow as small a spacer width as desired for an N-channel configuration. Consequently, the dimension of the inner spacer element 144 and thus of the spacer element 124 may be selected such that an appropriate masking effect during the implantation sequence in combination with a desired small width 122a may be accomplished. For this purpose, the etch mask 104, for instance in the form of a resist mask, is formed in accordance with well-established photolithography techniques to enable a selective removal of outer spacer elements, such as the spacer elements 146 and the corresponding etch stop layer 145, to finally obtain the spacer structure 122 for the first transistor element 120. Corresponding recipes for the etch process 105 are well-established in the art.

Figure 1b schematically shows the semiconductor device 100 in a further advanced manufacturing stage. Here, an etch mask 106, for example provided in the form of a photoresist mask, is formed above the device 100 to expose a portion of a hard mask layer 107 above the first transistor element 120, while covering the portion of the hard mask layer 107 formed above the second transistor element 140. Moreover, the semiconductor device 100 is exposed to a selective etch ambient 108 for selectively removing the exposed portion of the hard mask layer 107.
The hard mask layer 107 may be formed on the basis of well-established plasma enhanced CVD techniques in the form of a silicon nitride layer, a silicon dioxide layer, a silicon oxynitride layer and the like. In some embodiments, a thin etch stop layer (not shown) may be formed prior to the formation of the hard mask layer 107 to reliably stop the etch process 108 without substantially damaging sensitive areas of the first transistor element 120. For instance, a silicon dioxide layer may be deposited followed by the deposition of a silicon nitride layer as the hard mask layer 107. In this case, the etch process 108 may also include a selective etch step, which may be provided as an isotropic etch process in order to remove the etch stop layer after etching through the hard mask layer 107.

Figure 1c schematically shows the semiconductor device 100 after the completion of the above-described etch process 108 and after the removal of the etch mask 106. Consequently, the semiconductor device 100 comprises a hard mask 107a that covers the second transistor element 140 but does not cover the first transistor element 120. In this state, a first metal silicide may be formed in the first transistor element 120, wherein the width 122a substantially determines the lateral distance of the respective metal silicide from the channel region 128. Moreover, process conditions as well as the selection of any desired metal precursor may be performed substantially without adversely affecting the second transistor element 140, which is covered by the hard mask 107a.

Figure 1d schematically shows the semiconductor device 100 after the formation of a first metal silicide in the first transistor element 120. Thus, the first transistor element 120 may comprise respective metal silicide regions 130 formed in and on the drain and source regions 127 and in and on the gate electrode structure 121.
In one illustrative embodiment, at least the metal silicide regions 130 formed in and on the drain and source regions 127 may be comprised of cobalt silicide, while, in other embodiments, other silicides formed from refractory metals, such as titanium, tungsten, or combinations thereof, and the like, may be provided. The first metal silicide in the form of the regions 130 may be formed by the following process sequence. First, a cleaning process may be performed to remove any contaminants and material residues from the preceding etch and resist strip processes. Thereafter, a layer of refractory metal, such as a cobalt layer, may conformally be deposited with a specified thickness in accordance with established techniques, such as sputter deposition. Next, a first heat treatment may be carried out, wherein a process temperature and a duration of the first heat treatment are appropriately selected to initiate a chemical reaction between the cobalt and the silicon contained within the gate electrode structure 121 and the drain and source regions 127. For example, a temperature in the range of approximately 400-600°C may be applied for several seconds up to 60 seconds, depending on the desired thickness of the regions 130. Thereafter, any non-reacted refractory metal formed on the hard mask 107a and other dielectric regions, such as the spacer structure 122 and the isolation structure 102, as well as any non-reacted refractory metal that may still be present above the gate electrode structure 121 and the drain and source regions 127, may be removed by a selective etch process, for which well-established process recipes are known in the art for materials such as cobalt, titanium, tungsten and the like.

Next, a second heat treatment may be performed with a specified higher temperature and for a specified duration in order to convert the cobalt silicide formed during the first heat treatment to a highly conductive phase comprising a significant amount of cobalt disilicide.
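The dependence of silicide thickness on the first-anneal temperature and duration mentioned above can be sketched with a simple diffusion-limited growth model. This is a rough illustration only: the square-root growth law is a common textbook approximation, and the prefactor and activation energy used below are placeholder values I chose for plausible output, not parameters from this disclosure.

```python
import math

# Two-step cobalt silicidation sequence described in the text (data only):
SCHEDULE = [
    ("deposit Co",     None,       None,     "sputter a refractory-metal layer"),
    ("1st anneal",     "400-600C", "<=60 s", "react Co with Si to form CoSi"),
    ("selective etch", None,       None,     "strip unreacted metal from dielectrics"),
    ("2nd anneal",     "higher T", None,     "convert CoSi to low-resistivity CoSi2"),
]

def silicide_thickness_nm(temp_c, seconds, b0=8e10, ea_ev=1.5):
    """Diffusion-limited growth estimate, d = sqrt(B * t), with an Arrhenius
    rate constant B(T) = b0 * exp(-Ea / kT). b0 (nm^2/s) and ea_ev are
    hypothetical fitting parameters, not measured values."""
    kT_ev = 8.617e-5 * (temp_c + 273.15)  # Boltzmann constant in eV/K
    return math.sqrt(b0 * math.exp(-ea_ev / kT_ev) * seconds)

# A hotter or longer first anneal yields thicker regions 130:
for temp_c, t_s in [(450, 30), (500, 30), (500, 60)]:
    print(f"{temp_c} C, {t_s:2d} s -> ~{silicide_thickness_nm(temp_c, t_s):.1f} nm")
```

The exponential temperature dependence is the reason, noted later in the text, why the higher-temperature silicide is preferably formed first: a subsequent lower-temperature step then has little effect on the already-formed regions.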
It should be appreciated that the process conditions used during the first heat treatment and/or the second heat treatment, such as the temperature, the duration of the heat treatment, and the initial thickness of the refractory metal layer, may significantly affect the characteristics of the regions 130 with respect to their electrical behavior and with respect to their performance during the further manufacturing sequence. In some embodiments, the process conditions for the formation of the first metal silicide, i.e., the regions 130, may be designed such that further processes, in particular involving further heat treatments for forming a second metal silicide in the second transistor element 140, may be taken into consideration. For instance, if the formation of the second metal silicide to be formed in the second transistor element 140 may require a heat treatment with a moderately high temperature, the second heat treatment during the formation of the regions 130 may be omitted or may correspondingly be shortened. In this way, the combined effect of a corresponding heat treatment during the formation of the second metal silicide and the process sequence prior to and during the first and, if performed, during the second heat treatment for forming the regions 130 may then in combination establish the first metal silicide in the regions 130 having the desired characteristics.

Moreover, in one illustrative embodiment, the order of forming respective metal silicide regions may be selected in accordance with the temperature required for each of the metal silicide formation processes so that the process requiring the higher anneal temperature may be performed first, thereby providing a high degree of "decoupling" in forming the first and second metal silicides.
For instance, the formation of the hard mask 107a may be performed to cover the first transistor element 120 and expose the second transistor element 140, when the formation of the second metal silicide in the second transistor element 140 may require a higher anneal temperature compared to the metal silicide to be formed in the first transistor element 120. In other embodiments, the first and second transistor elements 120, 140 may receive metal silicides formed from the same precursor metal, wherein a difference in the first and second metal silicide is substantially obtained by using different process conditions, and hence the order of forming the first and second metal silicides may be selected in accordance with these conditions. By way of example, the metal silicide requiring the higher anneal temperature may be formed first. Similarly, if a difference in process conditions is to be obtained by varying the anneal duration, the metal silicide requiring the shorter heat treatment may be formed last.

Figure 1e schematically shows the semiconductor device 100 in a further advanced manufacturing stage. In this stage, a first contact liner layer 131, i.e., an etch stop layer used in combination with a dielectric layer to be formed to enclose the first and second transistors 120, 140, is formed above the first transistor element 120 and the second transistor element 140, which is still covered by the hard mask 107a. In one illustrative embodiment, an etch stop layer 132 is also formed on the first contact liner layer 131. For example, the first contact liner layer 131 may be comprised of any appropriate dielectric material that may be formed with a specific internal stress so as to serve as a strain-inducing layer for the first transistor element 120.
In one illustrative embodiment, the first contact liner layer 131 may be comprised of silicon nitride or silicon oxynitride, for which well-established deposition recipes on the basis of plasma enhanced CVD techniques are known, wherein the internal stress of the first contact liner layer 131 may be appropriately adjusted by controlling one or more deposition parameters, such as pressure, temperature, bias power and the like, of the plasma enhanced CVD process. For example, silicon nitride may conformally be deposited with an internal stress that ranges from approximately 1.5 GPa compressive stress to approximately 1.5 GPa tensile stress. Similarly, silicon oxynitride may be formed within a wide range of compressive to tensile stress. Depending on the material composition of the first contact liner layer 131, an appropriate material having a high etch selectivity to the layer 131 may be selected so as to sufficiently protect the first contact liner layer 131 above the first transistor element 120 during an etch process for exposing the second transistor element 140 in a later stage. For instance, silicon dioxide may be selected as an appropriate material for the etch stop layer 132 when the first contact liner layer 131 is substantially comprised of silicon nitride. On the other hand, silicon nitride may be used as the etch stop layer 132 if silicon oxynitride is the material of the first contact liner layer 131.

Figure 1f schematically shows the semiconductor device 100 during an etch process 109 for exposing the second transistor element 140. Thus, the device 100 may have formed thereon an etch mask 110, which may be provided in the form of a resist mask. During the etch process 109, the etch stop layer 132, if provided, i.e., the exposed portion thereof, may be removed first by an appropriate etch chemistry.
Thereafter, the first contact liner layer 131 may be removed and finally the hard mask layer 107a may be etched away on the basis of well-established recipes. In some embodiments, as previously explained, an additional etch stop layer (not shown) may have been provided prior to the formation of the hard mask 107a, which may now be used during the removal of the hard mask 107a to avoid undue damage in the underlying second transistor element 140.

Figure 1g schematically shows the semiconductor device 100 after the completion of the etch process 109 and after the removal of the etch mask 110. Hence, the first transistor element 120 comprises the first contact liner layer 131 having the first internal stress and optionally the etch stop layer 132 formed thereon. On the other hand, the second transistor element 140 having the spacers 144, 146 is exposed and may have been subjected to a preceding cleaning process for removing any contaminants and material residues resulting from the previously performed etch process 109.

Figure 1h schematically shows the semiconductor device 100 with a second metal silicide in the form of metal silicide regions 150 formed in the second transistor element 140. The metal silicide regions 150 may be comprised of a material that differs from that of the respective metal silicide regions 130, at least regarding the metal silicide regions 150 formed in the drain and source regions 147 and the metal silicide regions 130 formed in the drain and source regions 127, when a process strategy is used in which the metal silicide in the drain and source regions 127 and in the gate electrode structure 121 is formed in separate steps. In some embodiments, the metal silicides 150 and 130 may differ in thickness so that a depth in the corresponding drain and source regions 127 and 147 and/or the corresponding gate electrode structures 121 and 141 may also be adjusted in a transistor-specific manner.
In one illustrative embodiment, the metal silicide regions 150 may be comprised of nickel silicide, wherein a lateral distance of the regions 150 with respect to the channel region 148 is substantially determined by the width 142a so as to provide a sufficient safety margin in view of the piping effect frequently observed with nickel silicide. In other embodiments, the metal silicide regions 150 may be comprised of other materials, such as cobalt silicide, titanium silicide, tungsten silicide and the like. As previously explained, however, the regions 150 formed in the drain and source regions 147 differ from the corresponding metal silicide regions 130 in at least one characteristic to provide an individual adaptation and performance increase for each of the transistor elements 120, 140.

The second metal silicide regions 150 may be formed in accordance with well-established processes, for instance by depositing a layer of refractory metal and heat treating the device 100 as is required for initiating a chemical reaction with the underlying silicon in accordance with device requirements. Regarding the selection of appropriate process conditions for the formation of the second metal silicide regions 150, such as initial layer thickness of the refractory metal, anneal temperature, anneal duration and the like, the same criteria apply as previously explained with reference to the first metal silicide regions 130. In one illustrative embodiment, nickel silicide may be formed by a CVD-like technique, in which a gaseous precursor, such as nickel tetracarbonyl (Ni(CO)4), may be provided in a deposition ambient at an elevated temperature of approximately 250-400°C. Subsequently, further anneal cycles may be performed to stabilize the metal silicide in the regions 150. In other process strategies, a second anneal cycle for converting the metal silicide into a highly conductive phase may be required, depending on the material used.
For instance, when using cobalt or titanium, a second anneal process is carried out after the removal of any non-reacted metal, thereby creating the highly conductive metal silicide phase. As previously discussed, if a significant influence of the process for forming the second metal silicide regions 150 on the first metal silicide regions 130 is not desired, the second metal silicide is selected so as to require a lower anneal temperature compared to the first metal silicide. For instance, in the illustrative embodiment in which nickel silicide is formed in the regions 150, the required anneal temperature of approximately 250-400°C may be significantly less than a corresponding anneal temperature for forming the first metal silicide regions 130, if, for instance, comprised of cobalt silicide.

Figure 1i schematically shows the semiconductor device 100 with a second contact liner layer 151 formed above the first and second transistor elements 120, 140. The second contact liner layer 151 may exhibit a specific internal stress, which is different from the respective internal stress of the first contact liner layer 131. In one illustrative embodiment, the second contact liner layer 151 is formed with compressive stress so as to provide a compressive strain within the channel region 148 of the transistor 140. In some illustrative embodiments, the outer spacer element 146 or both spacer elements 144, 146 may be removed prior to forming the second contact liner layer 151 so as to enhance the stress transfer efficiency. As previously explained with reference to the first contact liner layer 131, appropriate process recipes for generating internal stress in a dielectric layer are well-established in the art and may effectively be used in forming the second contact liner layer 151.
For instance, the second contact liner layer 151 may be comprised of silicon nitride, silicon oxynitride and the like, wherein the first and second contact liner layers 131, 151 may be formed of similar or different materials, depending on process and device requirements. In some embodiments, the internal stress of the first contact liner layer 131 may be selected such that a desired strain in the channel region 128 is created in combination with the second contact liner layer 151. That is, if the layer 131 is formed to exhibit a tensile stress, while the layer 151 exhibits a compressive stress, the tensile stress in the layer 131 may be selected sufficiently high so as to significantly "over-compensate" the compressive stress of the layer 151, thereby finally inducing the desired strain in the channel region 128. In other embodiments, the internal stress of the portion of the second contact liner layer 151 formed above the first transistor element 120 may be modified in order to substantially suppress any influence on the internal stress of the layer 131.

Figure 1j schematically shows the semiconductor device 100 according to one illustrative embodiment, in which the internal stress of the second contact liner layer 151 is efficiently modified to reduce its influence on the first transistor element 120. For this purpose, a mask 111, such as a resist mask, may be formed which covers the second transistor element 140 while exposing the first transistor element 120. The device 100 may be subjected to a treatment 112, which may represent, in one illustrative embodiment, a selective etch process for removing the exposed portion of the second contact liner layer 151, wherein the etch front may reliably be stopped within the etch stop layer 132.
In other illustrative embodiments, the treatment 112 may comprise an ion bombardment, such as an ion implantation with an appropriate ion species, such as xenon, argon, germanium and the like, which are implanted into the exposed portion of the layer 151, thereby substantially relaxing the internal stress thereof by severely damaging the crystalline structure of the layer 151. An appropriate set of implantation parameters may readily be established on the basis of simulation calculations in order to avoid undue penetration of the first contact liner layer 131.
Figure 1k schematically shows the semiconductor device 100 after the completion of the treatment 112, wherein, in the embodiment shown, the second contact liner layer 151 formed above the first transistor element 120 has been removed as a result of the treatment 112. Thus, the device 100 comprises the transistor 120 having formed therein a first metal silicide in the form of the regions 130, which may be comprised of a metal silicide that is appropriate for being formed close to the channel region 128, while the second transistor element 140 comprises a second metal silicide in the form of the regions 150, which are laterally spaced apart from the respective channel region 148 substantially according to the width 142a. In illustrative embodiments, the regions 130 may be comprised of cobalt silicide, while the regions 150 may be comprised of nickel silicide, whereas, in other embodiments, any other appropriate combination may be selected as long as the characteristics of the respective regions 130, 150 are individually adapted to the device requirements of the respective transistor elements 120, 140.
Moreover, the first contact liner layer 131 induces a desired first strain in the channel region 128, such as a tensile strain, when the transistor 120 represents an N-channel transistor, while the second contact liner layer 151 provides for a different strain in the respective channel region 148 in accordance with device requirements of the transistor 140. Consequently, transistor performance for N-channel transistors and P-channel transistors may individually be increased by forming the metal silicide regions and the respective strain-inducing layers according to the process strategy described above, thereby also maintaining a high degree of process flexibility without undue mutual interaction of the processes for forming the first and second metal silicides. It should be appreciated that in the above-described illustrative embodiments, the first or the second contact liner layer 131, 151 may be used as a mask during the formation of the respective metal silicide of the non-covered transistor element, thereby in total merely requiring a single hard mask for forming the first one of the metal silicide regions (i.e., the hard mask 107a in Figure 1c). In other approaches, a corresponding hard mask may be formed prior to each of the formation sequences for forming the respective metal silicide regions, if an exposure of the first or second contact liner layer to the process conditions for forming a metal silicide is considered inappropriate. For example, in Figure 1e, the layer 131 may be considered as a hard mask layer, which may then be patterned to expose the second transistor element 140 and which may then be removed after the formation of the metal silicide regions 150. Thereafter, any appropriate process sequence may be performed to form differently stressed first and second contact liner layers, thereby providing a high degree of compatibility with conventional process strategies.
With reference to Figures 2a-2c, further illustrative embodiments of the present invention will now be described in more detail, in which an additional strain-inducing mechanism may be incorporated to even further enhance the overall performance of transistor elements.
In Figure 2a, a semiconductor device 200 comprises a first transistor element 220 and a second transistor element 240 at an initial manufacturing stage. In the embodiment shown, the first transistor element 220 may represent an N-channel transistor, while the second transistor element 240 may represent a P-channel transistor. The first transistor element 220 may comprise a gate electrode structure 221 which is enclosed by disposable spacers 260, a cap layer 261 and a hard mask 262. Similarly, the second transistor element 240 may comprise disposable spacers 270 and a cap layer 271. Moreover, the device 200 may be subjected to an anisotropic etch process 214 to form recesses 273 adjacent to the disposable spacers 270.
The device 200 as shown in Figure 2a may be formed in accordance with well-established processes, comprising the patterning of the gate electrode structures 221, 241 followed by a spacer formation process and a corresponding deposition of a hard mask layer, which may then be patterned by photolithography and anisotropic etch to obtain the hard mask 262. Thereafter, the etch process 214 may be performed on the basis of well-established etch techniques, wherein the disposable spacers 270, the cap layer 271, as well as the hard mask 262, act as an etch mask. Thereafter, the device 200 may be subjected, after any pre-cleaning processes, to a selective epitaxial growth process.
Figure 2b schematically shows the device 200 during a selective epitaxial growth process 215 to grow a semiconductor compound within the recesses 273, thereby creating a strained embedded semiconductor region 274.
In illustrative embodiments, when the second transistor 240 represents a P-channel transistor, the semiconductor compound 274 may be comprised of a mixture of silicon and germanium, thereby forming a region of compressive stress, which results in an efficient creation of compressive strain below the gate electrode structure 241. It should be appreciated, however, that, according to device requirements, other semiconductor compounds, such as silicon and carbon and the like, may be formed in order to establish a desired type of strain in the respective channel region. Appropriate selective epitaxial growth recipes are well-established in the art and may effectively be used during the process 215. Thereafter, the disposable spacers 270, the hard mask 262 and the disposable spacers 260 may be removed and the further processing of the device 200 may be continued similarly as is described with reference to Figures 1a-1k. That is, different metal silicide regions may be formed in the first and second transistor elements having the desired distance to respective channel regions and additionally respective contact liner layers of different internal stress may be formed.
Figure 2c schematically shows the device 200 after a corresponding process sequence, as is described with reference to Figures 1a-1k. Hence, the first transistor element 220 may comprise a spacer structure 222 having a width 222a, which substantially defines a lateral distance of a first metal silicide region 230 with respect to a channel region 228. The first metal silicide region 230 may be comprised of titanium silicide, cobalt silicide and other materials, which may allow a moderately small width 222a so as to enhance the performance of an N-channel transistor.
Moreover, the transistor 220 may comprise a first contact liner layer 231 having a specified internal stress, such as a tensile stress, in order to create a desired strain in the channel region 228.
Similarly, the second transistor element 240 may comprise a spacer structure 242 having a width 242a, which may differ from the width 222a. In the illustrative embodiment in which the transistor element 240 represents a P-channel transistor, the width 242a may be greater than the width 222a, thereby providing a sufficient distance for a second metal silicide region 250 in the form of nickel silicide from the respective channel region 248, thereby providing enhanced performance for a P-channel transistor. The metal silicide region 250 may be formed within the epitaxially grown embedded semiconductor region 274, which also provides enhanced strain in the channel region 248. Thus, in the case of a P-channel transistor, the silicon/germanium mixture in the region 274 may create additional compressive strain in the channel region 248. Moreover, a second contact liner layer 251 may be provided having a specific internal stress, which may also significantly contribute to the total strain in the channel region 248.
Consequently, the device 200 may exhibit enhanced performance characteristics compared to conventional CMOS devices with P-channel transistors having formed therein an embedded epitaxially grown semiconductor region. Moreover, due to the characteristics of nickel silicide, the regions 250 may be efficiently formed within the silicon/germanium region 274, while at the same time a cobalt silicide may be formed in the regions 230.
As a result, the present invention provides an enhanced technique for forming strained transistor elements of different types, wherein additionally the corresponding metal silicide regions are specifically tailored with respect to a further performance enhancement.
For this purpose, a process strategy is provided that enables the formation of different types of metal silicides, while the strain-inducing mechanism may still be used individually for each transistor type. Hereby, the metal silicide formation may include a different lateral position of the metal silicide regions in the first and second transistor types, thereby providing enhanced design flexibility. For instance, NMOS transistors requiring a short distance between the metal silicide and the channel region may be formed along with PMOS transistors requiring a high conductivity of the metal silicide, which may be accomplished by the provision of nickel silicide, which on the other hand necessitates a significantly larger distance between the metal silicide and the channel region.
The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. For example, the process steps set forth above may be performed in a different order. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
In one embodiment, a matrix operation associated with a plurality of input matrices may be performed. The plurality of input matrices may be partitioned into a plurality of input partitions, wherein the plurality of input matrices is partitioned based on a number of available processing elements. The plurality of input partitions may be distributed among a plurality of processing elements, wherein each input partition is distributed to a particular processing element of the plurality of processing elements. A plurality of partial matrix operations may be performed using the plurality of processing elements, and partial matrix data may be transmitted between the plurality of processing elements while performing the plurality of partial matrix operations. A result of the matrix operation may be determined based on the plurality of partial matrix operations.
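The summarized flow (partition the input based on the number of available processing elements, perform partial matrix operations, then determine the result from the partials) can be simulated in a short Python sketch using NumPy. This is only an illustrative software analogy, not the claimed hardware implementation; the function name `distributed_matmul` and the row-wise partitioning choice are assumptions made for the sketch.

```python
import numpy as np

def distributed_matmul(a, b, num_elements):
    """Simulate distributing a matrix multiplication across
    `num_elements` processing elements."""
    # Partition the first input matrix based on the number of
    # available processing elements (roughly equal row blocks).
    row_blocks = np.array_split(a, num_elements, axis=0)

    # Each "processing element" performs a partial matrix operation
    # on its input partition.
    partial_results = [block @ b for block in row_blocks]

    # The result of the matrix operation is determined from the
    # plurality of partial results.
    return np.vstack(partial_results)

a = np.arange(12).reshape(4, 3)
b = np.arange(6).reshape(3, 2)
assert np.array_equal(distributed_matmul(a, b, 2), a @ b)
```

In this software analogy the partials are simply concatenated; in the claimed arrangement the partial matrix data is instead exchanged between neighboring matrix processing chips in stages.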
A method, comprising:
performing a matrix operation associated with a plurality of input matrices, wherein performing the matrix operation comprises:
obtaining the plurality of input matrices from memory (302, 270);
partitioning the plurality of input matrices into a plurality of input partitions (Fig. 5), wherein the plurality of input matrices is partitioned based on a number of a plurality of matrix processing chips;
distributing the plurality of input partitions among the plurality of matrix processing chips (220), wherein each input partition is distributed to a particular matrix processing chip of the plurality of matrix processing chips, wherein the plurality of matrix processing chips is configured in a cyclic arrangement such that each matrix processing chip is communicatively coupled (215) to a first neighbor matrix processing chip and a second neighbor matrix processing chip;
performing a plurality of partial matrix operations using the plurality of matrix processing chips, wherein the plurality of partial matrix operations is performed in a plurality of stages and the method further comprises:
simultaneously transmitting and receiving partial matrix data for a subsequent stage of the plurality of partial matrix operations to and from each of the first neighbor matrix processing chip and the second neighbor matrix processing chip while performing the plurality of partial matrix operations; and
determining a result of the matrix operation based on the plurality of partial matrix operations.
The method of Claim 1, wherein the matrix operation comprises one or more matrix multiplication operations.
The method of Claim 1, wherein: the plurality of matrix processing chips is configured in a hierarchical arrangement comprising a plurality of processing levels; and the matrix operation is distributed across the plurality of processing levels.
The method of Claim 3, wherein the plurality of matrix processing chips further comprises: a plurality of matrix processing clusters associated with each matrix processing chip.
The method of Claim 1, wherein the plurality of input matrices is further partitioned based on a number of rows of the plurality of input matrices.
The method of Claim 1, wherein each processing element transmits a portion of the partial matrix data to one or more of the neighbor matrix processing chips while performing a particular stage of the partial matrix operations.
The method of Claim 6, wherein the portion of the partial matrix data is transmitted from each processing element to the first neighbor processing element and the second neighbor processing element.
The method of Claim 7, wherein the partial matrix data comprises a partial input matrix, wherein the partial input matrix is used by a first processing element in a particular stage of the partial matrix operations, and wherein the partial input matrix is used by a second processing element in a subsequent stage of the partial matrix operations.
The method of Claim 8, wherein the matrix operation is associated with a forward propagation operation in a neural network.
The method of Claim 8, wherein the matrix operation is associated with a weight update operation in a neural network.
The method of Claim 6, wherein the partial matrix data comprises a partial result matrix determined by a first processing element in a particular stage of the partial matrix operations, and wherein the partial result matrix is used by a second processing element in a subsequent stage of the partial matrix operations.
The method of Claim 11, wherein the matrix operation is associated with a backward propagation operation in a neural network.
An apparatus comprising means to perform a method as claimed in any preceding claim.
At least one machine accessible storage medium having instructions stored thereon, the instructions when executed on a machine, cause the machine to perform a method or realize an apparatus as claimed in any preceding claim.
FIELD OF THE SPECIFICATION
This disclosure relates in general to the field of computer processing, and more particularly, though not exclusively, to performing matrix operations using a plurality of processing resources.
BACKGROUND
Matrix operations, such as matrix multiplication and convolutions, can be highly processor-intensive and memory-intensive operations, as they often involve complex operations on large, multi-dimensional matrix operands. Accordingly, the performance of complex matrix operations can be limited by the processing and/or memory latency. As matrix operations are increasingly utilized in a variety of applications and with ever-growing data sets (from graphics and image processing to machine learning and artificial intelligence), the demand for high-performance processing of matrix operations is increasing.
Yunji Chen et al. describe in "DaDianNao: A Machine-Learning Supercomputer", Proc. of the Annual ACM/IEEE International Symposium on Microarchitecture 2014, December 13, 2014, pages 609-622, a custom multi-chip architecture for state-of-the-art machine-learning algorithms such as Convolutional Neural Networks (CNNs) and Deep Neural Networks (DNNs) with respect to architecture node, interconnect and performance.
On a subset of the largest neural network layers, a speedup over a GPU and an average energy reduction for a 64-chip system are described.
It is the object of the present invention to avoid wasted or idle processing time and to improve efficiency when communicating matrix operands.
SHORT DESCRIPTION OF THE INVENTION
The invention provides subject-matter as defined in the independent claims, preferred embodiments thereof being defined in the dependent claims.
According to one aspect of the present disclosure, an apparatus comprising: a plurality of memory elements to store matrix data; and a plurality of processing elements to perform a matrix operation associated with a plurality of input matrices, wherein the plurality of processing elements is configured to: partition the plurality of input matrices into a plurality of input partitions, wherein the plurality of input matrices is partitioned based on a number of available processing elements; distribute the plurality of input partitions among the plurality of processing elements, wherein each input partition is distributed to a particular processing element of the plurality of processing elements; perform a plurality of partial matrix operations using the plurality of processing elements; transmit partial matrix data between the plurality of processing elements while performing the plurality of partial matrix operations; and determine a result of the matrix operation based on the plurality of partial matrix operations.
In some examples, the apparatus, wherein the plurality of processing elements is configured in a hierarchical arrangement comprising a plurality of processing levels; and the plurality of processing elements is further configured to distribute the matrix operation across the plurality of processing levels.
In some examples, the apparatus, wherein the plurality of processing elements is further configured to partition the plurality of input matrices based on a number of rows of the plurality of input
matrices.In some examples, the apparatus, wherein the plurality of processing elements is configured in a cyclic arrangement such that each processing element is communicatively coupled to a plurality of neighbor processing elements; and the plurality of neighbor processing elements of each processing element comprises a first neighbor processing element and a second neighbor processing element.In some examples, the apparatus, wherein the plurality of processing elements is further configured to: perform the plurality of partial matrix operations in a plurality of stages; and transmit a portion of the partial matrix data from each processing element to one or more of the neighbor processing elements while performing a particular stage of the partial matrix operations.In some examples, the apparatus, wherein the plurality of processing elements is further configured to transmit the portion of the partial matrix data from each processing element to the first neighbor processing element and the second neighbor processing element.In some examples, the apparatus, wherein the partial matrix data comprises a partial input matrix, wherein the partial input matrix is to be used by a first processing element in a particular stage of the partial matrix operations, and wherein the partial input matrix is to be used by a second processing element in a subsequent stage of the partial matrix operations.In some examples, the apparatus, wherein the partial matrix data comprises a partial result matrix determined by a first processing element in a particular stage of the partial matrix operations, and the partial result matrix is to be used by a second processing element in a subsequent stage of the partial matrix operations.In some examples, the apparatus, wherein the matrix operation comprises one or more matrix multiplication operations.In some examples, the apparatus, wherein the plurality of processing elements comprises: a plurality of matrix processing chips; and a plurality 
of matrix processing clusters associated with each matrix processing chip.In some examples, the apparatus, wherein the matrix operation is associated with a forward propagation operation in a neural network.In some examples, the apparatus, wherein the matrix operation is associated with a weight update operation in a neural network.In some examples, the apparatus, wherein the matrix operation is associated with a backward propagation operation in a neural network.According to one aspect of the present disclosure, a method comprising: performing a matrix operation associated with a plurality of input matrices, wherein performing the matrix operation comprises: partitioning the plurality of input matrices into a plurality of input partitions, wherein the plurality of input matrices is partitioned based on a number of available processing elements; distributing the plurality of input partitions among a plurality of processing elements, wherein each input partition is distributed to a particular processing element of the plurality of processing elements; performing a plurality of partial matrix operations using the plurality of processing elements; transmitting partial matrix data between the plurality of processing elements while performing the plurality of partial matrix operations; and determining a result of the matrix operation based on the plurality of partial matrix operations.In some examples, the method, wherein the matrix operation comprises one or more matrix multiplication operations.In some examples, the method, wherein the plurality of processing elements is configured in a hierarchical arrangement comprising a plurality of processing levels; and the matrix operation is distributed across the plurality of processing levels.In some examples, the method, wherein the plurality of processing elements comprises: a plurality of matrix processing chips; and a plurality of matrix processing clusters associated with each matrix processing chip.In some examples, 
the method, wherein the plurality of input matrices is further partitioned based on a number of rows of the plurality of input matrices.In some examples, the method, wherein the plurality of processing elements is configured in a cyclic arrangement such that each processing element is communicatively coupled to a plurality of neighbor processing elements; and the plurality of neighbor processing elements of each processing element comprises a first neighbor processing element and a second neighbor processing element.In some examples, the method, wherein the plurality of partial matrix operations is performed in a plurality of stages, and each processing element transmits a portion of the partial matrix data to one or more of the neighbor processing elements while performing a particular stage of the partial matrix operations.In some examples, the method, wherein the portion of the partial matrix data is transmitted from each processing element to the first neighbor processing element and the second neighbor processing element.In some examples, the method, wherein the partial matrix data comprises a partial input matrix, wherein the partial input matrix is used by a first processing element in a particular stage of the partial matrix operations, and wherein the partial input matrix is used by a second processing element in a subsequent stage of the partial matrix operations.In some examples, the method, wherein the matrix operation is associated with a forward propagation operation in a neural network.In some examples, the method, wherein the matrix operation is associated with a weight update operation in a neural network.In some examples, the method, wherein the partial matrix data comprises a partial result matrix determined by a first processing element in a particular stage of the partial matrix operations, and the partial result matrix is used by a second processing element in a subsequent stage of the partial matrix operations.In some examples, the method, 
wherein the matrix operation is associated with a backward propagation operation in a neural network.According to one aspect of the present disclosure, a system, comprising: a plurality of memory elements to store matrix data; a plurality of processing elements to perform a matrix operation associated with a plurality of input matrices, wherein the plurality of processing elements comprises: a host processor; one or more matrix processing chips; a plurality of matrix processors associated with the one or more matrix processing chips; wherein the plurality of processing elements is configured to: partition the plurality of input matrices into a plurality of input partitions, wherein the plurality of input matrices is partitioned based on a number of available processing elements; distribute the plurality of input partitions among the plurality of processing elements, wherein each input partition is distributed to a particular processing element of the plurality of processing elements; perform a plurality of partial matrix operations using the plurality of processing elements; transmit partial matrix data between the plurality of processing elements while performing the plurality of partial matrix operations; and determine a result of the matrix operation based on the plurality of partial matrix operations.In some examples, the system further comprises a communication interface to communicate with one or more remote matrix processing chips over a communication network.According to one aspect of the present disclosure, at least one machine accessible storage medium having instructions stored thereon, the instructions, when executed on a machine, cause the machine to: perform a matrix operation associated with a plurality of input matrices, wherein the instructions that cause the machine to perform the matrix operation further cause the machine to: partition the plurality of input matrices into a plurality of input partitions, wherein the plurality of input matrices is 
partitioned based on a number of available processing elements; distribute the plurality of input partitions among a plurality of processing elements, wherein each input partition is distributed to a particular processing element of the plurality of processing elements; perform a plurality of partial matrix operations using the plurality of processing elements; transmit partial matrix data between the plurality of processing elements while performing the plurality of partial matrix operations; and determine a result of the matrix operation based on the plurality of partial matrix operations.In some examples, the at least one machine accessible storage medium, wherein the instructions further cause the machine to partition the plurality of input matrices based on a number of rows of the plurality of input matrices.In some examples, the at least one machine accessible storage medium, wherein the plurality of processing elements is configured in a cyclic arrangement such that each processing element is communicatively coupled to a plurality of neighbor processing elements; and the plurality of neighbor processing elements of each processing element comprises a first neighbor processing element and a second neighbor processing element.In some examples, the at least one machine accessible storage medium, wherein the instructions further cause the machine to: perform the plurality of partial matrix operations in a plurality of stages; and transmit a portion of the partial matrix data from each processing element to one or more neighbor processing elements while performing a particular stage of the partial matrix operations.In some examples, the at least one machine accessible storage medium, wherein the instructions further cause the machine to transmit the portion of the partial matrix data from each processing element to the first neighbor processing element and the second neighbor processing element.BRIEF DESCRIPTION OF THE DRAWINGSThe present disclosure is best 
understood from the following detailed description when read with the accompanying figures. It is emphasized that, in accordance with the standard practice in the industry, various features are not necessarily drawn to scale, and are used for illustration purposes only. Where a scale is shown, explicitly or implicitly, it provides only one illustrative example. In other embodiments, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion.
FIGURE 1 illustrates a schematic diagram for an example computing system according to certain embodiments.
FIGURES 2A-C illustrate block diagrams for an example embodiment of a matrix processing architecture.
FIGURES 3 and 4 illustrate block diagrams for example embodiments of computer processors.
FIGURE 5 illustrates an example of partitioning matrix operands.
FIGURES 6A-C illustrate an example weight update operation in a neural network.
FIGURES 7A-C illustrate an example forward propagation operation in a neural network.
FIGURES 8A-C illustrate an example backward propagation operation in a neural network.
FIGURE 9 illustrates a flowchart for an example embodiment of distributed matrix operations.
EMBODIMENTS OF THE DISCLOSURE
The following disclosure provides many different embodiments, or examples, for implementing different features of the present disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. Further, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
Different embodiments may have different advantages, and no particular advantage is necessarily required of any embodiment.Matrix processing operations (e.g., linear algebra operations that involve matrix and/or vector operands) have a wide range of applications in computing systems, from graphics processing to machine learning and artificial intelligence, among other examples. For example, complex matrix operations may be used to implement artificial neural networks that provide artificial intelligence and machine learning capabilities, including computer vision, autonomous navigation, speech and audio recognition, and natural language processing, among other examples. These complex matrix operations (e.g., matrix multiplication and convolutions) may be used to implement the fundamental operations of neural networks, such as forward propagation, backward propagation, and weight updates. These matrix operations, however, can be highly processor and memory intensive, as they often involve complex operations on large, multi-dimensional matrix operands. Accordingly, the performance of these matrix operations can be limited by processing and/or memory latency. As matrix operations are increasingly utilized in a variety of applications with ever-growing data sets, such as artificial intelligence and machine learning, the demand for high-performance processing of matrix operations is increasing.Existing matrix processing approaches suffer from various inefficiencies, particularly when used to implement artificial intelligence and machine learning in artificial neural networks. For example, while central processing units (CPUs) could be used to perform matrix operations, many CPU architectures are designed for low arithmetic intensity operations (i.e., a low ratio of arithmetic operations relative to memory operations), and thus are not designed for efficient execution of matrix operations. 
Moreover, many CPU architectures utilize complex local or cache memory management routines, which may increase processing overhead and execution complexity for operations involving large matrix operands. Graphics processing units (GPUs) could also be used to perform matrix operations. GPUs, however, are often designed for high precision computations and may provide a level of precision that is unnecessary for certain matrix operations, thus reducing the volume of matrix operations that can be performed. Accordingly, existing matrix processing approaches are inefficient for certain matrix operations, such as matrix multiplication or convolution operations involving large matrix operands and/or matrix operands with certain dimensions, among other examples. The existing approaches are unable to perform these matrix operations with 100% processing efficiency using all available processing resources. Moreover, existing approaches cannot be efficiently scaled to perform these matrix operations across additional processing resources in parallel. As an example, existing approaches are inefficient for matrix multiplication (e.g., general matrix multiplication or GEMM) on a large matrix operand which is neither square nor a single vector, such as a "thin" matrix with a much larger height than width. Existing approaches require more time to access and communicate the matrix operands than to perform the actual matrix computations, resulting in idle processing time while matrix operands are being obtained from memory and/or communicated to processing resources. Similarly, existing approaches are inefficient for convolution operations on large matrix operands, as they are unable to efficiently distribute or scale a convolution operation across a variable number of processing resources. 
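To make the inefficiency described above concrete, the imbalance between computation and operand traffic can be estimated with a simple arithmetic-intensity calculation (FLOPs per byte of operand movement). This is an illustrative sketch only; the matrix shapes and the 4-byte element size are assumptions chosen for illustration, not figures from the disclosure.

```python
def arithmetic_intensity(m, n, k, bytes_per_elem=4):
    """FLOPs per byte moved for C = A (m x k) * B (k x n)."""
    flops = 2 * m * n * k                                 # multiply-accumulate count
    traffic = bytes_per_elem * (m * k + k * n + m * n)    # read A and B, write C
    return flops / traffic

square = arithmetic_intensity(1024, 1024, 1024)   # square GEMM
thin = arithmetic_intensity(1_000_000, 32, 32)    # "thin" GEMM: height >> width

# The thin matrix performs far less arithmetic per byte communicated,
# so operand movement, not computation, tends to dominate its runtime.
print(round(square, 1), round(thin, 1))
```

Under this rough model the square GEMM performs over twenty times more arithmetic per byte moved than the thin GEMM, which is why operand communication dominates the thin case on conventional architectures.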
Thus, existing approaches do not achieve 100% processing efficiency for these matrix operations.

The matrix processing functionality described throughout this disclosure performs matrix operations using a distributed approach that achieves 100% processing efficiency using the available processing resources. For example, this approach distributes matrix operations across multiple processing resources in a processing architecture that is optimized for performing matrix operations, thus enabling full utilization of the processing resources throughout the duration of the matrix operations. For example, the processing architecture may include multiple processing resources that are designed and optimized for performing matrix operations, and may support a higher volume of matrix operations than other architectures (e.g., GPUs). In some embodiments, these processing resources may be configured in a cyclical arrangement, with either unidirectional communication interfaces between neighboring processing resources (a "single-cyclical" configuration) or bi-directional communication interfaces between neighboring processing resources (a "dual-cyclical" configuration). In addition, the processing resources may be arranged hierarchically with multiple levels of processing resources. For example, in some embodiments, the processing resources may include multiple matrix processing chips, multiple high bandwidth memory (HBM) modules and matrix processing clusters on each matrix processing chip, and/or multiple matrix processing units (MPUs) on each matrix processing cluster. This processing architecture enables matrix operations to be distributed across multiple processing resources and/or processing hierarchies with 100% processing efficiency.
In addition, this processing architecture enables matrix operations to be efficiently scaled across a variable number of processing resources operating in parallel, while still achieving 100% processing efficiency.

As an example, in some embodiments, a matrix operation may be distributed across multiple processing resources in a manner that results in the latency for communicating matrix operands being less than the matrix processing time, which allows the communication of matrix operands to be completed while the matrix processing is being performed. For example, a dual-cyclical configuration of processing resources enables each processing resource to perform matrix computations while simultaneously obtaining matrix operands and data from both of its neighboring processing resources, which significantly reduces the latency for communicating matrix operands. The communication latency may be reduced by half when using this dual-cyclical approach as opposed to a single-cyclical approach where each processing resource only obtains matrix operands and data from one neighboring processing resource at any given time. In this manner, the latency for communicating matrix operands can be fully masked by the matrix processing time, thus avoiding any wasted or idle processing time and achieving 100% processing efficiency.
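The masking condition described above (communication latency no greater than matrix processing time) can be sketched with a rough timing model. All numbers below are hypothetical and chosen only to illustrate the effect of drawing operands from one neighbor versus both neighbors; they are not performance figures from the disclosure.

```python
def comm_is_masked(chunk_bytes, link_gbps, chunk_flops, mpu_gflops, links=2):
    """True if per-step operand communication hides under per-step computation.

    links=2 models the dual-cyclical case (operands arrive from both
    neighbors at once); links=1 models the single-cyclical case.
    """
    comm_time = chunk_bytes / (links * link_gbps * 1e9)   # seconds to receive a chunk
    compute_time = chunk_flops / (mpu_gflops * 1e9)       # seconds to process a chunk
    return comm_time <= compute_time

# Hypothetical numbers: a chunk whose transfer only fits under the
# compute time when both neighboring links are used simultaneously.
params = dict(chunk_bytes=8e6, link_gbps=25, chunk_flops=2e8, mpu_gflops=1000)
print(comm_is_masked(links=1, **params), comm_is_masked(links=2, **params))
```

In this example the single-cyclical transfer takes 320 µs against a 200 µs compute step, so communication stalls the pipeline; halving the transfer time with bi-directional links brings it to 160 µs and fully masks it.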
Accordingly, matrix operations (e.g., matrix multiplication or GEMM) can be performed efficiently even for large matrix operands and/or matrix operands with certain dimensions, such as a large matrix operand that is neither square nor a single vector (e.g., a "thin" matrix with a much larger height than width).

The distributed matrix processing functionality described throughout this disclosure provides numerous technical advantages, including alleviating the inefficiencies of existing approaches and enabling matrix operations to be executed efficiently, achieving 100% processing efficiency using the available processing resources, and efficiently scaling matrix operations across a variable number of processing resources operating in parallel. These advantages result in reduced processing time for matrix operations, which improves performance for applications that involve complex matrix operations, such as artificial intelligence and machine learning functionality implemented using artificial neural networks (e.g., convolutional neural networks, multilayer perceptrons (MLPs), restricted Boltzmann machines (RBMs), and deep belief networks (DBNs), among other examples).

Example embodiments that may be used to implement the matrix processing functionality of this disclosure will now be described with more particular reference to the attached FIGURES.

FIGURE 1 illustrates a schematic diagram for an example computing system 100 according to certain embodiments.

In some embodiments, the matrix processing functionality described throughout this disclosure may be implemented in system 100. Matrix processing functionality may be used in system 100 for a wide range of applications and/or use cases involving matrix operations, from graphics processing to machine learning and artificial intelligence, among other examples.
For example, in some embodiments, matrix processing functionality may be used to implement artificial intelligence and machine learning in artificial neural networks. Moreover, matrix processing functionality may be implemented by any component of system 100. For example, in the illustrated embodiment, system 100 includes edge devices 110, cloud services 120, matrix processing nodes 130, and network 150. Matrix processing nodes 130 may include any component or device with matrix processing functionality, including any component of system 100. For example, matrix processing nodes 130 may include cloud services 120 and/or servers implemented with matrix processing functionality (e.g., application servers in a datacenter), edge devices 110 implemented with matrix processing functionality (e.g., end-user devices 112, Internet-of-Things devices 114, gateways 116), and so forth. These various components of system 100 are discussed further below.

Edge devices 110 may include any equipment and/or devices deployed or connected near the "edge" of a communication system 100. Edge devices 110 may communicate with each other and/or with other remote networks and services (e.g., cloud services 120) through one or more networks and/or communication protocols, such as network 150. In some embodiments, certain edge devices 110 may include the matrix processing functionality described throughout this disclosure, and thus may be used as matrix processing nodes 130.
In the illustrated embodiment, edge devices 110 include end-user devices 112 (e.g., desktops, laptops, mobile devices), Internet-of-Things (IoT) devices 114, and gateways and/or routers 116, among other examples.

End-user devices 112 may include any device that enables or facilitates user interaction with computing system 100, including, for example, desktop computers, laptops, tablets, mobile phones and other mobile devices, and wearable devices (e.g., smart watches, smart glasses, headsets), among other examples.

IoT devices 114 may include any device capable of communicating and/or participating in an Internet-of-Things (IoT) system or network. IoT systems may refer to new or improved ad-hoc systems and networks composed of multiple different devices (e.g., IoT devices 114) interoperating and synergizing for a particular application or use case. Such ad-hoc systems are emerging as more and more products and equipment evolve to become "smart," meaning they are controlled or monitored by computer processors and are capable of communicating with other devices. For example, an IoT device 114 may include a computer processor and/or communication interface to allow interoperation with other components of system 100, such as with cloud services 120 and/or other edge devices 110. IoT devices 114 may be "greenfield" devices that are developed with IoT capabilities from the ground-up, or "brownfield" devices that are created by integrating IoT capabilities into existing legacy devices that were initially developed without IoT capabilities. For example, in some cases, IoT devices 114 may be built from sensors and communication modules integrated in or attached to "things," such as equipment, toys, tools, vehicles, living things (e.g., plants, animals, humans), and so forth.
Alternatively, or additionally, certain IoT devices 114 may rely on intermediary components, such as edge gateways or routers 116, to communicate with the various components of system 100.

IoT devices 114 may include various types of sensors for monitoring, detecting, measuring, and generating sensor data and signals associated with characteristics of their environment. For instance, a given sensor may be configured to detect one or more respective characteristics, such as movement, weight, physical contact, temperature, wind, noise, light, position, humidity, radiation, liquid, specific chemical compounds, battery life, wireless signals, computer communications, and bandwidth, among other examples. Sensors can include physical sensors (e.g., physical monitoring components) and virtual sensors (e.g., software-based monitoring components). IoT devices 114 may also include actuators to perform various actions in their respective environments. For example, an actuator may be used to selectively activate certain functionality, such as toggling the power or operation of a security system (e.g., alarm, camera, locks) or household appliance (e.g., audio system, lighting, HVAC appliances, garage doors), among other examples.

Indeed, this disclosure contemplates use of a potentially limitless universe of IoT devices 114 and associated sensors/actuators. IoT devices 114 may include, for example, any type of equipment and/or devices associated with any type of system 100 and/or industry, including transportation (e.g., automobile, airlines), industrial manufacturing, energy (e.g., power plants), telecommunications (e.g., Internet, cellular, and television service providers), medical (e.g., healthcare, pharmaceutical), food processing, and/or retail industries, among others.
In the transportation industry, for example, IoT devices 114 may include equipment and devices associated with aircrafts, automobiles, or vessels, such as navigation systems, autonomous flight or driving systems, traffic sensors and controllers, and/or any internal mechanical or electrical components that are monitored by sensors (e.g., engines). IoT devices 114 may also include equipment, devices, and/or infrastructure associated with industrial manufacturing and production, shipping (e.g., cargo tracking), communications networks (e.g., gateways, routers, servers, cellular towers), server farms, electrical power plants, wind farms, oil and gas pipelines, water treatment and distribution, wastewater collection and treatment, and weather monitoring (e.g., temperature, wind, and humidity sensors), among other examples. IoT devices 114 may also include, for example, any type of "smart" device or system, such as smart entertainment systems (e.g., televisions, audio systems, videogame systems), smart household or office appliances (e.g., heat-ventilation-air-conditioning (HVAC) appliances, refrigerators, washers and dryers, coffee brewers), power control systems (e.g., automatic electricity, light, and HVAC controls), security systems (e.g., alarms, locks, cameras, motion detectors, fingerprint scanners, facial recognition systems), and other home automation systems, among other examples. IoT devices 114 can be statically located, such as mounted on a building, wall, floor, ground, lamppost, sign, water tower, or any other fixed or static structure. IoT devices 114 can also be mobile, such as devices in vehicles or aircrafts, drones, packages (e.g., for tracking cargo), mobile devices, and wearable devices, among other examples. Moreover, an IoT device 114 can also be any type of edge device 110, including end-user devices 112 and edge gateways and routers 116.

Edge gateways and/or routers 116 may be used to facilitate communication to and from edge devices 110.
For example, gateways 116 may provide communication capabilities to existing legacy devices that were initially developed without any such capabilities (e.g., "brownfield" IoT devices). Gateways 116 can also be utilized to extend the geographical reach of edge devices 110 with short-range, proprietary, or otherwise limited communication capabilities, such as IoT devices 114 with Bluetooth or ZigBee communication capabilities. For example, gateways 116 can serve as intermediaries between IoT devices 114 and remote networks or services, by providing a front-haul to the IoT devices 114 using their native communication capabilities (e.g., Bluetooth, ZigBee), and providing a back-haul to other networks 150 and/or cloud services 120 using another wired or wireless communication medium (e.g., Ethernet, Wi-Fi, cellular). In some embodiments, a gateway 116 may be implemented by a dedicated gateway device, or by a general purpose device, such as another IoT device 114, end-user device 112, or other type of edge device 110.

In some instances, gateways 116 may also implement certain network management and/or application functionality (e.g., IoT management and/or IoT application functionality for IoT devices 114), either separately or in conjunction with other components, such as cloud services 120 and/or other edge devices 110. For example, in some embodiments, configuration parameters and/or application logic may be pushed or pulled to or from a gateway device 116, allowing IoT devices 114 (or other edge devices 110) within range or proximity of the gateway 116 to be configured for a particular IoT application or use case.

Cloud services 120 may include services that are hosted remotely over a network 150, or in the "cloud." In some embodiments, for example, cloud services 120 may be remotely hosted on servers in a datacenter (e.g., application servers or database servers).
Cloud services 120 may include any services that can be utilized by or for edge devices 110, including but not limited to, data storage, computational services (e.g., data analytics, searching, diagnostics and fault management), security services (e.g., surveillance, alarms, user authentication), mapping and navigation, geolocation services, network or infrastructure management, IoT application and management services, payment processing, audio and video streaming, messaging, social networking, news, and weather, among other examples. In some embodiments, certain cloud services 120 may include the matrix processing functionality described throughout this disclosure, and thus may be used as matrix processing nodes 130.

In general, edge devices 110 (and in particular IoT devices 114) may generate an extremely large volume and variety of data. IoT edge devices 114 typically offload this data to the cloud for processing and/or storage (e.g., by cloud services 120). Cloud services 120, however, may not necessarily be suited to handle the rapidly growing volume, variety, and velocity of data generated by IoT devices 114 and other edge devices 110. For example, cloud-based processing may not be ideal in certain circumstances, such as processing time-sensitive or highly confidential data, or when faced with network bandwidth constraints, among other examples. In some embodiments, cloud services 120 may leverage "edge" based processing using edge devices 110 to improve the performance of cloud services. Edge processing is an approach that involves processing certain data at the network edge (e.g., using edge devices 110), near where the data is generated, rather than simply funneling large volumes of data to the cloud for processing and storage. Certain data may still be sent to the cloud, as appropriate, such as for deeper analysis and/or long-term storage.
Edge processing may be used to complement the shortcomings of cloud-based processing (e.g., when cloud-based processing is inefficient, ineffective, and/or unsecure), and thus improve the handling of the growing volume, variety, and velocity of data generated by IoT devices 114 and/or other edge devices 110. For example, in some cases, processing data near its source (e.g., in the network edge) rather than in the cloud may improve performance and/or avoid system failures or disasters. Edge processing may also conserve network bandwidth, which may be particularly beneficial when facing bandwidth constraints and/or limited network connectivity.

In some embodiments, edge devices 110 that provide edge-based processing for cloud services 120 may be collectively referred to as the "fog," as they serve to extend the "cloud" to the edge of the network, thus creating a "fog" over the network edge. In some embodiments, devices 110 in the "fog" may connect and/or communicate with each other, for example, using an interconnection standard or protocol. For example, in some embodiments, device interconnection may be implemented using the open interconnect consortium (OIC) standard specification 1.0, released by the Open Connectivity Foundation™ (OCF) on December 23, 2015, which enables devices to discover and connect with each other. Another interconnection protocol that may be used is Thread, a networking protocol for Internet-of-Things (IoT) devices used in "smart" home automation and similar deployments, which has been developed by an alliance of organizations named the "Thread Group." Other interconnection protocols may also be used, including, for example, the optimized link state routing (OLSR) protocol, or the better approach to mobile ad-hoc networking (B.A.T.M.A.N.), among others.

Network 150 may be used to facilitate communication between the components of computing system 100.
For example, edge devices 110, such as end-user devices 112 and IoT devices 114, may use network 150 to communicate with each other and/or access one or more remote cloud services 120. Network 150 may include any number or type of communication networks, including, for example, local area networks, wide area networks, public networks, the Internet, cellular networks, Wi-Fi networks, short-range networks (e.g., Bluetooth or ZigBee), and/or any other wired or wireless networks or communication mediums.

Any, all, or some of the computing devices of system 100 may be adapted to execute any operating system, including Linux or other UNIX-based operating systems, Microsoft Windows, Windows Server, MacOS, Apple iOS, Google Android, or any customized and/or proprietary operating system, along with virtual machines adapted to virtualize execution of a particular operating system.

While FIGURE 1 is described as containing or being associated with a plurality of elements, not all elements illustrated within system 100 of FIGURE 1 may be utilized in each alternative implementation of the present disclosure. Additionally, one or more of the elements described in connection with the examples of FIGURE 1 may be located external to system 100, while in other instances, certain elements may be included within or as a portion of one or more of the other described elements, as well as other elements not described in the illustrated implementation. Further, certain elements illustrated in FIGURE 1 may be combined with other components, as well as used for alternative or additional purposes in addition to those purposes described herein.

Example Matrix Processing Architecture

FIGURES 2A-C illustrate block diagrams for an example embodiment of a matrix processing architecture.

In some embodiments, the matrix processing functionality described throughout this disclosure may be implemented using a matrix processing architecture, such as the matrix processing architecture of FIGURES 2A - 2C.
Matrix processing architectures, such as the matrix processing architecture of FIGURES 2A - 2C, may be implemented or used in a variety of systems, devices, and/or components, such as those described throughout this disclosure, including system 100 of FIGURE 1 and/or any of its associated components (e.g., cloud services 120 / datacenter servers, edge devices 110, matrix processing nodes 130). In some embodiments, the matrix processing architecture of FIGURES 2A - 2C may be used to implement artificial intelligence and machine learning in neural networks. The matrix processing architecture illustrated in FIGURES 2A - 2C is merely one example embodiment for performing the matrix processing functionality described throughout this disclosure. Other embodiments may use different types, arrangements, and/or numbers of components. For example, other embodiments may include any number of matrix processing chips 220, matrix processing clusters 230, matrix processing units (MPUs) 234, high bandwidth memory (HBM) modules 240, and/or memory resource blocks (MRBs) 238. Moreover, all or part of any component of the matrix processing architecture of FIGURES 2A - 2C (e.g., any component of matrix processing system 200, matrix processing chips 220, and/or matrix processing clusters 230) may be implemented as a separate or stand-alone component or chip, or may be integrated with other components or chips, such as a system-on-a-chip (SoC) that integrates various computer components into a single chip.

FIGURE 2A illustrates a block diagram for an example embodiment of a matrix processing system 200. In the illustrated embodiment, matrix processing system 200 includes host processor 260, host memory 270, matrix processing resources 210, and interconnect bus 280.

Host processor 260 may be configured to control and/or manage matrix processing system 200. For example, in some embodiments, host processor 260 may use matrix processing resources 210 to perform complex matrix operations.
Host processor 260 may be any processing resource capable of controlling and/or managing matrix processing functionality of matrix processing system 200. For example, in some embodiments, host processor 260 may be implemented using computer processors 300 or 400 of FIGURES 3 and 4, respectively. In some embodiments, host processor 260 may be a separate or stand-alone component that is communicatively coupled to matrix processing resources 210. Alternatively, in other embodiments, host processor 260 and matrix processing resources 210 may be integrated into the same component or chip. For example, in some embodiments, the components of matrix processing system 200, including host processor 260 and matrix processing resources 210, may be implemented as a system-on-a-chip (SoC).

Host memory 270 may include any type or combination of volatile and/or non-volatile memory. Examples of volatile memory include various types of random access memory (RAM), such as dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), and static random access memory (SRAM), among other examples. Examples of non-volatile memory include disk-based storage mediums (e.g., magnetic and/or optical storage mediums), solid-state storage (e.g., any form of persistent flash memory, including planar or three dimensional (3D) NAND flash memory or NOR flash memory), 3D crosspoint memory, electrically erasable programmable read-only memory (EEPROM), and/or other types of non-volatile random access memories (RAM), among other examples. Host memory 270 may be used, for example, to store information for host processor 260 during execution, such as code and/or data.

Interconnect bus 280 may be used, in some embodiments, to communicatively couple host processor 260 and host memory 270 to matrix processing resources 210.
Interconnect bus 280 may use any interconnection protocol, such as Peripheral Component Interconnect express (PCIe), Universal Serial Bus (USB), or Small Computer Systems Interface (SCSI), among other examples.

Matrix processing resources 210 may include any processing resources configured to perform matrix operations. For example, matrix processing resources 210 may be configured to perform matrix multiplication operations, convolution operations, element-wise matrix operations (e.g., +, *, /, <, >, ==), dimension shuffle operations, and/or any combination thereof. In some embodiments, matrix processing resources 210 may include processing resources that are designed and optimized for performing matrix operations. In some embodiments, matrix processing resources 210 may also be arranged hierarchically with multiple levels of processing resources. For example, in the illustrated embodiment, matrix processing resources 210 include a plurality of matrix processing chips 220, and may also include any processing resources within each matrix processing chip 220. For example, as discussed below in connection with FIGURES 2B and 2C, each matrix processing chip 220 may include a plurality of high bandwidth memory (HBM) modules 240 and a plurality of matrix processing clusters 230, and each matrix processing cluster 230 may include multiple matrix processing units 234. Thus, in some embodiments, matrix processing resources 210 may include multiple matrix processing chips 220, multiple high bandwidth memory (HBM) modules 240 and multiple matrix processing clusters 230 on each matrix processing chip 220, and/or multiple matrix processing units 234 on each matrix processing cluster 230.

Matrix processing chips 220 may be, for example, any chips or other components configured to perform matrix operations.
For example, in some embodiments, a matrix processing chip 220 may be a peripheral card or chip connected to host processor 260 using any type of interconnect interface, such as a PCIe interface. In some embodiments, a matrix processing chip 220 may be implemented using an integrated circuit, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), and/or any other type of circuitry. In the illustrated embodiment, matrix processing chips 220 are configured in a cyclical arrangement, with communication channels 215 between neighboring matrix processing chips 220. In some embodiments, communication channels 215 may provide one-way communication between neighboring matrix processing chips 220. In other embodiments, however, communication channels 215 may provide bi-directional communication between neighboring matrix processing chips 220. A cyclical arrangement with unidirectional communication between neighboring processing resources may be referred to as a "single-cyclical" configuration, while a cyclical arrangement with bi-directional communication between neighboring processing resources may be referred to as a "dual-cyclical" configuration.

FIGURE 2B illustrates a block diagram for an example embodiment of a matrix processing chip 220. In the illustrated embodiment, matrix processing chip 220 includes controller 222, host interface 224, inter-chip links 225, high bandwidth memory (HBM) modules 240, and matrix processing clusters 230.

Controller 222 may be configured to control and/or manage matrix operations performed by matrix processing chip 220. In some embodiments, controller 222 may control and/or manage matrix operations in conjunction with host processor 260 of FIGURE 2A and/or master control CPUs (MCCs) 232 of matrix processing clusters 230 of FIGURE 2C.
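As a rough illustration of the single-cyclical versus dual-cyclical distinction, the following sketch counts the steps needed for every chip in a ring to obtain a chunk of data held by every other chip, assuming each chip exchanges its currently held chunks with its neighbor(s) once per step. The model is a simplification invented purely for illustration, not the scheduling scheme of the disclosure.

```python
def steps_to_gather_all(num_chips, bidirectional):
    """Steps until every chip in a ring holds every chip's chunk,
    exchanging held chunks with neighbor(s) once per step (toy model)."""
    have = [{i} for i in range(num_chips)]   # chip i starts with its own chunk
    steps = 0
    while any(len(h) < num_chips for h in have):
        snapshot = [set(h) for h in have]
        for i in range(num_chips):
            have[i] |= snapshot[(i - 1) % num_chips]       # from one neighbor
            if bidirectional:
                have[i] |= snapshot[(i + 1) % num_chips]   # and from the other
        steps += 1
    return steps

# With 8 chips, the dual-cyclical ring finishes in roughly half the steps.
print(steps_to_gather_all(8, False), steps_to_gather_all(8, True))
```

For a ring of 8 chips this toy model gives 7 steps in the single-cyclical case and 4 in the dual-cyclical case, consistent with the roughly halved communication latency described above.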
For example, in some embodiments, host processor 260, controller 222, and/or master control CPUs (MCCs) 232 may be configured to receive a matrix operation or command, and distribute the matrix operation and matrix operands across matrix processing clusters 230 and high bandwidth memory (HBM) modules 240. In some embodiments, controller 222 may be a microprocessor, an integrated circuit, and/or any other type of circuitry and/or processing logic.

Host interface 224 may be a communication interface that enables a matrix processing chip 220 to communicate with host processor 260 of FIGURE 2A. In some embodiments, for example, controller 222 may use host interface 224 to communicate with host processor 260 of FIGURE 2A. Host interface 224 may use any type of interconnect protocol or interface, including Peripheral Component Interconnect express (PCIe), Universal Serial Bus (USB), or Small Computer Systems Interface (SCSI), among other examples.

Inter-chip links (ICLs) 225 may enable a matrix processing chip 220 to communicate with other matrix processing chips. For example, inter-chip links 225 may be used to implement the communication channels 215 between matrix processing chips 220 in FIGURE 2A. An inter-chip link 225 may be, for example, any communication interface that enables a matrix processing chip 220 to communicate with another matrix processing chip. In some embodiments, a matrix processing chip 220 may include multiple inter-chip links 225 (e.g., twelve inter-chip links). In some embodiments, an inter-chip link 225 may be implemented using one or more serializer / de-serializer (SerDes) interfaces. A SerDes interface may be a communication interface that converts data from serial to parallel, and vice-versa. For example, the transmitter of a SerDes interface may include a parallel-to-serial converter, and the receiver of a SerDes interface may include a serial-to-parallel converter.
In some embodiments, a matrix processing chip 220 may use multiple SerDes interfaces for each connection to another matrix processing chip (e.g., four SerDes interfaces between each pair of connected matrix processing chips).

High bandwidth memory (HBM) modules 240 may be memory components associated with matrix processing chip 220 that are used to store matrix operands and other matrix data. In some embodiments, high bandwidth memory (HBM) modules 240 may be designed to efficiently store and retrieve matrix data. In some embodiments, high bandwidth memory (HBM) modules 240 may be multi-dimensional memory components configured to store and retrieve data in multiple dimensions. For example, in some embodiments, high bandwidth memory (HBM) modules 240 may be memory components configured to store and retrieve data in two dimensions, such as rows and columns. Other embodiments, however, may use memory components configured to store and retrieve data using any other number of dimensions (e.g., one dimension, three dimensions, four dimensions, and so forth). In the illustrated embodiment, matrix processing chip 220 includes four high bandwidth memory (HBM) modules 240a-d. In some embodiments, high bandwidth memory (HBM) modules 240 may be shared by the matrix processing clusters 230 of a matrix processing chip 220.

Matrix processing clusters 230 may include processing resources configured to perform matrix operations, such as matrix multiplication, convolutions, and/or dimension shuffling, among other examples. In some embodiments, matrix processing clusters 230 may be collectively used to execute a particular matrix operation by performing matrix processing in parallel. In the illustrated embodiment, matrix processing chip 220 includes twelve matrix processing clusters 230a-l. Moreover, in the illustrated embodiment, matrix processing clusters 230 are configured or arranged using a two-dimensional mesh interconnection topology.
The interconnection topology of matrix processing clusters 230 may facilitate cyclical communication among the matrix processing clusters 230. Moreover, other embodiments may include any number and/or arrangement of matrix processing clusters 230.

FIGURE 2C illustrates a block diagram for an example embodiment of a matrix processing cluster 230. In the illustrated embodiment, matrix processing cluster 230 includes master control CPU (MCC) 232, matrix processing units (MPUs) 234, slicing engine 236, and memory resource blocks (MRBs) 238.

Master control CPU (MCC) 232 may be configured to control and/or manage matrix operations performed by a matrix processing cluster 230. In some embodiments, master control CPU 232 may be a microprocessor, an integrated circuit, and/or any other type of circuitry and/or processing logic. In some embodiments, master control CPU 232 may receive instructions from another component, such as host processor 260 of FIGURE 2A and/or controller 222 of FIGURE 2B. Based on the instructions, master control CPU 232 may then use matrix processing units 234 to perform matrix operations, such as matrix multiplication, convolutions, and/or dimension shuffling, among other examples. For example, master control CPU 232 may receive an instruction to perform a matrix multiplication operation, such as C = A * B. The instruction may include the handles or identifiers for each matrix, and may also indicate how the matrices should be stored in memory resource blocks (MRBs) 238. Matrices A and B may then be broken down into a series of smaller matrices (e.g., 32x32 matrices). Matrix operations may then be performed on the smaller matrices, and the partial results may be stored in memory resource blocks (MRBs) 238, until the output matrix C has been fully computed.

Matrix processing units (MPUs) 234 may be configured to perform matrix operations, such as matrix multiplication, convolutions, and/or dimension shuffling.
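As a non-limiting software sketch of the tiling described above (the 32x32 tile size and the accumulation order are illustrative assumptions, not the MCC's actual scheduling), the C = A * B operation may be decomposed into small sub-matrix products whose partial results accumulate into C:

```python
import numpy as np

def tiled_matmul(A, B, tile=32):
    """Break A and B into tile x tile sub-matrices, multiply the small
    sub-matrices, and accumulate the partial results into C."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n))
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for t in range(0, k, tile):
                # Each small product is a partial result held until C is complete.
                C[i:i+tile, j:j+tile] += A[i:i+tile, t:t+tile] @ B[t:t+tile, j:j+tile]
    return C
```

The final C is identical to the untiled product; only the order of the partial computations changes.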
In some embodiments, matrix processing units (MPUs) 234 perform matrix operations based on commands received from master control CPU (MCC) 232. Moreover, in some embodiments, each matrix processing cluster 230 may include multiple matrix processing units (MPUs) 234. For example, in the illustrated embodiment, matrix processing cluster 230 includes two matrix processing units (MPUs) 234. A matrix processing unit (MPU) 234 may be capable of performing matrix operations, such as matrix multiplication, on small matrices (e.g., 32x32 matrices). In some cases, a matrix processing unit (MPU) 234 may be designed and/or optimized to perform matrix multiplication operations. A matrix processing unit (MPU) 234 may load matrix operands from memory resource blocks (MRBs) 238. In some embodiments, a matrix processing unit (MPU) 234 may support the following arithmetic operations: matrix multiplication; unary matrix operations; binary matrix operations, such as addition (+), subtraction (-), multiplication (∗), division (/), bitwise XOR, AND, OR, logical and arithmetic left and right shift, comparison (>, <, >=, <= , ==, !=); and column-wise, row-wise, and matrix-wide operations, such as sum, max value, and min value.Slicing engine 236 may be configured to slice the matrix operands of a particular matrix operation into smaller partial matrices. For example, in some embodiments, master control CPU (MCC) 232 may use slicing engine 236 to break up matrix operands into smaller partial matrices for matrix processing units (MPUs) 234. In some embodiments, slicing engine 236 may include a convolution slicing engine (CSE) to perform matrix slicing for convolution operations. For example, in some embodiments, a convolution slicing engine (CSE) may slice matrix operands in a manner that enables a convolution operation to be cast as a matrix multiplication operation, thus enabling the same processing logic to perform both matrix multiplication and convolution operations. 
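The operation set above may be summarized with a small, purely illustrative dispatch table; the operation names and grouping here are assumptions for exposition, not the MPU's instruction encoding:

```python
import numpy as np

# Hypothetical grouping of the MPU operations listed above.
MPU_OPS = {
    "matmul":  lambda a, b: a @ b,            # matrix multiplication
    "add":     lambda a, b: a + b,            # binary element-wise operation
    "xor":     lambda a, b: a ^ b,            # bitwise operation (integer tiles)
    "greater": lambda a, b: a > b,            # comparison
    "rowsum":  lambda a, b: a.sum(axis=1),    # row-wise reduction
    "colmax":  lambda a, b: a.max(axis=0),    # column-wise reduction
    "min":     lambda a, b: a.min(),          # matrix-wide reduction
}

def mpu_execute(op, a, b=None):
    """Dispatch a single operation on small (e.g., 32x32) matrix tiles."""
    return MPU_OPS[op](a, b)
```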
Moreover, in some embodiments, slicing engine 236 and/or the associated convolution slicing engine (CSE) may be used to perform the dimension shuffle operations to reorder the dimensions of a matrix.Memory resource blocks (MRBs) 238 may be memory components on matrix processing cluster 230 used to store matrix operands and other matrix data. In some embodiments, memory resource blocks (MRBs) 238 may be designed to store and retrieve matrix data efficiently. In some embodiments, memory resource blocks (MRBs) 238 may be multi-dimensional memory components configured to store and retrieve data in multiple dimensions. For example, in some embodiments, memory resource blocks (MRBs) 238 may be memory components configured to store and retrieve data in two dimensions, such as rows and columns. In the illustrated embodiment, matrix processing cluster 230 includes ten memory resource blocks (MRBs) 238. Other embodiments, however, may include a different number of memory resource blocks (MRBs) 238 on a matrix processing cluster 230. In some embodiments, each memory resource block (MRB) 238 may be capable of storing a matrix of a certain size (e.g., a 256x512 matrix). In some embodiments, memory resource blocks (MRBs) 238 may be shared by the matrix processing units (MPUs) 234 of a particular matrix processing cluster 230.In some embodiments, the matrix processing architecture of FIGURES 2A - 2C may be used to implement the matrix processing functionality described throughout this disclosure. For example, matrix processing system 200 may be used to perform matrix operations using a distributed approach that achieves 100% processing efficiency using the available processing resources. For example, in some embodiments, a matrix operation may be distributed across multiple processing resources 210 that are optimized for matrix processing, thus enabling full utilization of the processing resources 210 throughout the duration of the matrix operation. 
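The casting of a convolution as a matrix multiplication described above may be sketched in software as follows. The im2col-style patch layout is an illustrative assumption and not necessarily the CSE's actual slicing scheme, and the kernel is applied without flipping, as is common in neural networks:

```python
import numpy as np

def conv2d_as_matmul(x, w):
    """Valid, stride-1 2-D convolution cast as one matrix multiplication."""
    H, W = x.shape
    kh, kw = w.shape
    oh, ow = H - kh + 1, W - kw + 1
    # Slice every kh x kw patch of x into one column of a patch matrix.
    cols = np.empty((kh * kw, oh * ow))
    for r in range(oh):
        for c in range(ow):
            cols[:, r * ow + c] = x[r:r+kh, c:c+kw].ravel()
    # The convolution is now a single matrix product with the flattened kernel.
    return (w.ravel() @ cols).reshape(oh, ow)
```

Once the operands are sliced this way, the same matrix multiplication logic can serve both operation types.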
For example, matrix processing system 200 may include multiple processing resources 210 that are designed and optimized for performing matrix operations. In some embodiments, these processing resources 210 may be configured in a single-cyclical or dual-cyclical arrangement. In addition, the processing resources 210 may be arranged hierarchically with multiple levels of processing resources. For example, in some embodiments, the processing resources 210 may include multiple matrix processing chips 220, multiple high bandwidth memory (HBM) modules 240 and multiple matrix processing clusters 230 on each matrix processing chip 220, and/or multiple matrix processing units (MPUs) 234 on each matrix processing cluster 230. This processing architecture enables matrix operations to be distributed across multiple processing resources 210 and/or processing hierarchies with 100% processing efficiency. In addition, this processing architecture enables matrix operations to be efficiently scaled across a variable number of processing resources 210 operating in parallel, while still achieving 100% processing efficiency. For example, scaling may be achieved by adjusting the number of processing resources 210 used to perform a particular matrix operation, such as the number of matrix processing systems 200 or servers, the number of matrix processing chips 220 in each matrix processing system 200 or server, and so forth.

As an example, the matrix processing architecture of FIGURES 2A - 2C may be used to implement matrix multiplication and/or convolution operations. For example, in some embodiments, a matrix multiplication operation may be distributed across multiple processing resources 210 in a manner that results in the latency for communicating matrix operands being less than the matrix processing time, which allows the communication of matrix operands to be completed while the matrix processing is being performed.
For example, for certain matrix operations involving matrix operands with certain dimensions (e.g., matrix multiplication with a "thin" matrix operand), the time required to access and communicate matrix operands may exceed the time required to perform the actual matrix computations, resulting in idle processing time while the matrix operands are being obtained from memory and/or communicated to processing resources 210. For example, a single-cyclical configuration (e.g., where each processing resource 210 only obtains matrix operands and data from one neighboring processing resource 210 at any given time) may be unable to achieve 100% processing efficiency for these particular types of matrix operations and matrix operands. However, a dual-cyclical configuration of processing resources 210 enables each processing resource to perform matrix computations while simultaneously obtaining matrix operands and data from both of its neighboring processing resources 210, which significantly reduces the latency for communicating matrix operands, and thus avoids any idle processing time. For example, the communication latency for certain operations may be reduced by half when using a dual-cyclical approach as opposed to a single-cyclical approach. In this manner, the latency for communicating matrix operands and matrix data can be fully masked by the matrix processing time, thus avoiding any wasted or idle processing time and achieving 100% processing efficiency. Accordingly, matrix operations (e.g., matrix multiplication or GEMM) can be performed efficiently even for large matrix operands and/or matrix operands with certain dimensions, such as a large matrix operand that is neither square nor a single vector (e.g., a "thin" matrix with a much larger height than width). For example, matrix multiplication can be performed efficiently even when multiplying two thin matrices, a thin matrix and a square matrix, and so forth. 
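The masking condition described above can be stated as simple arithmetic; the numbers below are illustrative only and do not correspond to any particular hardware:

```python
def comm_is_masked(bytes_per_stage, link_bandwidth, flops_per_stage,
                   flop_rate, neighbor_links=2):
    """True when the per-stage communication time, spread across the
    available neighbor links (two in a dual-cyclical arrangement),
    does not exceed the per-stage matrix processing time."""
    comm_time = bytes_per_stage / (link_bandwidth * neighbor_links)
    compute_time = flops_per_stage / flop_rate
    return comm_time <= compute_time

# Illustrative numbers: a single-cyclical arrangement (one link) leaves idle
# processing time, while a dual-cyclical arrangement (two links) halves the
# communication latency and fully masks it behind the computation.
single = comm_is_masked(3e6, 1e9, 2e9, 1e12, neighbor_links=1)
dual   = comm_is_masked(3e6, 1e9, 2e9, 1e12, neighbor_links=2)
```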
Similarly, convolution operations may be distributed across multiple processing resources 210 in a manner that results in 100% processing efficiency using the available processing resources.As an example, when a matrix operation or command is received, the matrix operation may be distributed across the processing resources 210 of matrix processing system 200. For example, the matrix operands (or input matrices) may be partitioned based on the number of available processing resources 210. Moreover, in some embodiments, the partitions may be across the rows of the matrix operands, and/or across any other dimension of the matrix operands. Each partition may then be distributed to a particular processing resource 210. Each processing resource 210 may then perform a plurality of partial matrix operations. In some embodiments, the plurality of partial matrix operations is performed in a plurality of stages. For example, each processing resource 210 may perform a particular stage of partial matrix operations while simultaneously sending and receiving partial matrix data to and from its neighboring processing resources 210. For example, in a single-cyclical configuration of processing resources 210, each processing resource 210 either sends or receives partial matrix data to or from each neighboring processing resource 210. Similarly, in a dual-cyclical configuration of processing resources 210, each processing resource 210 may send and receive partial matrix data to and from each neighboring processing resource 210. Each processing resource 210 may then use the partial matrix data for subsequent partial matrix operations. The result of the matrix operation may then be determined based on the partial matrix operations collectively performed by the processing resources 210.Moreover, if the processing resources 210 are arranged hierarchically, the matrix operation may be distributed in a hierarchical manner. 
For example, the matrix operands (or input matrices) may initially be partitioned based on the number of available matrix processing chips 220. Each partition, and the associated partial matrix operations, may then be distributed to a particular matrix processing chip 220. The partition and partial matrix operations distributed to a particular matrix processing chip 220 may then be similarly partitioned and distributed across the matrix processing clusters 230 and/or high bandwidth memory (HBM) modules 240 of the particular matrix processing chip 220. For example, for certain matrix operations, partial matrix operations may be distributed to each matrix processing cluster 230. Alternatively, for certain matrix operations, partial matrix operations may be distributed across various "logical processing nodes" (e.g., groups of matrix processing clusters 230 associated with a high-bandwidth memory (HBM) module 240), and may then be distributed to each matrix processing cluster 230 of a particular logical processing node. In some embodiments, the matrix processing clusters 230 (and/or the logical processing nodes) may be cyclically configured similar to the matrix processing chips 220. The partition and partial matrix operations distributed to a particular matrix processing cluster 230 may then be similarly partitioned and distributed across the matrix processing units (MPUs) 234 of the particular matrix processing cluster 230.

Example Computer Processor Architectures

FIGURES 3 and 4 illustrate block diagrams for example embodiments of computer processors that may be used in accordance with embodiments disclosed herein.
For example, the computer processors illustrated in FIGURES 3 and 4 may be used as host processors associated with matrix processing systems (e.g., host processor 260 in matrix processing system 200 of FIGURE 2A ), or as processors associated with other components and/or devices discussed throughout this disclosure (e.g., processors associated with components in system 100 of FIGURE 1 ). Other processor and system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.FIGURE 3 illustrates a block diagram for an example embodiment of a processor 300. Processor 300 is an example of a type of hardware device that can be used in connection with the embodiments described throughout this disclosure. Processor 300 may be any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a multi-core processor, a single core processor, or other device to execute code. Although only one processor 300 is illustrated in FIGURE 3 , a processing element may alternatively include more than one of processor 300 illustrated in FIGURE 3 . Processor 300 may be a single-threaded core or, for at least one embodiment, the processor 300 may be multi-threaded in that it may include more than one hardware thread context (or "logical processor") per core.FIGURE 3 also illustrates a memory 302 coupled to processor 300 in accordance with an embodiment. 
Memory 302 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. Such memory elements can include, but are not limited to, random access memory (RAM), read only memory (ROM), logic blocks of a field programmable gate array (FPGA), erasable programmable read only memory (EPROM), and electrically erasable programmable ROM (EEPROM).Processor 300 can execute any type of instructions associated with algorithms, processes, or operations detailed herein. Generally, processor 300 can transform an element or an article (e.g., data) from one state or thing to another state or thing.Code 304, which may be one or more instructions to be executed by processor 300, may be stored in memory 302, or may be stored in software, hardware, firmware, or any suitable combination thereof, or in any other internal or external component, device, element, or object where appropriate and based on particular needs. In one example, processor 300 can follow a program sequence of instructions indicated by code 304. Each instruction enters a front-end logic 306 and is processed by one or more decoders 308. The decoder may generate, as its output, a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals that reflect the original code instruction. Front-end logic 306 may also include register renaming logic and scheduling logic, which generally allocate resources and queue the operation corresponding to the instruction for execution.Processor 300 can also include execution logic 314 having a set of execution units 316a, 316b, 316n, etc. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. 
Execution logic 314 performs the operations specified by code instructions.After completion of execution of the operations specified by the code instructions, back-end logic 318 can retire the instructions of code 304. In one embodiment, processor 300 allows out of order execution but requires in order retirement of instructions. Retirement logic 320 may take a variety of known forms (e.g., re-order buffers or the like). In this manner, processor 300 is transformed during execution of code 304, at least in terms of the output generated by the decoder, hardware registers and tables utilized by register renaming logic 310, and any registers (not shown) modified by execution logic 314.Although not shown in FIGURE 3 , a processing element may include other elements on a chip with processor 300. For example, a processing element may include memory control logic along with processor 300. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches. In some embodiments, non-volatile memory (such as flash memory or fuses) may also be included on the chip with processor 300.FIGURE 4 illustrates a block diagram for an example embodiment of a multiprocessor 400. As shown in FIGURE 4 , multiprocessor system 400 is a point-to-point interconnect system, and includes a first processor 470 and a second processor 480 coupled via a point-to-point interconnect 450. In some embodiments, each of processors 470 and 480 may be some version of processor 300 of FIGURE 3 .Processors 470 and 480 are shown including integrated memory controller (IMC) units 472 and 482, respectively. Processor 470 also includes as part of its bus controller units point-to-point (P-P) interfaces 476 and 478; similarly, second processor 480 includes P-P interfaces 486 and 488. 
Processors 470, 480 may exchange information via a point-to-point (P-P) interface 450 using P-P interface circuits 478, 488. As shown in FIGURE 4 , IMCs 472 and 482 couple the processors to respective memories, namely a memory 432 and a memory 434, which may be portions of main memory locally attached to the respective processors.Processors 470, 480 may each exchange information with a chipset 490 via individual P-P interfaces 452, 454 using point to point interface circuits 476, 494, 486, 498. Chipset 490 may optionally exchange information with the coprocessor 438 via a high-performance interface 439. In one embodiment, the coprocessor 438 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, matrix processor, or the like.A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.Chipset 490 may be coupled to a first bus 416 via an interface 496. In one embodiment, first bus 416 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of this disclosure is not so limited.As shown in FIGURE 4 , various I/O devices 414 may be coupled to first bus 416, along with a bus bridge 418 which couples first bus 416 to a second bus 420. In one embodiment, one or more additional processor(s) 415, such as coprocessors, high-throughput MIC processors, GPGPU's, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), matrix processors, field programmable gate arrays, or any other processor, are coupled to first bus 416. 
In one embodiment, second bus 420 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 420 including, for example, a keyboard and/or mouse 422, communication devices 427 and a storage unit 428 such as a disk drive or other mass storage device which may include instructions/code and data 430, in one embodiment. Further, an audio I/O 424 may be coupled to the second bus 420. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIGURE 4 , a system may implement a multi-drop bus or other such architecture.All or part of any component of FIGURE 4 may be implemented as a separate or stand-alone component or chip, or may be integrated with other components or chips, such as a system-on-a-chip (SoC) that integrates various computer components into a single chip.Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Certain embodiments may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.Program code, such as code 430 illustrated in FIGURE 4 , may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example; a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. 
The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of this disclosure also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein.
Such embodiments may also be referred to as program products.

Distributed Matrix Operations

FIGURES 5, 6A-C, 7A-C, and 8A-C illustrate example operations in a neural network. In some embodiments, these example operations may be performed using a matrix processing architecture, such as the matrix processing architecture of FIGURES 2A - 2C. The fundamental operations of a neural network may include forward propagation, backward propagation, and weight updates. These operations may be used, in some embodiments, to train a neural network in order to provide machine learning functionality. For example, a forward propagation operation may include propagating a particular input through a neural network in order to generate a corresponding output. The input to the forward propagation operation may be a training pattern with a known or expected output. A backward propagation operation may then be used to determine the error associated with the forward propagation operation based on the difference or delta between the calculated output and the expected output of the forward propagation operation. A weight update operation may then be used to determine updated weight values in order to minimize the associated error. In some embodiments, these neural network operations may be performed using matrix operations. For example, the input values, weights, and output values may be represented using matrices. In some embodiments, these neural network operations may be implemented using the following formulas:

A2 = w ∗ A1 (forward propagation)
A1 = wT ∗ A2 (backward propagation)
Δw = A1T ∗ A2 (weight update)

FIGURE 5 illustrates an example of partitioning matrix operands. Matrix operands may be partitioned, for example, to perform matrix operations using the distributed matrix processing functionality described throughout this disclosure.
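The three formulas above may be exercised with a toy example. Square shapes are used purely so that each product is well-defined as the formulas are written, and the handling of error deltas is simplified to the bare matrix form of the formulas:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                               # illustrative layer width
w  = rng.standard_normal((n, n))    # weight matrix
A1 = rng.standard_normal((n, n))    # input activation matrix

A2     = w @ A1        # forward propagation:   A2 = w * A1
A1back = w.T @ A2      # backward propagation:  A1 = wT * A2
dw     = A1.T @ A2     # weight update:         Delta-w = A1T * A2
```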
In particular, matrix partitioning may be performed for neural network operations, such as those illustrated in FIGURES 6, 7, and 8.

The illustrated embodiment demonstrates matrix partitioning for a weight matrix (W) and an activation matrix (A). In the illustrated embodiment, weight matrix (W) and activation matrix (A) are partitioned into P partitions. In some embodiments, matrix operands may be partitioned into a number of partitions corresponding to the number of available processing resources. For example, weight matrix (W) and activation matrix (A) may be partitioned into P partitions corresponding to P processing resources. Moreover, in some embodiments, the matrix operands may be partitioned across their rows. Each partition may then be distributed to a particular processing resource, as described throughout this disclosure.

In some embodiments, matrix operands may be partitioned hierarchically based on the hierarchical arrangement of processing resources. For example, the matrix operands may initially be partitioned based on the number of available matrix processing chips (e.g., matrix processing chips 220 of FIGURE 2A ). Each partition, and the associated partial matrix operations, may then be distributed to a particular matrix processing chip. The partition and partial matrix operations distributed to a particular matrix processing chip may then be similarly partitioned and distributed across the matrix processing clusters of that matrix processing chip (e.g., matrix processing clusters 230 of FIGURE 2B ). The partition and partial matrix operations distributed to a particular matrix processing cluster may then be similarly partitioned and distributed across the matrix processing units (MPUs) of that matrix processing cluster (e.g., matrix processing units (MPUs) 234 of FIGURE 2C ).

FIGURES 6A-C illustrate an example weight update operation in a neural network.
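The row-wise, hierarchical partitioning described above may be sketched as follows; the partition counts and matrix sizes are illustrative assumptions:

```python
import numpy as np

P = 4                                   # illustrative number of chips
W = np.arange(32.).reshape(8, 4)        # weight matrix (W)
A = np.arange(48.).reshape(8, 6)        # activation matrix (A)

# First level: one row-block per matrix processing chip.
w_parts = np.split(W, P, axis=0)
a_parts = np.split(A, P, axis=0)

# Second level: each chip's partition is split again across its clusters.
clusters_per_chip = 2
a_sub = [np.split(part, clusters_per_chip, axis=0) for part in a_parts]
```

Stacking the partitions back together recovers the original operands, so the partitioning itself loses no information.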
FIGURE 6A illustrates the weight update operation that is to be performed, and FIGURES 6B and 6C illustrate how the weight update operation is performed.FIGURE 6A illustrates the following operation: A ∗ B = C. A weight update operation may be implemented using the formula Δw = A1T ∗ A2, which may translate as follows in FIGURE 6A : matrix A corresponds to A1T (e.g., the transpose of the first activation matrix); matrix B corresponds to A2 (e.g., the second activation matrix); and matrix C corresponds to Δw (e.g., the updated weight matrix).Matrices A and B may first be partitioned based on the number of available processing resources, as described in connection with FIGURE 5 . For example, in some embodiments, matrices A and B may be partitioned into P partitions corresponding to the number of available matrix processing chips (e.g., matrix processing chips 220 of FIGURE 2A ). For example, if there are P matrix processing chips, the rows of matrix A may be partitioned into partitions a1 - ap, and the rows of matrix B may be partitioned into partitions b1 - bp. Each partition may then be distributed to a particular matrix processing chip. For example, partitions a1 and b1 may be distributed to a first matrix processing chip, partitions a2 and b2 may be distributed to a second matrix processing chip, and so forth.Moreover, in some embodiments the matrix operands may be further partitioned based on the hierarchical arrangement of processing resources, as described in connection with FIGURE 5 . For example, the partition distributed to a particular matrix processing chip may then be similarly partitioned and distributed across the matrix processing clusters of that matrix processing chip (e.g., matrix processing clusters 230 of FIGURE 2B ). 
The partition distributed to a particular matrix processing cluster may then be similarly partitioned and distributed across the matrix processing units (MPUs) of that matrix processing cluster (e.g., matrix processing units (MPUs) 234 of FIGURE 2C ).The weight update operation may then be performed as described in connection with FIGURES 6B and 6C .FIGURE 6B illustrates the first stage of the weight update operation. In the first stage, each matrix processing chip may perform a partial matrix multiplication operation using its respective partitions of matrices A and B. For example, the first chip may perform a partial matrix multiplication operation using partitions a1 and b1, the second chip may perform a partial matrix multiplication operation using partitions a2 and b2, and so forth. The partial result calculated by each matrix processing chip may then be stored in the corresponding location in result matrix C.Moreover, in some embodiments, the partial matrix operations may be further distributed based on the hierarchical arrangement of processing resources. For example, the partial matrix operations distributed to a particular matrix processing chip may then be similarly distributed across the matrix processing clusters of that matrix processing chip (e.g., matrix processing clusters 230 of FIGURE 2B ). The partial matrix operations distributed to a particular matrix processing cluster may then be similarly distributed across the matrix processing units (MPUs) of that matrix processing cluster (e.g., matrix processing units (MPUs) 234 of FIGURE 2C ).While the partial operations are being performed by the matrix processing chips, each chip may simultaneously send and receive partial matrix operands to and from its neighboring matrix processing chips. 
For example, in some embodiments, the matrix processing chips may be configured in a single-cyclical arrangement (e.g., with one-way communication between neighboring chips) or a dual-cyclical arrangement (e.g., with two-way communication between neighboring chips). In a single-cyclical configuration, each matrix processing chip may send or receive partial matrix operands to or from each neighboring chip. However, a single-cyclical configuration may be unable to achieve 100% processing efficiency for certain matrix operations and matrix operands (e.g., a large matrix operand which is neither square nor a single vector, such as a "thin" matrix with a much larger height than width). In a dual-cyclical configuration, each matrix processing chip may send and receive matrix operands to and from both neighboring chips. Accordingly, a dual-cyclical configuration may significantly reduce the latency for communicating matrix operands, thus avoiding any idle processing time.

Using either approach, the partitions of matrix B (e.g., partitions b1 - bp) are shifted across matrix processing chips during each stage of partial matrix operations. For example, the illustrated embodiment uses a single-cyclical approach, such that each partition of matrix B (e.g., partitions b1 - bp) is transmitted from its current chip to a single neighboring chip.
Other embodiments may use a dual-cyclical approach, such that each partition of matrix B (e.g., partitions b1 - bp) is transmitted from its current chip to both neighboring chips, thus reducing the latency for communicating partial matrix operands by half.

In this manner, during each stage of partial matrix operations, partial matrix operands (e.g., partitions b1 - bp) are shifted to neighboring chip(s), and each matrix processing chip may then use the partial matrix operands received from neighboring chips for subsequent partial matrix operations, as described in connection with FIGURE 6C.

FIGURE 6C illustrates the second stage of the weight update operation. In the second stage, each matrix processing chip may perform a partial matrix multiplication operation using its respective partitions of matrices A and B. For example, while the partitions of matrix A remain the same across the chips, the partitions of matrix B have been shifted across the chips, as described in connection with FIGURE 6B. Thus, the first chip may perform a partial matrix multiplication operation using partitions a1 and b2, the second chip may perform a partial matrix multiplication operation using partitions a2 and b3, and so forth. Moreover, in some embodiments, the partial matrix operations may be further distributed based on the hierarchical arrangement of processing resources, as described in connection with FIGURE 6B. The partial result calculated by each matrix processing chip may then be stored in the corresponding location in result matrix C.

Moreover, while the partial operations are being performed by the matrix processing chips, each chip may simultaneously send and receive partial matrix operands to and from its neighboring matrix processing chips, as described in connection with FIGURE 6B.
For example, each matrix processing chip may send its current partition of matrix B (e.g., partitions b1 - bp) to one or more neighboring chips.

Thus, during each stage of partial matrix operations, partial matrix operands (e.g., partitions b1 - bp) are shifted to neighboring chip(s), and each matrix processing chip may then use the partial matrix operands received from neighboring chips for subsequent partial matrix operations. These stages of the matrix operation may continue in this manner until all partial results for result matrix C have been computed. The result of the matrix operation may then be determined using the partial results collectively computed by the matrix processing chips.

FIGURES 7A-C illustrate an example forward propagation operation in a neural network. FIGURE 7A illustrates the forward propagation operation that is to be performed, and FIGURES 7B and 7C illustrate how the forward propagation operation is performed.

FIGURE 7A illustrates the following operation: A ∗ B = C. A forward propagation operation may be implemented using the formula A2 = w ∗ A1, which may translate as follows in FIGURE 7A: matrix A corresponds to w (e.g., the weight matrix); matrix B corresponds to A1 (e.g., the first activation matrix); and matrix C corresponds to A2 (e.g., the second activation matrix).

Matrices A and B may first be partitioned based on the number of available processing resources, as described in connection with FIGURE 5. For example, in some embodiments, matrices A and B may be partitioned into P partitions corresponding to the number of available matrix processing chips (e.g., matrix processing chips 220 of FIGURE 2A). For example, if there are P matrix processing chips, the rows of matrix A may be partitioned into partitions a1x - apx, and the rows of matrix B may be partitioned into partitions b1 - bp. Each partition may then be distributed to a particular matrix processing chip.
For example, partitions a1x and b1 may be distributed to a first matrix processing chip, partitions a2x and b2 may be distributed to a second matrix processing chip, and so forth.

Moreover, in some embodiments, the matrix operands may be further partitioned based on the hierarchical arrangement of processing resources, as described in connection with FIGURE 5. For example, the partition distributed to a particular matrix processing chip may then be similarly partitioned and distributed across the matrix processing clusters of that matrix processing chip (e.g., matrix processing clusters 230 of FIGURE 2B). The partition distributed to a particular matrix processing cluster may then be similarly partitioned and distributed across the matrix processing units (MPUs) of that matrix processing cluster (e.g., matrix processing units (MPUs) 234 of FIGURE 2C).

The forward propagation operation may then be performed as described in connection with FIGURES 7B and 7C. For example, the corresponding partitions of result matrix C (e.g., c1 - cp) may be calculated and stored by each matrix processing chip, such that ci = Σ aij ∗ bj.

FIGURE 7B illustrates the first stage of the forward propagation operation. In the first stage, each matrix processing chip may perform a partial matrix multiplication operation using its respective partitions of matrices A and B. For example, the first chip may perform a partial matrix multiplication operation using partitions a11 and b1, the second chip may perform a partial matrix multiplication operation using partitions a22 and b2, and so forth. The partial result calculated by each matrix processing chip may then be stored in the corresponding partition c1 - cp of result matrix C, such that ci = aii ∗ bi.

Moreover, in some embodiments, the partial matrix operations may be further distributed based on the hierarchical arrangement of processing resources.
For example, the partial matrix operations distributed to a particular matrix processing chip may then be similarly distributed across the matrix processing clusters of that matrix processing chip (e.g., matrix processing clusters 230 of FIGURE 2B). The partial matrix operations distributed to a particular matrix processing cluster may then be similarly distributed across the matrix processing units (MPUs) of that matrix processing cluster (e.g., matrix processing units (MPUs) 234 of FIGURE 2C).

While the partial operations are being performed by the matrix processing chips, each chip may simultaneously send and receive partial matrix operands to and from its neighboring matrix processing chips, using a single-cyclical or dual-cyclical configuration, as described in connection with FIGURE 6B. Thus, the partitions of matrix B (e.g., partitions b1 - bp) may be shifted across matrix processing chips during each stage of partial matrix operations. For example, the illustrated embodiment uses a single-cyclical approach, such that each partition of matrix B (e.g., partitions b1 - bp) is transmitted from its current chip to a single neighboring chip. Other embodiments may use a dual-cyclical approach, such that each partition of matrix B (e.g., partitions b1 - bp) is transmitted from its current chip to both neighboring chips, thus reducing the latency for communicating partial matrix operands by half.

In this manner, during each stage of partial matrix operations, partial matrix operands (e.g., partitions b1 - bp) are shifted to neighboring chip(s), and each matrix processing chip may then use the partial matrix operands received from neighboring chips for subsequent partial matrix operations, as described in connection with FIGURE 7C.

FIGURE 7C illustrates the second stage of the forward propagation operation. In the second stage, each matrix processing chip may perform a partial matrix multiplication operation using its respective partitions of matrices A and B.
For example, while the partitions of matrix A remain the same across the chips, the partitions of matrix B have been shifted across the chips, as described in connection with FIGURE 7B. Thus, the first chip may perform a partial matrix multiplication operation using partitions a12 and b2, the second chip may perform a partial matrix multiplication operation using partitions a23 and b3, and so forth. Moreover, in some embodiments, the partial matrix operations may be further distributed based on the hierarchical arrangement of processing resources, as described in connection with FIGURE 7B. The partial result calculated by each matrix processing chip may then be added to the current value stored in the corresponding partition c1 - cp of result matrix C, such that ci = ci + ai(i+1) ∗ bi+1. In this manner, when all partial operations are complete, each partition c1 - cp of result matrix C contains the sum of the partial results calculated by the corresponding matrix processing chip, such that ci = Σ aij ∗ bj.

Moreover, while the partial operations are being performed by the matrix processing chips, each chip may simultaneously send and receive partial matrix operands to and from its neighboring matrix processing chips, as described in connection with FIGURE 7B. For example, each matrix processing chip may send its current partition of matrix B (e.g., partitions b1 - bp) to one or more neighboring chips.

Thus, during each stage of partial matrix operations, partial matrix operands (e.g., partitions b1 - bp) are shifted to neighboring chip(s), and each matrix processing chip may then use the partial matrix operands received from neighboring chips for subsequent partial matrix operations. These stages of the matrix operation may continue in this manner until all partial results for result matrix C have been computed.
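The staged, ring-shifted accumulation described above can be simulated end-to-end in a few lines of plain Python. This is a behavioral sketch only: the helper names are invented for illustration, the P partial products of each stage run sequentially here rather than concurrently, and only the single-cyclical shift is modeled.

```python
def matmul(x, y):
    """Naive dense matrix multiply on lists of rows."""
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def madd(x, y):
    """Element-wise matrix addition."""
    return [[u + v for u, v in zip(rx, ry)] for rx, ry in zip(x, y)]

def ring_matmul(a_parts, b_parts):
    """Simulate P chips computing C = A * B with a single-cyclical shift.

    a_parts[i]: chip i's fixed row partition of A (full width).
    b_parts[i]: chip i's initial row partition of B; shifted each stage.
    At each stage, column block j of a_parts[i] is multiplied against
    whichever b partition chip i currently holds, and accumulated into c_i.
    """
    p = len(a_parts)
    kblk = len(b_parts[0])            # rows per b partition
    held = list(range(p))             # index of the b partition each chip holds
    c_parts = [None] * p
    for stage in range(p):            # P stages of partial operations
        for i in range(p):
            j = held[i]
            a_block = [row[j * kblk:(j + 1) * kblk] for row in a_parts[i]]
            partial = matmul(a_block, b_parts[j])
            c_parts[i] = partial if c_parts[i] is None else madd(c_parts[i], partial)
        held = [held[(i + 1) % p] for i in range(p)]  # shift b to the neighbor
    return c_parts
```

Concatenating the returned partitions row-wise reproduces the monolithic product A ∗ B, which corresponds to determining the final result from the partial results collectively computed by the chips.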
The result of the matrix operation may then be determined using the partial results collectively computed by the matrix processing chips.

FIGURES 8A-C illustrate an example backward propagation operation in a neural network. FIGURE 8A illustrates the backward propagation operation that is to be performed, and FIGURES 8B and 8C illustrate how the backward propagation operation is performed.

FIGURE 8A illustrates the following operation: AT ∗ B = C. A backward propagation operation may be implemented using the formula A1 = wT ∗ A2, which may translate as follows in FIGURE 8A: matrix A corresponds to w (e.g., the weight matrix); matrix B corresponds to A2 (e.g., the second activation matrix); and matrix C corresponds to A1 (e.g., the first activation matrix). In this example, the matrix operation AT ∗ B = C may be performed without having to perform a transpose on the elements of matrix A in memory.

Matrices A and B may first be partitioned based on the number of available processing resources, as described in connection with FIGURE 5. For example, in some embodiments, matrices A and B may be partitioned into P partitions corresponding to the number of available matrix processing chips (e.g., matrix processing chips 220 of FIGURE 2A). For example, if there are P matrix processing chips, the rows of matrix A may be partitioned into partitions a1x - apx, and the rows of matrix B may be partitioned into partitions b1 - bp. Each partition may then be distributed to a particular matrix processing chip. For example, partitions a1x and b1 may be distributed to a first matrix processing chip, partitions a2x and b2 may be distributed to a second matrix processing chip, and so forth.

Moreover, in some embodiments, the matrix operands may be further partitioned based on the hierarchical arrangement of processing resources, as described in connection with FIGURE 5.
For example, the partition distributed to a particular matrix processing chip may then be similarly partitioned and distributed across the matrix processing clusters of that matrix processing chip (e.g., matrix processing clusters 230 of FIGURE 2B). The partition distributed to a particular matrix processing cluster may then be similarly partitioned and distributed across the matrix processing units (MPUs) of that matrix processing cluster (e.g., matrix processing units (MPUs) 234 of FIGURE 2C).

The backward propagation operation may then be performed as described in connection with FIGURES 8B and 8C. For example, the corresponding partitions of result matrix C (e.g., c1 - cp) may be calculated and stored by each matrix processing chip, such that ci = A[:,i]T ∗ B (e.g., the i-th column partition of matrix A, transposed, multiplied by matrix B).

FIGURE 8B illustrates the first stage of the backward propagation operation. In the first stage, each matrix processing chip may perform a partial matrix multiplication operation using its respective partitions of matrices A and B. For example, the first chip may perform a partial matrix multiplication operation using partitions a12 and b1, the second chip may perform a partial matrix multiplication operation using partitions a23 and b2, and so forth. The partial result calculated by each matrix processing chip may then be stored in the corresponding partition c1 - cp of result matrix C.

Moreover, in some embodiments, the partial matrix operations may be further distributed based on the hierarchical arrangement of processing resources. For example, the partial matrix operations distributed to a particular matrix processing chip may then be similarly distributed across the matrix processing clusters of that matrix processing chip (e.g., matrix processing clusters 230 of FIGURE 2B).
The partial matrix operations distributed to a particular matrix processing cluster may then be similarly distributed across the matrix processing units (MPUs) of that matrix processing cluster (e.g., matrix processing units (MPUs) 234 of FIGURE 2C).

While the partial operations are being performed by the matrix processing chips, each chip may simultaneously send and receive partial matrix data to and from its neighboring matrix processing chips, as described in connection with FIGURE 6B. However, for a backward propagation operation, the partitions of result matrix C (e.g., partitions c1 - cp) may be shifted across matrix processing chips during each stage of partial matrix operations. For example, in the illustrated embodiment, each partition c1 - cp of result matrix C is transmitted from its current chip to a neighboring chip.

In this manner, during the first stage of partial matrix operations, partial results are calculated and stored in the corresponding partition c1 - cp of result matrix C. Each partial result on partitions c1 - cp is then shifted to a neighboring chip, and each matrix processing chip may then use the partial result received from a neighboring chip for subsequent partial matrix operations, as described in connection with FIGURE 8C.

FIGURE 8C illustrates the second stage of the backward propagation operation. In the second stage, each matrix processing chip may perform a partial matrix multiplication operation using its respective partitions of matrices A and B. In some embodiments, the partial matrix operations may be further distributed based on the hierarchical arrangement of processing resources, as described in connection with FIGURE 8B.

As an example, the first chip may perform a partial matrix multiplication operation using partitions a13 and b1, the second chip may perform a partial matrix multiplication operation using partitions a24 and b2, and so forth.
The partial result calculated by each matrix processing chip may then be added to the current value of the result partition c1 - cp, which was previously received from a neighboring chip (as discussed in connection with FIGURE 8B). For example, partition c2 may have previously been shifted from the second chip to the first chip, and thus the first chip may now add that value of c2 to the partial result computed in the current stage (e.g., c2 = c2 + a13 ∗ b1).

While the partial operations are being performed by the matrix processing chips, each chip may simultaneously send and receive partial matrix data to and from its neighboring matrix processing chips, as described in connection with FIGURE 8B. For example, each matrix processing chip may send its current partition of result matrix C (e.g., partitions c1 - cp) to a neighboring chip. Thus, during each stage of partial matrix operations, partial matrix results (e.g., partitions c1 - cp) are shifted to a neighboring chip, and each matrix processing chip may then use the partial matrix result received from a neighboring chip for subsequent partial matrix operations. These stages of the matrix operation may continue in this manner until all partial results for result matrix C have been computed. In this manner, when all partial operations are complete, the partitions c1 - cp of result matrix C contain the result of the matrix operation AT ∗ B = C, allowing the matrix operation to be performed without having to transpose the elements of matrix A in memory.

FIGURE 9 illustrates a flowchart 900 for an example embodiment of distributed matrix operations. Flowchart 900 may be implemented, in some embodiments, by components described throughout this disclosure (e.g., the matrix processing architecture of FIGURES 2A-C).

The flowchart may begin at block 902 by receiving a command to perform a matrix operation.
The matrix operation may comprise an operation associated with a plurality of input matrices (e.g., matrix operands), such as one or more matrix multiplication operations. In some embodiments, the matrix operation may be associated with an operation in a neural network, such as a forward propagation operation, backward propagation operation, and/or weight update operation.

The flowchart may then proceed to block 904 to partition the input matrices into a plurality of partitions based on the number of available processing elements. In some embodiments, the input matrices may be partitioned based on the hierarchical arrangement of processing resources, as described further in connection with block 906. Moreover, in some embodiments, the input matrices may be partitioned across their rows.

The flowchart may then proceed to block 906 to distribute the partitions to the available processing elements. For example, in some embodiments, each partition may be distributed to a particular processing element. Moreover, in some embodiments, the processing elements may be configured in a hierarchical arrangement with a plurality of processing levels, and the matrix operation may be distributed across the hierarchy of processing levels. For example, the processing elements may include multiple matrix processing chips (e.g., matrix processing chips 220 of FIGURE 2A), multiple matrix processing clusters on each matrix processing chip (e.g., matrix processing clusters 230 of FIGURE 2B), and/or multiple matrix processing units (MPUs) on each matrix processing cluster (e.g., matrix processing units (MPUs) 234 of FIGURE 2C). In those embodiments, the matrix operation may first be partitioned and distributed across the matrix processing chips. The partial matrix operation distributed to a particular matrix processing chip may then be similarly partitioned and distributed across the matrix processing clusters of that matrix processing chip.
The partial matrix operation distributed to a particular matrix processing cluster may then be similarly partitioned and distributed across the matrix processing units (MPUs) of that matrix processing cluster.

The flowchart may then proceed to block 908 to perform partial matrix operations using the processing elements. For example, each processing element may perform a partial matrix operation based on the matrix data distributed to that processing element.

The flowchart may then proceed to block 910 to transmit partial matrix data between processing elements while performing the partial matrix operations. For example, in some embodiments, the processing elements may be configured in a cyclical arrangement such that each processing element is communicatively coupled to multiple neighbor processing elements. Moreover, the partial matrix operations may be performed in a plurality of stages, and each processing element may transmit partial matrix data to its neighbor processing elements while performing a particular stage of the partial matrix operations. For example, in some embodiments, each processing element may transmit partial matrix data to one of its neighbor processing elements (e.g., using a single-cyclical approach) or to both of its neighbor processing elements (e.g., using a dual-cyclical approach) during each stage of partial matrix operations. For example, a first processing element may use or calculate partial matrix data in a particular stage of the partial matrix operations, the first processing element may transmit the partial matrix data to a second processing element, and the second processing element may then use the partial matrix data in a subsequent stage of the partial matrix operations.
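For the case where the partial matrix data circulated in block 910 is a partial result matrix rather than a partial input matrix, as in the backward propagation operation described earlier, the pattern can be sketched as follows. This is a behavioral simulation under illustrative naming, not the disclosed implementation; stages run sequentially here, and a single-cyclical shift is assumed:

```python
def matmul(x, y):
    """Naive dense matrix multiply on lists of rows."""
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def transpose(x):
    """Transpose a list-of-rows matrix."""
    return [list(col) for col in zip(*x)]

def madd(x, y):
    """Element-wise matrix addition."""
    return [[u + v for u, v in zip(rx, ry)] for rx, ry in zip(x, y)]

def ring_matmul_transposed(a_parts, b_parts):
    """Simulate P chips computing C = A^T * B by circulating the result
    partitions c1 - cp rather than the operand partitions.

    a_parts[i] and b_parts[i] are the fixed row partitions of A and B held
    by chip i. At each stage, chip i adds (column block j of a_parts[i])^T
    * b_parts[i] into whichever result partition c_j it currently holds,
    then passes that partition to a neighboring chip.
    """
    p = len(a_parts)
    cblk = len(a_parts[0][0]) // p      # columns of A per result partition
    n = len(b_parts[0][0])              # columns of B
    c_parts = [[[0] * n for _ in range(cblk)] for _ in range(p)]
    held = list(range(p))               # result partition index each chip holds
    for stage in range(p):
        for i in range(p):
            j = held[i]
            a_block = [row[j * cblk:(j + 1) * cblk] for row in a_parts[i]]
            c_parts[j] = madd(c_parts[j], matmul(transpose(a_block), b_parts[i]))
        held = [held[(i + 1) % p] for i in range(p)]   # pass c_j to the neighbor
    return c_parts
```

Note that only which buffer moves between neighbors differs from the operand-shifting case; no transpose of A is ever materialized in memory, matching the stated property of the backward propagation operation.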
In some matrix operations, the partial matrix data may include a partial input matrix, while in other matrix operations the partial matrix data may include a partial result matrix.

The flowchart may then proceed to block 912 to determine a result of the matrix operation. For example, the result of the matrix operation may be determined based on the partial results collectively computed by the processing elements.

At this point, the flowchart may be complete. In some embodiments, however, the flowchart may restart and/or certain blocks may be repeated. For example, in some embodiments, the flowchart may restart at block 902 to continue receiving and processing commands to perform matrix operations.

The flowcharts and block diagrams in the FIGURES illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order or alternative orders, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The foregoing disclosure outlines features of several embodiments so that those skilled in the art may better understand various aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein.

All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including a central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package.
In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.

As used throughout this specification, the term "processor" or "microprocessor" should be understood to include not only a traditional microprocessor (such as Intel's® industry-leading x86 and x64 architectures), but also matrix processors, graphics processors, and any ASIC, FPGA, microcontroller, digital signal processor (DSP), programmable logic device, programmable logic array (PLA), microcode, instruction set, emulated or virtual machine processor, or any similar "Turing-complete" device, combination of devices, or logic elements (hardware or software) that permit the execution of instructions.

Note also that in certain embodiments, some of the components may be omitted or consolidated. In a general sense, the arrangements depicted in the figures should be understood as logical divisions, whereas a physical architecture may include various permutations, combinations, and/or hybrids of these elements. It is imperative to note that countless possible design configurations can be used to achieve the operational objectives outlined herein. Accordingly, the associated infrastructure has a myriad of substitute arrangements, design choices, device possibilities, hardware configurations, software implementations, and equipment options.

In a general sense, any suitably-configured processor can execute instructions associated with data or microcode to achieve the operations detailed herein. Any processor disclosed herein could transform an element or an article (for example, data) from one state or thing to another state or thing.
In another example, some activities outlined herein may be implemented with fixed logic or programmable logic (for example, software and/or computer instructions executed by a processor), and the elements identified herein could be some type of a programmable processor, programmable digital logic (for example, a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM)), an ASIC that includes digital logic, software, code, electronic instructions, flash memory, optical disks, CD-ROMs, DVD ROMs, magnetic or optical cards, other types of machine-readable mediums suitable for storing electronic instructions, or any suitable combination thereof.

In operation, a storage may store information in any suitable type of tangible, non-transitory storage medium (for example, random access memory (RAM), read only memory (ROM), field programmable gate array (FPGA), erasable programmable read only memory (EPROM), electrically erasable programmable ROM (EEPROM), or microcode), software, hardware (for example, processor instructions or microcode), or in any other suitable component, device, element, or object where appropriate and based on particular needs. Furthermore, the information being tracked, sent, received, or stored in a processor could be provided in any database, register, table, cache, queue, control list, or storage structure, based on particular needs and implementations, all of which could be referenced in any suitable timeframe. Any of the memory or storage elements disclosed herein should be construed as being encompassed within the broad terms 'memory' and 'storage,' as appropriate. A non-transitory storage medium herein is expressly intended to include any non-transitory special-purpose or programmable hardware configured to provide the disclosed operations, or to cause a processor to perform the disclosed operations.
A non-transitory storage medium also expressly includes a processor having stored thereon hardware-coded instructions, and optionally microcode instructions or sequences encoded in hardware, firmware, or software.

Computer program logic implementing all or part of the functionality described herein is embodied in various forms, including, but in no way limited to, hardware description language, a source code form, a computer executable form, machine instructions or microcode, programmable hardware, and various intermediate forms (for example, forms generated by an HDL processor, assembler, compiler, linker, or locator). In an example, source code includes a series of computer program instructions implemented in various programming languages, such as an object code, an assembly language, or a high-level language such as OpenCL, FORTRAN, C, C++, JAVA, or HTML for use with various operating systems or operating environments, or in hardware description languages such as Spice, Verilog, and VHDL. The source code may define and use various data structures and communication messages. The source code may be in a computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into a computer executable form, or converted to an intermediate form such as byte code. Where appropriate, any of the foregoing may be used to build or describe appropriate discrete or integrated circuits, whether sequential, combinatorial, state machines, or otherwise.

In one example, any number of electrical circuits of the FIGURES may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically.
Any suitable processor and memory can be suitably coupled to the board based on particular configuration needs, processing demands, and computing designs. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself. In another example, the electrical circuits of the FIGURES may be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application-specific hardware of electronic devices.

Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated or reconfigured in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the FIGURES may be combined in various possible configurations, all of which are within the broad scope of this specification. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of electrical elements. It should be appreciated that the electrical circuits of the FIGURES and their teachings are readily scalable and can accommodate a large number of components, as well as more complicated or sophisticated arrangements and configurations.
Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the electrical circuits as potentially applied to a myriad of other architectures. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims.
An integrated circuit (IC) device (400) includes an electromigration (EM) resistant feed line (401). The IC device includes a substrate (405) including active circuitry (409). A back end of the line (BEOL) metallization stack (420) includes an interconnect metal layer (412) that is coupled to a bond pad (419) by the EM resistant feed line. A bonding feature (435) is on the bond pad. The EM resistant feed line (401) includes a uniform portion (402) and a patterned trace portion (406) that extends to the bond pad and includes at least three sub-traces that are electrically in parallel. The sub-traces are sized so that a number of squares associated with each of the sub-traces is within a range of a mean number of squares for the sub-traces plus or minus twenty percent, or so that a current density provided to the bonding feature through each sub-trace is within a range of a mean current density provided to the bonding feature plus or minus twenty percent.
CLAIMS What is claimed is: 1. An integrated circuit device, comprising: a substrate having a top surface including active circuitry configured to provide a circuit function; a metallization stack including an interconnect metal layer that includes at least one trace coupled to a bond pad; and a bonding feature on said bond pad; wherein said trace includes a uniform trace portion and a patterned trace portion that extends to said bond pad, wherein said patterned trace portion includes at least three sub-traces that are electrically in parallel to one another, and wherein at least one of: (i) a number of squares for each of said sub-traces is within a range of a mean number of squares for said sub-traces plus or minus twenty percent, and (ii) a current density provided to said bonding feature conducted through each of said sub-traces is within a range of a mean current density provided to said bonding feature plus or minus twenty percent. 2. The device of claim 1, wherein said sub-traces contact said bond pad along its periphery, and wherein said sub-traces have substantially equal separation from one another along said periphery. 3. The device of claim 1, wherein said sub-traces include at least 10 sub-traces. 4. The device of claim 1, wherein said metallization stack includes a top metal layer and said interconnect metal layer is beneath said top metal layer, and wherein said bond pad comprises said top metal layer. 5. The device of claim 1, wherein said bonding feature comprises a solder bump, a through substrate via, a pillar, a stud, or an organic bonding material having a plurality of metal particles therein. 6. The device of claim 1, further comprising a plurality of vias in a dielectric layer between respective ones of said sub-traces and said bonding feature. 7. 
The device of claim 1, further comprising a current spreading layer between said bond pad and said bonding feature and a dielectric layer between said bond pad and said current spreading layer, wherein said dielectric layer includes a plurality of vias that provide separate contacts between said bond pad and said current spreading layer. 8. The device of claim 7, wherein said sub-traces are sized so that a number of squares associated with paths provided by at least one of said sub-traces is outside a range of a mean number of squares for said sub-traces plus or minus twenty percent, and wherein respective ones of said vias are sized to provide said (ii). 9. The device of claim 1, wherein said at least one trace comprises a plurality of independent traces including a first trace including a first patterned trace portion dividing into at least three sub-traces, and a second trace including a second patterned trace portion dividing into at least three second sub-traces. 10. The device of claim 1, wherein said bonding feature comprises a solder bump. 11. 
An integrated circuit device, comprising: a substrate having a top surface including active circuitry configured to provide a circuit function; a metallization stack including a top metal layer and an interconnect metal layer beneath said top metal layer, wherein said interconnect metal layer includes at least one trace that couples to a bond pad comprising said top metal layer; an under bump metallization layer formed on said bond pad; and a solder bump formed on said under bump metallization layer; wherein said trace includes a patterned trace portion that extends to said bond pad, wherein said patterned trace portion divides into at least three sub-traces that are electrically in parallel to one another, and wherein at least one of: (i) a number of squares for each of said sub-traces is within a range of a mean number of squares for said sub-traces plus or minus twenty percent, and (ii) a current density provided to said bonding feature conducted through each of said sub-traces is within a range of a mean current density provided to said bonding feature plus or minus twenty percent. 12. 
A method of forming an integrated circuit device, comprising: providing a substrate having a top surface including active circuitry; forming a metallization stack including an interconnect metal layer that includes at least one trace that includes a patterned trace portion comprising at least three sub-traces that are electrically in parallel to one another; forming a bond pad that is coupled to said at least three sub-traces; and forming a bonding feature on said bond pad; wherein at least one of: (i) a number of squares for each of said sub-traces is within a range of a mean number of squares for said sub-traces plus or minus twenty percent, and (ii) a current density provided to said bonding feature conducted through each of said sub-traces is within a range of a mean current density provided to said bonding feature plus or minus twenty percent. 13. The method of claim 12, wherein said forming said metallization stack includes forming a top metal layer, wherein said interconnect metal layer is beneath said top metal layer, and wherein said bond pad comprises said top metal layer. 14. The method of claim 12, wherein said sub-traces contact said bond pad along its periphery, and wherein said sub-traces have substantially equal separation to neighboring ones of said sub-traces along said periphery. 15. The method of claim 12, further comprising forming a current spreading layer between said bond pad and said bonding feature, forming a dielectric layer between said bond pad and said current spreading layer, and forming a plurality of vias in said dielectric layer that each provide separate contacts between said bond pad and said current spreading layer.
IC DEVICE HAVING ELECTROMIGRATION RESISTANT FEED LINE STRUCTURES [0001] Disclosed embodiments relate to integrated circuit (IC) devices that include feed line structures that improve electromigration (EM) performance. BACKGROUND [0002] ICs generally comprise a substrate, active circuitry formed on the topside of the substrate, and a back end of the line (BEOL) structure including alternating metal wiring layers and interlevel dielectric (ILD) layers above the active circuitry. The metal wiring layers comprise various interconnects that provide electrical connections between the active circuitry and external connections. Solder bumps (or solder balls) are commonly utilized to provide a connection between the last (e.g., top) metal wiring level of a semiconductor device and another device, such as from a node in the active circuitry, or in situations where the interconnect plays a passive role and the solder bump is simply part of a pass-through (e.g., for a stacked die/package). A common type of solder bump is the controlled collapse chip connection (C4) solder bump, often used for joining flip chip devices. [0003] As dimensions of features (e.g., pads, wires, interconnects, vias) shrink to create smaller devices, the maximum allowable current density decreases rapidly due to EM-based constraints imposed for reliability. EM is a known phenomenon in which atoms of a metal feature are displaced due to the electrical current passing through the metal feature. [0004] IC devices such as flip chip devices are requiring higher and higher current carrying capabilities, sometimes to the level of 10 amps or more. Solder is known to have a significantly lower current density handling ability as compared to conventional metal interconnects, such as copper and aluminum. For example, solder has a relatively low EM current limit (e.g., the typical EM-limited current density for conventional solder is around 10⁴ A/cm², about one hundred times lower than that of copper and aluminum). 
The current carrying capability of each flip chip solder bump sets the minimum number of solder bumps used to supply this current to limit the current density through the solder bumps due to EM constraints. The conventional flip chip solder bump process suffers from a current distribution non-uniformity over the cross sectional area of the solder bump which accelerates the EM-based degradation of the solder and causes failures earlier than for the case where the current distribution is more uniform. [0005] One example of a conventional flip chip bump arrangement includes a copper feed line to an aluminum bond pad formed from a top metal layer, a dielectric (e.g., polyimide) layer including an opening (dielectric opening) over the pad, a thick (e.g., 2 μm thick) nickel under bump metallization (UBM) layer over the dielectric layer and the dielectric opening, and a solder bump over the UBM. This arrangement suffers from significant current non-uniformity across the cross sectional area of the solder bump. [0006] For a solder bump with a feed line current coming from one side, the peak current in the solder bump area adjacent to the UBM may exist over a portion of the cross sectional area that is only about 10% of the overall cross sectional area of the solder bump. This is the current crowded region in the solder bump that voids first due to exceeding the EM current density limit of solder. Once this region voids, the solder area next to it will carry the peak current distribution and will void next. This voiding pattern will continue until the whole solder bump over the dielectric opening becomes voided. At this time the outer annulus of the UBM over the dielectric will begin to void, and eventually an open circuit will result. [0007] One known solution to this problem involves adding a thick copper stud in the UBM which helps spread current across the cross sectional area of the solder bump. 
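As a rough illustration of how the EM current limit sets the minimum bump count described above, the following sketch works through the arithmetic. Only the ~10⁴ A/cm² solder limit and the 10 A total-current figure come from the text; the 50 μm bump diameter is a hypothetical value chosen for illustration.

```python
import math

# Minimal sketch: the EM-limited current density of solder caps the
# current each bump may safely carry, which in turn sets the minimum
# number of bumps needed to deliver a given total current.
# The 1e4 A/cm^2 limit and the 10 A total are from the text; the
# 50 um bump diameter is hypothetical.

def min_bump_count(total_current_a, bump_diameter_um,
                   em_limit_a_per_cm2=1.0e4):
    radius_cm = (bump_diameter_um / 2.0) * 1e-4   # um -> cm
    area_cm2 = math.pi * radius_cm ** 2           # bump cross section
    max_current_per_bump = em_limit_a_per_cm2 * area_cm2
    return math.ceil(total_current_a / max_current_per_bump)

print(min_bump_count(total_current_a=10.0, bump_diameter_um=50.0))  # 51
```

This also shows why current crowding matters: if only ~10% of the cross section actually carries the current, the effective area shrinks tenfold and the required bump count grows accordingly.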
This known solution adds a process step and is only minimally effective since it cannot render uniform current density for typical stud dimensions. There is thus a need for new feed line to bonding feature arrangements that allow the current to be more uniform over the cross sectional area of the solder bump or other bonding feature without adding a process step or significantly increasing the area required to implement the feed line structure. SUMMARY [0008] Disclosed embodiments describe integrated circuit (IC) devices that have electromigration (EM) resistant feed line structures to the bonding features that force the current flowing into the bonding feature to be more uniform across its cross sectional area. Such current spreading embodiments solve or at least significantly reduce EM-induced voiding in bonding features, such as solder bumps. [0009] By dividing the feed line trace to the bonding feature into at least three electrically parallel sub-trace paths, with the respective sub-trace paths having at least one of (i) appropriate line sizings to make the plurality of feed currents substantially equal currents (i.e., longer lines are wider, and shorter lines are narrower) and (ii) a current density provided to the bonding feature conducted through each of the sub-traces being substantially equal, higher total current levels can be handled by the bonding feature without EM-based problems due to better distribution of current (less current crowding) across the cross sectional area of the bonding feature. Disclosed embodiments do not generally add any process steps. [0010] For example, disclosed feed structures can replace a conventional single incoming feed line trace (e.g., a 10 micron wide trace from end to end) by a trace that includes a patterned trace portion comprising a plurality of sub-traces (e.g., eight, twelve, sixteen or even more sub-traces). 
In one embodiment, the current density provided to the bonding feature conducted through each of the sub-traces is substantially equal. As used herein, a "substantially equal current density" provided to the bonding feature conducted through each of the sub-traces refers to the current densities each being within a range of the mean current density provided to the bonding feature plus or minus twenty percent. [0011] In another embodiment, the sub-traces have different widths and different lengths, where the respective sub-traces each have a substantially equal number of squares. As used herein, a "substantially equal number of squares" produces substantially equal sub-trace currents and refers to a number of squares associated with the paths provided by each of the sub-traces all being within a range of a mean number of squares for the sub-traces plus or minus twenty percent, and in one embodiment is within a range of a mean number of squares for the sub-traces plus or minus ten percent. [0012] In an embodiment referred to herein as the edge feed embodiment, the sub-traces can be distributed so that the area under the edge (perimeter) of the bonding feature over a dielectric opening has an equal distribution of feed line sub-trace contacts; that is, the separation (spacing) between each feed line sub-trace and its neighbors under the bond pad is substantially uniform. In this embodiment "substantially equal separation" refers to the distances along the perimeter between the sub-traces all being within a range of a mean perimeter spacing distance for the plurality of sub-traces plus or minus twenty percent. Since the number of squares and thus the resistance of each feed line sub-trace can be substantially equal in this embodiment, the current in the uniform trace portion will divide itself substantially equally amongst each of the sub-trace paths to the bonding feature available to it. 
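The "substantially equal number of squares" criterion above can be expressed compactly in code. This is an illustrative sketch, not from the patent: the number of squares of a uniform trace is its length divided by its width (sheet resistance scales with it), and the ±20%-of-mean check is the definition given in the text. The sub-trace dimensions are hypothetical.

```python
# Illustrative sketch of the "number of squares" sizing criterion.
# A uniform trace of length L and width W has L/W squares, and its
# resistance is the sheet resistance times that square count.

def squares(length_um, width_um):
    """Number of squares of a uniform trace (dimensionless)."""
    return length_um / width_um

def within_twenty_percent_of_mean(values):
    """'Substantially equal' per the text: every value lies within
    plus or minus twenty percent of the mean of all values."""
    mean = sum(values) / len(values)
    return all(0.8 * mean <= v <= 1.2 * mean for v in values)

# Hypothetical sub-trace (length, width) pairs in microns: the
# longest sub-trace is also the widest and the shortest is the
# narrowest, so the square counts (hence resistances) match.
sub_traces = [(40.0, 4.0), (20.0, 2.0), (30.0, 3.1)]
sq = [squares(l, w) for (l, w) in sub_traces]
print(sq)                                 # ~[10.0, 10.0, 9.68]
print(within_twenty_percent_of_mean(sq))  # True
```

Because the sub-trace resistances are matched, the current in the uniform trace portion divides nearly equally among the parallel sub-trace paths, as the paragraph above describes.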
Thus, for the edge feed embodiment the periphery under the bonding feature will see a uniform current into it and a more uniform current distribution in the bonding feature (e.g., solder bump) is generally achieved. [0013] In an embodiment referred to herein as the area feed embodiment, substantially the full area of the bonding feature is fed by current. In this embodiment, the bond pad has vias distributed over substantially the full area under or over the bond pad. In this embodiment, a via pattern can be provided in the dielectric layer over the bond pad (e.g., between the bond pad and a UBM pad), or the via pattern can be in the dielectric under the bond pad (e.g., between the feed line sub-traces and the bond pad). The area feed embodiment may also be combined with the edge feed embodiment. BRIEF DESCRIPTION OF THE DRAWINGS [0014] FIG. 1A shows a depiction of an example electromigration (EM) resistant feed line structure having at least three sub-traces that provide substantially equal sub-trace currents having a substantially equal distribution over the periphery of a dielectric opening under the bonding feature, according to an example embodiment. [0015] FIG. 1B shows a depiction of an example EM resistant feed line structure having at least three sub-traces that provide both substantially equal sub-trace currents and substantially equal sub-trace current densities, as well as substantially equal distribution over the periphery of a dielectric opening under the bonding feature, according to an example embodiment. [0016] FIG. 1C shows a depiction of an example EM resistant feed line structure having a single metal layer that provides both the plurality of sub-traces and the bond pad, where the sub-traces provide substantially equal sub-trace currents to feed the bond pad along its periphery, according to an example embodiment. [0017] FIG. 
2A shows a depiction of an example EM resistant feed line structure having at least three sub-traces that provide substantially equal sub-trace currents coupled in a contact region to a bond pad having vias thereon distributed across a full area of a bonding feature above the bond pad, according to an example embodiment. [0018] FIG. 2B shows a depiction of an example EM resistant feed line structure having at least three sub-traces that are coupled in a contact region to a bond pad having vias thereon distributed across a full area of a bonding feature above the bond pad, wherein the sub-traces have a number of squares outside a range of a mean number of squares for the sub-traces plus or minus twenty percent, and the vias are sized to provide substantially equal current densities to the bonding feature, according to an example embodiment. [0019] FIG. 3 shows a depiction of an example EM resistant feed line structure having a first independent feed line comprising a uniform trace portion and a second independent feed line comprising a uniform trace portion, both being coupled to the same bond pad, wherein the feed lines each comprise a patterned trace portion including four sub-traces sized to provide substantially equal sub-trace currents, according to an example embodiment. [0020] FIG. 4 shows an example IC device including a substrate having active circuitry, a back end of the line (BEOL) metallization stack including an interconnect metal layer including a disclosed EM resistant feed line structure comprising at least three sub-traces that provide substantially equal sub-trace currents coupled to a bond pad comprising a top metal layer, and a bonding feature on the bond pad, according to an example embodiment. [0021] FIG. 
5 shows a stacked IC device comprising an IC die having a disclosed EM resistant feed line structure comprising at least three sub-traces that provide substantially equal sub-trace currents bonded to a substrate by a joint that comprises a metal/organic bonding material, according to an example embodiment. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS [0022] FIG. 1A shows an example EM resistant feed line structure 100 providing edge feed having at least three sub-traces that provide substantially equal sub-trace currents having a substantially equal distribution over the periphery of a dielectric opening 115 under a bonding feature, according to an example embodiment. Although one large dielectric opening 115 is shown in FIG. 1A, disclosed embodiments can instead include an array of smaller dielectric openings, or both, under a later formed bonding feature. The bonding features are generally described herein as being solder bumps. However, the bonding features can also comprise through substrate vias (TSVs), pillars (e.g., copper pillars), studs (e.g., gold studs), or an organic bonding material having a plurality of metal particles therein. [0023] EM resistant feed line structure 100 comprises a uniform (i.e., conventional) trace portion 102 coupled to a patterned trace portion 105 comprising at least three sub-traces 105(a), 105(b), 105(c), etc. that are electrically in parallel and distributed to provide electrical contact along the periphery over the dielectric opening 115 shown that is under the later formed metal stack (not shown) including a bonding feature on a bond pad. The later formed metal stack will be over dielectric opening 115, which in one particular embodiment can comprise a solder ball/Ni under bump metallization (UBM)/Al bond pad. [0024] Substantially equal sub-trace currents are provided by EM resistant feed line structure 100 because the respective plurality of feed line sub-traces 105(a), 105(b), 105(c), etc. 
are sized so that a number of squares associated with the respective paths provided by the sub-traces are all within a range of a mean number of squares for the plurality of sub-traces plus or minus twenty percent (± 20%). It can be seen that sub-trace 105(a), which is the longest sub-trace shown in FIG. 1A, is also the widest sub-trace, sub-trace 105(b), which is the shortest sub-trace shown, is also the narrowest sub-trace, with sub-trace 105(c) being a sub-trace shown having an intermediate length and an intermediate line width. It is noted that although the sub-traces in patterned trace portion 105 are all shown having a constant line width along their respective path lengths, the line widths need not be constant to provide substantially equal sub-trace currents provided the resulting number of squares for the respective sub-traces are within the numerical range as described above. [0025] FIG. 1A demonstrates that the edge feed embodiment takes up little extra metallization area as compared to a conventional single feed line arrangement for coupling to a bonding feature (e.g., solder bump). Since the edge feed embodiment shown in FIG. 1A provides improved current spreading in the overlying bonding feature (e.g., solder bump), this embodiment allows a reduced area for the bonding feature to be used to yield an overall smaller metal area requirement in the upper metal layers of the device for the same EM current performance, thus providing a cost savings for the IC device. [0026] Applied to wafer chip scale packages (WCSPs), the uniform trace portion 102 and patterned trace portion 105 can both be formed from the redistribution layer (RDL). In this embodiment patterned trace portion 105 couples to an RDL pad that is over a bond pad on the IC, while a UBM pad can be on the RDL pad, and a solder bump can be on the UBM pad. In this embodiment, the dielectric opening 115 can be an opening in the dielectric between the RDL and the UBM, such as an opening in a polyimide layer. 
[0027] As described above, some disclosed embodiments can provide both substantially matched sub-currents and substantially matched current densities provided to the bonding feature conducted through each of the sub-traces. For example, FIG. 1B shows a depiction of an example EM resistant feed line structure 130 having at least three sub-traces 125(a), 125(b), 125(c) that provide both substantially equal sub-trace currents and substantially equal sub-trace current densities, while also providing a substantially equal physical distribution over the periphery of the dielectric opening 115 under the bonding feature, according to an example embodiment. As known in the art, the current density (J) going into the dielectric opening 115 under the bonding feature is found by dividing the current (I) at the sub-trace contact by the area (A) of the contact, and is given by J = I/A. It can be seen that a narrow sub-trace such as sub-trace 125(b) is significantly widened at its distal end 125(b)(1) that extends into dielectric opening 115, as compared to a longer sub-trace such as sub-trace 125(a), which shows no widening of its distal end 125(a)(1), so that the widths of the respective sub-traces at their distal ends that extend into dielectric opening 115 are the same, or about the same. Sub-trace 125(c), which has an intermediate line width, shows moderate widening of its distal end 125(c)(1). Since the currents in the respective sub-traces are matched to one another, and the areas at their contacts are also the same, current density matching is provided. [0028] FIG. 1C shows an example EM resistant feed line structure 150 comprising a uniform trace portion 162 coupled to a patterned trace portion 165 having at least three sub-traces 165(a), 165(b), 165(c), etc. that provide substantially equal sub-trace currents that feed the bond pad 170 having a substantially equal distribution along a periphery of the bond pad, according to an example embodiment. 
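The J = I/A relation above can be turned into a small matching check. This is an illustrative sketch with hypothetical currents and contact areas; only the relation itself and the ±20%-of-mean criterion come from the text.

```python
# Sketch of the current-density matching described above: with the
# sub-trace currents already matched, widening the distal ends so
# that every contact area is equal makes J = I / A equal as well.
# All numbers below are hypothetical.

def current_density(current_ma, area_um2):
    """J = I / A at a sub-trace contact (mA per square micron)."""
    return current_ma / area_um2

def matched(values, tolerance=0.2):
    """'Substantially equal': every value within +/-20% of the mean."""
    mean = sum(values) / len(values)
    return all(abs(v - mean) <= tolerance * mean for v in values)

currents = [5.0, 5.0, 5.0]   # matched sub-trace currents (mA)
areas = [12.0, 12.0, 12.0]   # equal contact areas after distal widening
densities = [current_density(i, a) for i, a in zip(currents, areas)]
print(matched(densities))    # True
```

If the narrow sub-trace were left unwidened (a smaller contact area), its density would exceed the tolerance band even with matched currents, which is exactly the condition the distal-end widening avoids.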
In this embodiment the feed line structure 150 and the bond pad 170 are all formed from the same metal layer. This metal layer can be a metal interconnect layer, a top metal layer, or an RDL. [0029] FIG. 2A shows an example EM resistant feed line structure 200 demonstrating the area feed embodiment having at least three sub-traces that provide substantially equal sub-trace currents coupled in a contact region 225 across the area of a bond pad, with a plurality of vias 230 formed in a dielectric layer on the bond pad 215, according to an example embodiment. EM resistant feed line structure 200 comprises a uniform trace portion 102 and a patterned trace portion 205 comprising a plurality of sub-traces 205(a), 205(b), 205(c), etc. The plurality of sub-traces 205(a), 205(b), 205(c), etc. are sized so that a number of squares associated with paths provided by each of the plurality of sub-traces are all within a range of a mean number of squares for the plurality of sub-traces plus or minus twenty percent. [0030] In the contact region 225 the respective sub-traces 205(a), 205(b), 205(c) can contact the bond pad 215 using a single dielectric opening (such as dielectric opening 115 shown in FIG. 1A) or a plurality of vias formed in a dielectric layer between the sub-traces 205(a), 205(b), 205(c) and the bond pad 215. The bond pad 215 has an example circular via pattern including vias 230 formed in a dielectric layer thereon at locations that define the respective effective bond pad portions 215(a), 215(b), 215(c). The outer rings 235(a), 235(b), 235(c) shown as dashed rings represent current spreading beyond the bond pad portions 215(a), 215(b), 215(c) as the feed current traverses from the bond pad portions to an example 2 micron thick nickel UBM layer (not shown) that may be over the bond pad 215 in the contact region 225. As depicted in FIG. 2A, almost the entire area of a UBM layer over the contact region 225 spreads current for a bonding feature (e.g. 
solder bump) that can be positioned thereon, virtually guaranteeing uniform current distribution over the full cross sectional area of the bonding feature (e.g., solder bump). [0031] In this embodiment, the top metal layer that the bond pad 215 comprises (e.g., an aluminum layer), which can connect the UBM to the patterned trace portion 205 of the underlying metal (e.g., copper) sub-traces, is effectively patterned. This patterning can be performed so that the openings over the bond pad 215 comprise an array of vias, which can be shaped in a variety of shapes including, but not limited to, round or square depending upon the metal patterning requirements. The area of the vias can be based on the thickness of the bond pad metal and the UBM metal thereon, so that the area of the vias increases as the thickness of the bond pad metal and the UBM metal increases. For example, in embodiments including a UBM on the bond pad 215, where the vias 230 are round, the diameter of the vias 230 can be twice the UBM thickness plus twice the bond pad metal thickness, plus or minus twenty percent. Thus, for a 1 micron thick bond pad 215 and a 2 micron thick UBM layer, the vias 230 can be six microns in diameter, plus or minus twenty percent. [0032] The spacing between adjacent vias 230 can also be based on the overlying metal thickness. For example, the via spacing can be set so that the maximum distance to the next via is twice the UBM metal thickness, plus or minus twenty percent. Thus, for a two micron thick UBM, the via-to-via distance can be four microns, plus or minus twenty percent. [0033] Each via 230 is thus fed by individual sub-traces 205(a), 205(b), 205(c) from uniform trace portion 102 in a manner such that the number of squares and thus the resistance of each sub-trace is substantially equal, but significantly higher than the sum of the resistance of the bonding feature stack (e.g., solder bump on UBM) plus the via resistance over the bond pad 215. 
Thus, for a conventional dielectric (e.g., polyimide) via opening between the bond pad 215 and the UBM (e.g., solder bump opening) of 35 microns in diameter, and a 1 micron bond pad layer (e.g., aluminum) and 2 micron UBM (e.g., nickel), a conventional single dielectric opening over the bond pad can be replaced by 14 six micron circular vias 230 with 14 individual feed line sub-traces as shown in FIG. 2A. The vias may be shaped asymmetrically to allow easier routing of the sub-traces 205(a), 205(b), 205(c) should this be helpful. The overall (summed) widths of the sub-traces 205(a), 205(b), 205(c) to the effective bond pad portions 215(a), 215(b), 215(c) can be made equal to that of the uniform trace portion 102 to minimize the total resistance to the bonding feature (e.g., solder bump). [0034] In another embodiment, a via pattern may be formed in the contact region 225 between the sub-traces and the bond pad 215, instead of vias over the bond pad 215 as shown in FIG. 2A, to yield a similar but enhanced current spreading feed line structure as compared to the feed line structure 200 shown in FIG. 2A. This embodiment has the advantage of additional current spreading as current traverses the thickness of bond pad 215 (e.g., 1 μm aluminum) before it reaches the UBM layer in embodiments including a UBM layer on the bond pad 215. [0035] The area feed embodiment shown in FIG. 2A for bonding features on a UBM provides a uniform current distribution over substantially the entire bonding feature cross-section at the UBM to the bonding feature interface. This embodiment takes up very little if any extra metallization area as compared to a standard bump feed structure and has the advantage of a smaller bump structure which lowers the overall area for the bonding feature. 
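The via-sizing rules in the paragraphs above reduce to two simple expressions. A minimal sketch (dimensions in microns; the 2 μm UBM / 1 μm bond pad numbers are the worked example from the text, and the ±20% tolerance is omitted for simplicity):

```python
# Sketch of the round-via sizing rules described above.

def via_diameter(ubm_thickness_um, pad_thickness_um):
    """Nominal round-via diameter: twice the UBM thickness plus
    twice the bond pad metal thickness (+/-20% tolerance allowed)."""
    return 2.0 * ubm_thickness_um + 2.0 * pad_thickness_um

def max_via_spacing(ubm_thickness_um):
    """Maximum distance to the next via: twice the UBM thickness."""
    return 2.0 * ubm_thickness_um

# Worked example from the text: 1 um bond pad, 2 um nickel UBM.
print(via_diameter(2.0, 1.0))    # 6.0  (six micron vias)
print(max_via_spacing(2.0))      # 4.0  (four micron via-to-via distance)
```

Both rules scale the via geometry with the thickness of the metal the current must spread through, which is why thicker bond pad and UBM layers permit larger, more widely spaced vias.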
Improved uniformity of the current distribution in the bonding feature and reduced area of the metal feed structure combine to yield an overall smaller metal area requirement in the upper metal layers of the IC device for the same current EM performance, thus leading to a cost savings for the IC device. [0036] FIG. 2B shows an example EM resistant feed line structure 250 having at least three sub-traces 255(a), 255(b) and 255(c) coupled in the contact region 225 to a bond pad 215, with a plurality of vias 230(a), 230(b) and 230(c) formed in a dielectric layer on the bond pad 215 that are distributed across an area of the bond pad 215, wherein the sub-traces have a number of squares outside a range of a mean number of squares for the sub-traces plus or minus twenty percent, and the vias are sized to provide substantially equal current densities to a bonding feature above the bond pad, according to an example embodiment. In this embodiment, the vias 230(a), 230(b) and 230(c) are sized so that the shorter sub-traces, such as 255(b), that result in higher currents are coupled to larger via areas, as compared to sub-traces that carry lower current, such as sub-trace 255(a), which are coupled to smaller via areas. The outer rings, shown as 235(a), 235(b), 235(c), which have sizes that reflect the sizes of their corresponding vias 230(a), 230(b) and 230(c), depict current spreading beyond the bond pad portions 215(a), 215(b), 215(c) as the current traverses from the bond pad portions to a metal layer thereon (not shown) that is typically over the bond pad 215 in the contact region 225. [0037] Disclosed embodiments can also be applied to IC designs where there are two or more independent feed lines (i.e., from different nodes on the IC) coupled to the same bonding feature (e.g., solder bump). Discretion may be used to determine whether the feeds should be combined to maximize current uniformity, or be split based upon expected current loading on each incoming line. 
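The per-line via division described above amounts to a proportional allocation. The following is a hypothetical helper sketching that idea (the function and its remainder-handling policy are assumptions for illustration, not from the text); with two feed lines carrying equal current, each receives half of the vias, matching the example in the text.

```python
# Sketch: divide the contact vias among independent feed lines in
# proportion to the expected current on each line, so the combined
# feed yields a uniform current distribution over the bonding
# feature. Remainder vias go to the largest fractional shares first.

def allocate_vias(total_vias, line_currents):
    """Assign an integer via count to each feed line, proportional
    to that line's expected current."""
    total_i = sum(line_currents)
    shares = [total_vias * i / total_i for i in line_currents]
    counts = [int(s) for s in shares]
    remainder = total_vias - sum(counts)
    order = sorted(range(len(shares)),
                   key=lambda k: shares[k] - counts[k], reverse=True)
    for k in order[:remainder]:
        counts[k] += 1
    return counts

print(allocate_vias(14, [1.0, 1.0]))   # [7, 7]  (equal currents)
print(allocate_vias(14, [2.0, 1.0]))   # [9, 5]  (2:1 current split)
```

The allocation always sums to the total via count, so no via is left unfed regardless of how unevenly the currents divide.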
Thus, for a uniform split, the independent feed lines can be tied together before being split into sub-traces. For the cases where the expected current from each independent feed line is known by design, the number of contact vias feeding the bonding feature may be divided per input line to yield a uniform current distribution over the area of the bonding feature. Thus, if there are two independent feed lines with equal current on each line, then half of the vias can be assigned to one of the feed lines and half of the vias to the other feed line. [0038] FIG. 3 shows an example EM resistant feed line structure 300 having a first independent feed line 310 comprising uniform trace portion 312 and a second independent feed line 320 comprising uniform trace portion 322 that are both coupled to a bond pad 215, wherein the feed lines 310 and 320 each comprise a patterned trace portion 315 and 325 that each comprise four sub-traces 315(a)-(d) and 325(a)-(d) that provide substantially equal sub-trace currents, according to an example embodiment. First feed line 310 and second feed line 320 are shown formed from different metal interconnects. First feed line 310 is shown formed from metal layer N (e.g., seventh level metal), while second feed line 320 is shown formed from metal layer N-1 (e.g., sixth level metal). First feed line 310 is shown feeding a feed current of I1, while second feed line 320 is shown feeding a current I2. I1 and I2 are generally not equal currents. [0039] FIG. 4 shows an example IC device 400 including a back end of the line (BEOL) metallization stack 420 comprising a top interconnect metal layer shown as METn 412 that includes a disclosed EM resistant feed line 401 comprising a uniform trace portion 402 and a patterned trace portion 406 comprising a plurality of sub-traces that couple to a bump pad 419 comprising METn (e.g., a copper bump pad), such as by a dielectric opening analogous to the dielectric opening 115 shown in FIG.
1A or a plurality of vias (not shown). BEOL stack 420 also includes first dielectric layer 431 and second dielectric layer 432. IC device 400 includes a substrate 405 having active circuitry 409, where a node 417 in the active circuitry is shown coupled to uniform trace portion 402 by a connection through the BEOL stack 420. [0040] A bond pad 415 formed from a top metal layer (e.g., aluminum) is on the bump pad 419, a UBM pad 418 that provides a current spreading layer is on bond pad 415, and a bonding feature shown as a solder bump 435 is on the UBM pad 418. Although METn 412 is shown in FIG. 4 providing the EM resistant feed lines, any of the metal interconnect layers on IC 400 may generally be used to provide EM resistant feed lines, such as underlying metal interconnect layers. It is noted that for certain bonding features, the UBM pad 418 shown may not be needed. For example, when the bonding feature comprises a copper pillar, the copper pillar can be formed directly on a copper bond pad. [0041] FIG. 5 shows a stacked IC device 500 comprising an IC die 510 having a disclosed EM resistant feed line structure in a flip chip arrangement coupled to a pad 511 bonded to a substrate 520 having a pad 521 by a joint 525 that comprises a metal/organic bonding material, according to an example embodiment. Disclosed embodiments may be particularly helpful for bonding materials with low EM resistance, such as the metal/organic bonding material shown, through the current spreading provided across substantially the entire cross-sectional area of the joint 525 by disclosed feed line structures. [0042] Disclosed embodiments can generally be applied to any feed line structure coupled to a bonding feature. A WCSP including a ball is only one example. Other feed structures that can benefit from disclosed embodiments include TSV to RDL to remote pad arrangements.
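The equal-current sub-trace sizing of paragraph [0033] and the per-feed-line via split of paragraph [0037] can be sketched numerically. This is a minimal illustration, not code from the patent: the function names and the largest-remainder rounding scheme are the author's assumptions, and the sizing rule relies on sheet resistance scaling with the number of squares (length/width) of each sub-trace.

```python
def equal_current_widths(lengths, total_width):
    """Size sub-trace widths so each sub-trace carries substantially the same
    current.  With a common sheet resistance, R_i is proportional to the
    number of squares L_i / W_i, so equal resistance from a common node
    requires W_i proportional to L_i.  The widths are scaled so their sum
    equals the width of the uniform trace portion, keeping the total feed
    resistance unchanged."""
    scale = total_width / sum(lengths)
    return [length * scale for length in lengths]

def split_vias(total_vias, currents):
    """Assign contact vias to independent feed lines in proportion to the
    expected current on each line, so the current density delivered to the
    bonding feature stays uniform.  Largest-remainder rounding distributes
    any leftover vias (an assumed rounding policy, not from the patent)."""
    total_current = sum(currents)
    quotas = [total_vias * i / total_current for i in currents]
    counts = [int(q) for q in quotas]
    leftover = total_vias - sum(counts)
    # hand the remaining vias to the lines with the largest fractional parts
    order = sorted(range(len(quotas)), key=lambda k: quotas[k] - counts[k],
                   reverse=True)
    for k in order[:leftover]:
        counts[k] += 1
    return counts
```

For the two-feed-line example above, `split_vias(14, [1, 1])` returns `[7, 7]`, matching the half-and-half assignment described for equal line currents.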
[0043] Simulations were performed to compare the EM performance, using the mean time to failure (MTTF) parameter obtained from Black's equation, for the EM resistant feed line structure 100 shown in FIG. 1A (edge feed) and the EM resistant feed line structure 200 shown in FIG. 2A (area feed), both with fourteen feed line sub-traces sized to provide substantially equal sub-trace currents coupled to a bond pad, with a 2 micron Ni UBM on the bond pad and a solder bump on the UBM, vs. two different reference structures. The first reference structure comprised fourteen feed line sub-traces equivalent to feed line structure 200 other than having the same uniform sub-trace line width throughout their lengths, and the second reference structure comprised a conventional single feed line arrangement with the same layer stack on the bond pad. Black's equation (shown below) is a mathematical model for the MTTF of a semiconductor circuit due to electromigration: MTTF = A·w·j^(-n)·exp(Q/(kT)), where A is a constant, j is the current density, n is a model parameter, Q is the activation energy in eV (electron volts), k is the Boltzmann constant, T is the absolute temperature in K, and w is the width of the metal line/wire. [0044] Based on simulations performed, the first reference structure having fourteen feed line sub-traces all having the same uniform line width over their respective lengths provided an improvement in solder lifetime of about 20 to 40% as compared to the conventional single feed line arrangement. In contrast, feed line structure 100 shown in FIG. 1A was found to provide an improvement in solder lifetime of 200 to 300%. The feed line structure 200 shown in FIG. 2A was found to provide an improvement in solder lifetime of more than an order of magnitude, i.e., >1,000%.
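The lifetime comparison above follows directly from the form of Black's equation: with the geometry, activation energy, and temperature held fixed, the MTTF ratio of two designs reduces to a power of the current-density ratio, which is why current spreading (lower peak j) improves lifetime so strongly. The sketch below is a minimal numeric illustration; the width factor is folded into the prefactor, and the exponent value n = 2 used in the example call is a common modeling assumption, not a value stated in this document.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(a_const, j, n, q_ev, temp_k):
    """Black's equation: MTTF = A * j**(-n) * exp(Q / (k*T)).
    a_const absorbs geometry such as the line width w."""
    return a_const * j ** (-n) * math.exp(q_ev / (K_BOLTZMANN_EV * temp_k))

def lifetime_ratio(j_ref, j_new, n=2.0):
    """MTTF(new) / MTTF(ref) when only the current density changes:
    the prefactor and exponential cancel, leaving (j_ref / j_new)**n."""
    return (j_ref / j_new) ** n
```

For example, halving the peak current density with n = 2 quadruples the predicted MTTF, so a feed structure that spreads current over the full bump cross-section can plausibly yield the multi-hundred-percent lifetime gains reported above.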
[0045] The magnitude of the MTTF performance impact found to be obtained by disclosed embodiments including sub-trace sizing for matching sub-trace currents evidenced an unexpected result that demonstrates criticality based on the magnitude of the improvement. Specifically, the 200 to 300% improvement in solder lifetime for the edge feed embodiment and the >1,000% improvement in solder lifetime for the area feed embodiment both represent a marked enough improvement over the results achieved from the conventional feed line structure, as well as the first reference structure, so as to be properly considered a difference in kind, rather than a difference of degree. [0046] The active circuitry formed on the wafer semiconductor substrate comprises circuit elements that may generally include transistors, diodes, capacitors, and resistors, as well as signal lines and other electrical conductors that interconnect the various circuit elements and are configured to provide an IC circuit function. As used herein, "provide an IC circuit function" refers to circuit functions from ICs that, for example, may include an application specific integrated circuit (ASIC), a digital signal processor, a radio frequency chip, a memory, a microcontroller and a system-on-a-chip, or a combination thereof. Disclosed embodiments can be integrated into a variety of process flows to form a variety of devices and related products. The semiconductor substrates may include various elements therein and/or layers thereon. These can include barrier layers, other dielectric layers, device structures, active elements and passive elements, including source regions, drain regions, bit lines, bases, emitters, collectors, conductive lines, conductive vias, etc. Moreover, disclosed embodiments can be used in a variety of semiconductor device fabrication processes including bipolar, CMOS, BiCMOS and MEMS processes.
[0047] Those skilled in the art to which this disclosure relates will appreciate that modifications may be made to the described embodiments and also that many other embodiments are possible within the scope of the claimed invention.
A host integrated circuit (101) is provided with an interrupt aggregator having a signal terminal for coupling to the signal end of an R-2R resistor ladder (102) that has a plurality of rungs corresponding to a plurality of peripheral devices. The interrupt aggregator is configured to process a voltage signal received at the signal terminal to identify any of the peripheral devices that intend to trigger an interrupt to a processor.
2. The integrated circuit of claim 1, wherein the processor is integrated within the integrated circuit.
3. The integrated circuit of claim 1, wherein the processor is external to the integrated circuit.
4. The integrated circuit of claim 3, further comprising a power management integrated circuit.
5. The integrated circuit of claim 1, further comprising a register configured to store the plurality of interrupt bits.
6. The integrated circuit of claim 1, wherein the logic circuit comprises an AND gate.
7. The integrated circuit of claim 1, further comprising: a ground terminal; a differential amplifier configured to compare a voltage of an internal node to a reference voltage; a three-terminal switch configured to couple the ground terminal to the internal node during a default state in which the voltage of the internal node equals the reference voltage; and a controller configured to drive the three-terminal switch to couple the ground terminal to ground responsive to the voltage of the internal node being less than the reference voltage.
8. The integrated circuit of claim 7, wherein the controller is further configured to enable the analog-to-digital converter responsive to the voltage of the internal node being less than the reference voltage.
9. The integrated circuit of claim 7, wherein the reference voltage is a power supply voltage.
10. The integrated circuit of claim 7, further comprising: a signal terminal; and a summing amplifier configured to sum a digital contribution from each peripheral device received at the signal terminal to provide the voltage signal.
11. The integrated circuit of claim 10, wherein the summing amplifier includes a variable feedback resistor having a variable feedback resistance, and wherein the controller is further configured to vary the variable feedback resistance.
12. A method, comprising: at a host integrated circuit, receiving a voltage signal at a signal terminal coupled to a plurality of peripheral devices through an R-2R resistor ladder, wherein the voltage signal has a binary-weighted digital value responsive to whether each peripheral device is in an interrupting state or in a default state; converting the received voltage signal into an analog voltage signal proportional to the digital value; digitizing the analog voltage signal into a plurality of interrupt bits corresponding to the plurality of peripheral devices; and processing the interrupt bits to identify whether at least one of the peripheral devices is in the interrupting state.
13. The method of claim 12, further comprising: interrupting a processor with an identity of the at least one interrupting peripheral device.
14. The method of claim 12, wherein converting the received voltage signal into an analog voltage signal comprises amplifying the received voltage signal in a summing amplifier.
15. The method of claim 12, further comprising: monitoring a ground end voltage of a ground end of the R-2R ladder to determine whether the ground end voltage is less than a reference voltage; and grounding the ground end of the R-2R ladder responsive to a determination that the ground end voltage is less than the reference voltage.
16. The method of claim 15, wherein monitoring the ground end voltage comprises monitoring the ground end voltage through a differential amplifier.
17. The method of claim 15, wherein processing the interrupt bits comprises a logical AND operation.
18. The method of claim 13, wherein interrupting the processor comprises interrupting the processor within the host integrated circuit.
19. The method of claim 13, wherein interrupting the processor comprises interrupting the processor external to the host integrated circuit.
20. An integrated circuit, comprising: an analog-to-digital converter configured to digitize a voltage signal into a plurality of interrupt bits corresponding to a plurality of peripheral devices, wherein each interrupt bit has a binary value responsive to an interrupt state of a corresponding peripheral device; means for processing the interrupt bits to identify whether at least one of the peripheral devices is in the interrupt state; a processor; and an interrupt interface configured to interrupt the processor responsive to the identification of the at least one peripheral device in the interrupt state.
21. The integrated circuit of claim 20, wherein the integrated circuit comprises a system-on-a-chip.
22. The integrated circuit of claim 20, wherein the integrated circuit is included in a system comprising: an R-2R resistor ladder having a plurality of rungs; and a plurality of peripheral devices corresponding to the plurality of rungs, wherein each peripheral device couples to the R-2R resistor ladder through a corresponding rung.
23. The integrated circuit of claim 22, wherein each peripheral device is configured to ground its corresponding rung of the R-2R resistor ladder responsive to the peripheral device being in the interrupt state.
24. The integrated circuit of claim 23, wherein each peripheral device is configured to charge its corresponding rung of the R-2R ladder to a reference voltage responsive to the peripheral device being in a default state in which the peripheral device does not intend to trigger an interrupt to the processor.
25. The integrated circuit of claim 22, wherein the R-2R ladder comprises a plurality of R-2R ladders.
WO 2017/099897 PCT/US2016/058715 Digital Aggregation of Interrupts from Peripheral Devices CROSS REFERENCE TO RELATED APPLICATIONS [0001] This application claims priority to the filing date of U.S. Patent Application No. 14/965,511, filed December 10, 2015, which is hereby incorporated by reference in its entirety. TECHNICAL FIELD [0002] This application relates to integrated circuit signaling, and more particularly to a digital aggregation of interrupts from peripheral devices. BACKGROUND [0003] A host integrated circuit such as a system-on-a-chip (SoC) is typically integrated with a plurality of peripheral devices that can each trigger an interrupt to the SoC's processor. To accommodate the interrupt processing, a general purpose input/output (GPIO) architecture may be used in which the SoC includes a unique GPIO pin for each peripheral device's interrupt signal. The SoC then immediately determines the identity of the interrupting peripheral through the identity of the corresponding GPIO pin. Although interrupt processing latency is thus reduced, direct GPIO embodiments suffer from the resulting increased pin count, as the SoC must then have a dedicated GPIO pin for each peripheral device. [0004] The SoC pin count may be reduced at the cost of increasing latency in a conventional open-drain embodiment for a host integrated circuit in which the interrupts from a plurality of peripheral devices are all aggregated onto a common pin to the SoC. The default state of the common pin is typically logic high, such as through a weak pull-up device. Should a peripheral device want to trigger an interrupt through the common pin, the peripheral device overcomes the weak pull-up device to discharge the common pin voltage to ground.
Although just a single common pin can thus service multiple peripherals in an open-drain implementation, the SoC must then poll the peripheral devices to determine which device originated the interrupt, which increases interrupt processing latency. [0005] To reduce interrupt latency, a row-column matrix approach may be used in which the peripheral devices are arranged with regard to a matrix of row and column wires or signal leads. Each peripheral device couples between a corresponding row and column lead. For example, a matrix of leads formed into three rows and three columns may couple to nine peripheral devices. A first peripheral device couples to the intersection of a first row and a first column, a second peripheral device couples to the intersection of the first row and a second column, and so on, such that a ninth peripheral device couples to the intersection of a third row and a third column. Each row couples to a corresponding GPIO pin on the host device. Similarly, each column couples to a corresponding GPIO pin on the host device. In a matrix having m columns and n rows, the host device would thus need to devote the sum of (m + n) GPIO pins for coupling to the matrix. Although the number of necessary GPIO pins is reduced as compared to a direct GPIO architecture, row-column matrix architectures still consume a substantial number of GPIO pins. Moreover, only two peripheral devices may trigger an interrupt at any given time, as additional interrupts from other peripheral devices cannot be uniquely identified in a row-column matrix approach.
Finally, the processing of the row and column GPIO signals at the host device is complex and consumes substantial power. [0006] Accordingly, there is a need in the art for digital input aggregation architectures that accommodate the processing of interrupts from multiple peripheral devices with reduced latency and also reduced pin count. SUMMARY [0007] An interrupt aggregator is provided for a host integrated circuit to aggregate any interrupts from a plurality of peripheral devices. The interrupt aggregator couples to a signal end of an R-2R resistor ladder through a host integrated circuit signal terminal. Similarly, the interrupt aggregator couples to a ground end of the R-2R resistor ladder through a host integrated circuit ground terminal. The R-2R resistor ladder has a plurality of rungs corresponding to the plurality of peripheral devices. Each peripheral device couples to the R-2R resistor ladder through its corresponding rung. In a default state, each peripheral device charges its rung of the R-2R ladder to a reference voltage. Should a peripheral device need to trigger an interrupt to a processor, the peripheral device grounds its rung. Each peripheral device thus may be represented by a corresponding interrupt bit having a binary value of zero or one that depends on whether the peripheral device is in the default or interrupting state. There is thus a plurality of interrupt bits corresponding to the plurality of peripheral devices. [0008] The interrupt aggregator includes an analog-to-digital converter configured to digitize an interrupt signal derived from a voltage of the signal terminal to recover the interrupt bits responsive to whether each peripheral device is in the default state or the interrupting state. The interrupt aggregator is configured to process the interrupt bits to identify whether at least one of the peripheral devices is in the interrupting state.
Should the identification be positive, the interrupt aggregator is configured to interrupt the processor with the identity of the at least one peripheral device in the interrupting state. BRIEF DESCRIPTION OF THE DRAWINGS [0009] Figure 1A illustrates an example integrated interrupt aggregation system having a single resistor ladder in accordance with an aspect of the disclosure. [0010] Figure 1B illustrates an example integrated interrupt aggregation system having a pair of resistor ladders in accordance with an aspect of the disclosure. [0011] Figure 2 illustrates an example distributed interrupt aggregation system in accordance with an aspect of the disclosure. [0012] Figure 3 is a flowchart for an example method of interrupt aggregation in accordance with an aspect of the disclosure. [0013] Embodiments of the disclosure and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures. DETAILED DESCRIPTION [0014] To reduce pin count and interrupt processing latency, a host integrated circuit is provided that aggregates one or more interrupts from a plurality of peripheral devices through an external R-2R ladder network onto a host integrated circuit signal terminal. After aggregating the interrupts, the host integrated circuit proceeds to trigger an interrupt to a processor and to provide the processor with the identity of the interrupting one (or ones) of the peripheral devices. The interrupted processor may be integrated within the host integrated circuit, such as in a system-on-a-chip (SoC). Such an embodiment is designated herein as an integrated interrupt aggregation system. Alternatively, the processor may be located separately from the host integrated circuit in what is denoted herein as a distributed interrupt aggregation system.
For example, the host integrated circuit may be a power management integrated circuit (PMIC) that aggregates the intended interrupts from the peripheral devices, triggers an interrupt to a processor in an SoC, and also identifies the interrupting one (or ones) of the peripheral devices to the processor. [0015] Each peripheral device couples to the R-2R resistor ladder through a corresponding rung or terminal. There is thus a unique rung on the R-2R resistor ladder for each peripheral device. Each peripheral device has a default state in which the peripheral device does not intend to trigger an interrupt to a processor. While in the default state, each peripheral device is configured to charge its rung of the R-2R resistor ladder to a reference voltage such as a power supply voltage that is substantially the same for all the peripheral devices. Conversely, each peripheral device has an interrupt state in which the peripheral device intends to trigger an interrupt to the processor. While in the interrupt state, each peripheral device is configured to ground its rung of the R-2R resistor ladder. For example, a peripheral device may include a sensor that has sensed a condition that the processor needs to be alerted to through an interrupt. The corresponding peripheral device would then change from its default state of charging its rung of the R-2R resistor ladder to grounding its rung. Advantageously, the interrupt aggregation discussed herein can uniquely identify each such interrupting peripheral device regardless of how many peripheral devices at any given time are in the default state or have transitioned into the interrupting state. Moreover, this identification of all interrupting peripheral devices requires just two pins or terminals at the host integrated circuit for coupling to the two ends of the R-2R resistor ladder. [0016] With regard to the ends of the R-2R resistor ladder, there is a signal end and a ground end.
The host integrated circuit includes a signal terminal for coupling to the signal end of the R-2R resistor network and a ground terminal for coupling to the ground end of the R-2R resistor network. When all the peripheral devices are in the default state, both the ground end and the signal end of the R-2R resistor ladder are charged to the reference voltage. Should a peripheral device transition into the interrupting state, it proceeds to ground its rung of the R-2R resistor ladder. This grounding reduces the voltage of the signal end and the ground end of the R-2R resistor ladder from the reference voltage. The host integrated circuit is configured to monitor the voltage of its ground terminal to detect this voltage change. Should the host integrated circuit detect that its ground terminal has dropped below the reference voltage, it proceeds to ground its ground terminal so as to ground the ground end of the R-2R resistor ladder. If all the peripheral devices are in the default state, the host integrated circuit couples its ground terminal to a high-impedance input of a differential amplifier to monitor the ground terminal voltage for any subsequent transitions of the peripheral devices into the interrupting state. [0017] The plurality of peripheral devices may include a number n of such peripheral devices, n being a positive integer. The R-2R resistor ladder thus has n rungs for coupling to the n peripheral devices. In addition, the interrupt or default state of each peripheral device may be represented by a corresponding interrupt bit. For example, the value of the interrupt bit may be deemed to equal a binary one if the corresponding peripheral device is in the default state and to equal a binary zero if the corresponding peripheral device is in the interrupting state. Given this binary representation, the voltage at the signal end of the R-2R resistor ladder equals a binary-weighted sum of the interrupt bits from the peripheral devices.
For example, the peripheral devices may be deemed to be arranged from a zeroth peripheral device to an (n-1)th peripheral device. The corresponding interrupt bits from the peripheral devices may thus be deemed to range from a bit D0, to a bit D1, a bit D2, and so on up to a final bit Dn-1. The host integrated circuit may include a summing amplifier coupled to its signal terminal to sum all the corresponding digital contributions from the peripheral devices to the voltage received at the signal end of the R-2R ladder. The summing amplifier has an output and a negative input coupled through a feedback resistor Rf having a resistance of Rf. Note that the output impedance of the R-2R resistor ladder is always R regardless of how many peripheral devices are in the interrupting state or in the default state. It may thus be shown that the summed analog voltage Vout from the summing amplifier may be represented by the following Equation 1: Vout = (Rf/R)*(D0/2 + D1/4 + ... + Dn-1/2^n) Eq. 1 [0018] As can be derived from Equation 1, the digital voltage contribution from the ith peripheral device is proportional to the ratio Di/2^(i+1). This ith interrupt bit is a binary 0 if the corresponding peripheral device is in the interrupting state and is a binary 1 if the corresponding peripheral device is in the default (non-interrupting) state. The host integrated circuit may also include an analog-to-digital converter (ADC) that digitizes the analog voltage to recover the interrupt bits D0 through Dn-1. The identity of the interrupting peripheral devices is thus immediately given through the binary value of the corresponding interrupt bit from the analog-to-digital converter.
This is quite advantageous as the host integrated circuit requires only two pins or terminals for coupling to the R-2R resistor ladder, yet there is relatively little latency and power consumption through the resulting summing and digitization in the host integrated circuit. The host integrated circuit may then proceed to generate an interrupt to the processor and also provide the identity of the corresponding interrupting peripheral device or devices to the processor. In some embodiments, a single interrupt command may be used that is n-bits wide to provide both the interrupt and the identity of the interrupting peripheral devices to the processor. Alternatively, the host integrated circuit may separately interrupt the processor and provide the identity of the interrupting peripheral devices. Turning now to the drawings, an example integrated interrupt aggregation system will be discussed followed by a discussion of an example distributed interrupt aggregation system. Integrated Interrupt Aggregation System [0019] An example integrated interrupt aggregation system 100 is shown in Figure 1A. As discussed previously, an integrated interrupt processing system is one in which a host integrated circuit 101 also includes a processor 160 for which the interrupts are being aggregated. An R-2R resistor ladder 102 has a signal end 103 that couples to a signal terminal A on host integrated circuit 101. Similarly, R-2R resistor ladder 102 has a ground end 104 that couples to a ground terminal B on host integrated circuit 101. As known in the R-2R resistor ladder arts, R-2R ladder 102 has a plurality of resistors R and a plurality of resistors 2R. Each resistor R has a resistance of R ohms whereas the resistance for the 2R resistors is 2R ohms.
For example, if each resistor R has a resistance of 10 kΩ, then each resistor 2R has a resistance of 20 kΩ. [0020] In the illustrated example, a zeroth peripheral device 165 couples to a zeroth rung 166 of R-2R resistor ladder 102. Similarly, a first peripheral device 170 couples to a first rung 171 of R-2R resistor ladder 102. A second peripheral device 175 couples to a second rung 176 of R-2R resistor ladder 102. Finally, a third and final peripheral device 180 couples to a third and final rung 181 of resistor ladder 102. Each rung includes a 2R resistor that may be integrated onto a circuit board or within the corresponding peripheral device. As known in the R-2R resistor ladder arts, R-2R resistor ladder 102 includes a serial combination of resistors R from signal end 103 that couple to a final resistor 2R at ground end 104. Since system 100 includes a plurality n = 4 of peripheral devices, there are three serially-arranged resistors R coupled to signal end 103. In general, there are n-1 such resistors R for an embodiment having n peripheral devices. It will be appreciated that the actual number of rungs and corresponding peripheral devices for alternative embodiments may be more or less than the example of four used in system 100. [0021] As discussed previously, each peripheral device in system 100 has a binary state that depends upon whether the peripheral device is in the interrupting state or in the default state. It is arbitrary whether the default state is represented by a binary one (in which case the interrupting state is represented by a binary zero) or whether the default state is represented by a binary zero (in which case the interrupting state is represented by a binary one) so long as the same convention is used for each peripheral device.
It will thus be appreciated that a convention of using an interrupt bit equaling binary one to represent the default state for each peripheral device is used in system 100 without loss of generality. Each peripheral device may include a three-terminal switch 110 that is controlled by the corresponding interrupt bit. Each three-terminal switch 110 is configured to couple the corresponding rung of R-2R ladder 102 to either a node supplying a reference voltage (VRef) or to ground. In one embodiment, the reference voltage may equal a power supply voltage. Alternatively, the reference voltage may be derived, for example, from a band gap circuit. In the default state for the corresponding peripheral device, each three-terminal switch 110 couples the corresponding rung to the node supplying the reference voltage. In system 100, peripheral devices 165, 170, and 180 are all in the default state as represented by a binary value of one. [0022] As discussed earlier, the binary state for zeroth peripheral device 165 that identifies whether the zeroth peripheral device is in the default or interrupting state is represented by an interrupt bit D0, which controls the corresponding 3-terminal switch 110. In system 100, interrupt bit D0 thus equals a binary one. Similarly, the binary value for first peripheral device 170 is represented by an interrupt bit D1 that equals a binary one as well since first peripheral device 170 is in the default state. Moreover, the binary value for third peripheral device 180 is represented by an interrupt bit D3 that equals a binary one since third peripheral device 180 is also in the default (non-interrupting) state. [0023] With regard to peripheral device 175, an appropriate event or triggering condition has occurred to cause peripheral device 175 to transition to the interrupting state so as to trigger an interrupt to processor 160.
For example, peripheral device 175 may include a WiFi device that has received a message to which processor 160 must respond. Alternatively, peripheral device 175 may include a sensor that has sensed an alert condition to which processor 160 must respond as first triggered through an interrupt. Regardless of the specific triggering condition, peripheral device 175 is in the interrupting state such that its corresponding interrupt bit D2 equals a binary zero. Interrupt bit D2 thus causes 3-terminal switch 110 in peripheral device 175 to ground corresponding rung 176.

[0024] To identify which peripheral devices are in the interrupting state, host integrated circuit 101 includes an interrupt aggregator 105 that processes the voltages for signal terminal A and for ground terminal B. Note that prior to any triggering condition such as the one discussed with regard to peripheral device 175, all the peripheral devices were in the default state such that each peripheral device charged its corresponding rung to the reference voltage. If both signal terminal A and ground terminal B have a high-input impedance at that time, both these terminals are then charged to the reference voltage. Interrupt aggregator 105 includes a summing amplifier 135 having a negative input coupled to terminal A. As known in the summing amplifier arts, summing amplifier 135 may comprise an operational amplifier or other suitable amplifier having a relatively high-input impedance such that terminal A is readily charged to the reference voltage when all the peripheral devices are in the default state.

[0025] With regard to ground terminal B, interrupt aggregator 105 may include a three-terminal switch 185 that may either couple ground terminal B to ground or to an input (for example, the negative input) of a differential amplifier 115.
Interrupt aggregator 105 includes a controller 125 configured to control the switching state of three-terminal switch 185 through a control signal 184. During a default state for controller 125, control signal 184 commands three-terminal switch 185 to couple ground terminal B to the negative input of differential amplifier 115. Differential amplifier 115 may comprise an operational amplifier or other suitable amplifier that provides a relatively high-input impedance to ground terminal B when all the peripheral devices are in their default state. A reference voltage source 120 provides the reference voltage to a positive input of differential amplifier 115 such that an interrupt detection output signal 116 from differential amplifier 115 is asserted low to ground while all the peripheral devices are in their default state.

[0026] In response to a triggering condition, the corresponding peripheral device may ground its rung of R-2R ladder 102 so as to cause the voltage of ground terminal B to drop below the reference voltage. As discussed earlier, peripheral device 175 has responded to a triggering condition and has thus grounded its rung 176. The interrupt detection output signal 116 will then be asserted high, such as to a power supply voltage. Controller 125 is configured to respond to this assertion by driving control signal 184 such that three-terminal switch 185 grounds ground terminal B. Interrupt aggregator 105 may then proceed to aggregate the intended interrupts from any peripheral devices in the interrupting state, such as from peripheral device 175.

[0027] To perform this aggregation, summing amplifier 135 sums the digital contributions from each peripheral device analogously as discussed with regard to Equation 1. A resistor 131 provides the feedback resistance Rf discussed with regard to Equation 1.
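The controller's switch handling described above can be modeled as a small decision step. This is a hedged sketch under assumed names and states, not the patent's circuit: while idle, switch 185 routes ground terminal B to the differential amplifier; once detection signal 116 asserts, the controller grounds terminal B and enables the ADC.

```python
# Hedged sketch of controller 125's response to the interrupt detection
# signal (names and state labels are illustrative assumptions): in the
# monitoring state, switch 185 feeds ground terminal B to the differential
# amplifier; on detection, the controller grounds terminal B and enables ADC 130.

def control_step(detect_asserted, switch_state):
    """Return (new_switch_state, adc_enable) for one controller decision."""
    if detect_asserted and switch_state == "to_amplifier":
        return "to_ground", True    # ground terminal B, enable ADC 130
    return switch_state, False      # otherwise remain as-is, ADC disabled
```

After the processor has serviced the interrupt, the controller would move the switch back to the "to_amplifier" state, restoring the monitoring configuration.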
Controller 125 responds to the assertion of interrupt detection signal 116 by enabling an analog-to-digital converter (ADC) 130 through an ADC enable signal 190. ADC 130 is configured to digitize the summed voltage from summing amplifier 135 into the interrupt bits D0, D1, D2, and D3 that identify whether the corresponding peripheral devices are in the default or interrupting state. In system 100, all these interrupt bits are a binary one except for interrupt bit D2, which is a binary zero. Controller 125 may include an interrupt register 145 for storing the interrupt bits. Controller 125 may further include a logic gate such as an AND gate 150 for processing the interrupt bits to determine whether any interrupt bit equals a binary zero. When all the peripheral devices are in the default state, an output signal 151 of AND gate 150 equals a binary one. However, since peripheral device 175 is in the interrupting state, output signal 151 is a binary zero. In one embodiment, AND gate 150 forms a means for processing the interrupt bits to identify whether at least one of the peripheral devices is in the interrupting state. An interrupt control interface 155 responds to output signal 151 equaling a binary zero by triggering an interrupt of processor 160 over an internal bus 195. Controller 125 may also drive the contents of interrupt register 145 over internal bus 195 to processor 160 so that processor 160 may be apprised of the identity of the peripheral device (or devices) that have triggered the interrupt. After processor 160 has responded to the interrupt, controller 125 then drives three-terminal switch 185 back into its default state in which ground terminal B is coupled to an input of differential amplifier 115.

[0028] The required resolution of ADC 130 is a function of how many interrupt bits it must digitize. In system 100 there are just four interrupt bits such that the required resolution is relatively relaxed.
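The digitization and AND-gate processing just described can be illustrated with a short sketch. Names here are assumptions, not from the patent: the ADC code is split back into per-peripheral interrupt bits, which are then ANDed together; a zero result flags a pending interrupt, mirroring output signal 151.

```python
# Illustrative sketch (assumed names) of the digitization and AND-gate
# processing: split the ADC code back into per-peripheral interrupt bits,
# then AND them together; a zero output flags a pending interrupt.

def decode_interrupts(code, n):
    """Recover interrupt bits [D0 ... Dn-1] from an n-bit ADC code."""
    return [(code >> i) & 1 for i in range(n)]

def and_gate(bits):
    """Mirror AND gate 150: output 1 only if every peripheral is in default."""
    out = 1
    for b in bits:
        out &= b
    return out

bits = decode_interrupts(11, 4)          # 11 = binary 1011 -> [1, 1, 0, 1]
interrupt_pending = (and_gate(bits) == 0)  # True: device 2 is interrupting
```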
As the number of peripheral devices is increased, the resolution demands on ADC 130 may be eased by implementing feedback resistor 131 as a variable feedback resistor controlled by a multiplier gain control signal 132 from controller 125. Referring again to Equation 1, note that the feedback resistance is a multiplier of the summed voltage. The interrupt bits range from the least significant zeroth bit D0 to the most significant bit Dn-1. The more significant interrupt bits are subject to progressively higher division in Equation 1 such that the feedback resistance may be increased so as to better distinguish these bits should ADC 130 have insufficient resolution. In this fashion, costs may be lowered by using a relatively low-resolution ADC 130 yet all the interrupt bits may still be distinguished.

[0029] As the number of peripheral devices is increased for alternative embodiments, the resolution of ADC 130 must increase accordingly. For example, ADC 130 would need five bits of resolution for an embodiment having thirty-two peripheral devices. But increasing the analog-to-digital conversion resolution increases manufacturing costs. To keep the resolution low, multiple R-2R resistor ladders may be used, such as shown in Figure 1B for a system 106. A host integrated circuit 107 includes an interrupt aggregator 105 that functions as discussed with regard to system 100 of Figure 1A. In contrast to system 100, interrupt aggregator 105 in system 106 couples to a first R-2R ladder 196 and a second R-2R ladder 197 through a pair of multiplexers 186 and 187. The peripheral devices and resistors for first and second R-2R ladders 196 and 197 are not shown in Figure 1B for illustration clarity. Multiplexers 186 and 187 are controlled to select for the same R-2R ladder. The splitting of the peripheral devices across multiple R-2R ladders relaxes the resolution requirement for ADC 130.
For example, suppose that first and second R-2R ladders 196 and 197 each include eight peripheral devices. Although there would then be sixteen peripheral devices in total, ADC 130 may still have only 3 bits of resolution since interrupt aggregator 105 would monitor only one of R-2R ladders 196 and 197 at any given time. Interrupt aggregator 105 may thus periodically drive multiplexers 186 and 187 so that R-2R ladders 196 and 197 may be analyzed serially. In this fashion, interrupt aggregator 105 may advantageously increase the number of monitored peripheral devices without requiring a corresponding increase in the resolution of ADC 130.

[0030] An example distributed interrupt aggregation system will now be discussed.

Example Distributed Interrupt Aggregation System

[0031] An example distributed interrupt aggregation system 200 is shown in Figure 2 in which host integrated circuit 205 does not include processor 160 for which the interrupts are being aggregated. For example, host integrated circuit 205 may include a power management integrated circuit (PMIC) that includes an interrupt aggregator 105 as discussed with regard to Figures 1A and 1B for coupling to signal terminal A and ground terminal B. The corresponding R-2R resistor ladder and peripheral devices are not shown in Figure 2 for illustration clarity. Host integrated circuit 205 includes a processor or a finite state machine 215 for controlling a serial interface 210 and interrupt aggregator 105. Serial interface 210 couples to a serial interface 220 in an SoC 225 including processor 160 and interrupt control interface 155. Serial interfaces 210 and 220 may be any suitable interface such as a serial peripheral interface (SPI) or a universal asynchronous receiver/transmitter (UART) interface. In this fashion, interfaces 220 and 210 may accommodate other signaling between host integrated circuit 205 and SoC 225 besides the interrupt aggregation signaling.
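The time-multiplexed monitoring of Figure 1B can be sketched as a serial scan. This is an illustrative model under assumed names: the aggregator selects one R-2R ladder at a time, so each conversion only covers that ladder's peripherals, and interrupting devices are reported as (ladder, bit) pairs.

```python
# Hedged sketch of the time-multiplexed scheme of Figure 1B (function and
# variable names are assumptions): the aggregator selects one R-2R ladder at
# a time, so each ADC conversion covers only that ladder's peripherals.

def scan_ladders(ladder_codes, bits_per_ladder):
    """Serially decode each ladder; return (ladder, bit) pairs in the
    interrupting state (bit value 0)."""
    interrupting = []
    for ladder_idx, code in enumerate(ladder_codes):
        for bit_idx in range(bits_per_ladder):
            if (code >> bit_idx) & 1 == 0:   # 0 = interrupting state
                interrupting.append((ladder_idx, bit_idx))
    return interrupting

# Two ladders of eight devices each: only device 2 on ladder 0 interrupts.
result = scan_ladders([0b11111011, 0b11111111], 8)
```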
Should a peripheral device transition to the interrupting state, serial interface 210 transmits a serial frame or frames to serial interface 220 that triggers an interrupt to processor 160 through interrupt control interface 155 and that identifies the interrupting peripheral devices. Interrupt control interface 155 may then proceed to interrupt processor 160 accordingly. An example method of interrupt aggregation will now be discussed.

Interrupt Aggregation Method

[0032] A flowchart for an interrupt aggregation method such as performed by interrupt aggregator 105 is shown in Figure 3. The method includes an act 300, performed at a host integrated circuit, of receiving a voltage signal at a signal terminal coupled to a plurality of peripheral devices through an R-2R resistor ladder, wherein the voltage signal has a binary-weighted digital value responsive to whether each peripheral device is in an interrupting state or in a default state. The receipt of the signal end voltage from R-2R ladder 102 at the signal terminal A in Figure 1A is an example of act 300. The method also includes an act 305 of converting the received voltage signal into an analog voltage signal proportional to the digital value. The summing in summing amplifier 135 of Figure 1A is an example of act 305. Furthermore, the method includes an act 310 of digitizing the analog voltage signal into a plurality of interrupt bits corresponding to the plurality of peripheral devices. The digitization of the summing amplifier output voltage in ADC 130 of Figure 1A is an example of act 310.
Finally, the method includes an act 315 of processing the interrupt bits to identify whether at least one of the peripheral devices is in the interrupting state. The processing of the interrupt bits through AND gate 150 of Figure 1A is an example of act 315.

[0033] As those of skill in this art will by now appreciate, and depending on the particular application at hand, many modifications, substitutions and variations can be made in and to the materials, apparatus, configurations and methods of use of the devices of the present disclosure without departing from the scope thereof. In light of this, the scope of the present disclosure should not be limited to that of the particular embodiments illustrated and described herein, as they are merely by way of some examples thereof, but rather should be fully commensurate with that of the claims appended hereafter and their functional equivalents.
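Acts 300 through 315 of the flowchart can be strung together in a small end-to-end model. This is an illustrative simulation under assumed names, with an ideal summing stage and an ideal ADC, not the patent's circuit: peripheral states produce a binary-weighted analog sum, which is digitized back into interrupt bits and then checked for a pending interrupt.

```python
# Illustrative end-to-end model of acts 300-315 (names and the ideal
# summing/ADC behavior are assumptions): peripheral states -> binary-weighted
# analog sum -> digitized interrupt bits -> pending-interrupt check.

def aggregate_interrupts(peripheral_states, v_ref=1.0):
    """Return (interrupt_bits, interrupt_pending) for the given states."""
    n = len(peripheral_states)
    # Acts 300/305: each default-state rung contributes a binary-weighted term.
    bits = [0 if s == "interrupting" else 1 for s in peripheral_states]
    analog = sum(b * v_ref / (2 ** (n - i)) for i, b in enumerate(bits))
    # Act 310: an ideal n-bit ADC recovers the code exactly.
    code = round(analog / v_ref * (2 ** n))
    # Act 315: an interrupt is pending iff any recovered bit is zero.
    recovered = [(code >> i) & 1 for i in range(n)]
    return recovered, any(b == 0 for b in recovered)
```

Running this for the system 100 example (devices D0, D1, D3 in the default state, D2 interrupting) recovers the bit pattern [1, 1, 0, 1] and reports a pending interrupt.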
Field-Effect Transistor (FET) devices employing an adjacent asymmetric active gate / dummy gate width layout are disclosed. In an exemplary aspect, a FET cell is provided that includes a FET device having an active gate, a source region, and a drain region. The FET cell also includes an isolation structure comprising a dummy gate over a diffusion break located adjacent to one of the source region and the drain region. The FET cell has an asymmetric active gate / dummy gate width layout in that a width of the active gate is larger than a width of the adjacent dummy gate. The increased width of the active gate provides increased gate control and the decreased width of the dummy gate increases isolation from the dummy gate, thus reducing sub-threshold leakage through the dummy gate.
What is claimed is:

1. A Field-Effect Transistor (FET) cell having an asymmetric gate width layout, comprising:
a substrate comprising a body having a top surface;
a FET device, comprising:
a source disposed in the substrate;
a drain disposed in the substrate; and
an active gate of an active gate width formed between the source and the drain; and
an isolation structure disposed in the substrate adjacent to the FET device, the isolation structure comprising:
a diffusion break disposed in the substrate adjacent to one of the source and the drain of the FET device, wherein a depth of the one of the source and the drain that is adjacent to the diffusion break is greater than a depth of the one of the source and the drain that is not adjacent to the diffusion break; and
a dummy gate of a dummy gate width formed above the diffusion break adjacent to the active gate, the dummy gate width being smaller than the active gate width by a gate width margin.

2. The FET cell of claim 1, further comprising:
a source contact disposed above the source adjacent to the active gate; and
a drain contact disposed above the drain adjacent to the active gate,
wherein one of the source contact and the drain contact that corresponds to the one of the source and the drain that is adjacent to the diffusion break is disposed between the active gate and the dummy gate, and isolated from the active gate by a first distance and isolated from the dummy gate by a second distance that is different than the first distance by an isolation margin,
wherein the isolation margin is approximately half the gate width margin.

3. The FET cell of claim 2,
wherein the active gate width is approximately fifteen (15) nanometers (nm);
wherein the dummy gate width is approximately thirteen (13) nm; and
wherein the isolation margin is approximately one (1) nm.

4.
The FET cell of claim 2,
wherein the active gate width is approximately eighteen (18) nanometers (nm);
wherein the dummy gate width is approximately fourteen (14) nm; and
wherein the isolation margin is approximately two (2) nm.

5. The FET cell of claim 1, wherein the gate width margin is at least two (2) nanometers (nm).

6. The FET cell of claim 5,
wherein the active gate width is approximately fifteen (15) nm; and
wherein the dummy gate width is approximately thirteen (13) nm.

7. The FET cell of claim 5,
wherein the active gate width is approximately seventeen (17) nm; and
wherein the dummy gate width is approximately fourteen (14) nm.

8. The FET cell of claim 1,
wherein the gate width margin is at least four (4) nanometers (nm); and
wherein the active gate width is approximately eighteen (18) nm.

9. The FET cell of claim 1 integrated into an integrated circuit (IC).

10. The FET cell of claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a mobile phone; a cellular phone; a smart phone; a tablet; a phablet; a server; a computer; a portable computer; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; and an automobile.

11.
A method of fabricating a Field-Effect Transistor (FET) cell in a semiconductor die, comprising:
forming a diffusion break disposed in a substrate;
forming an active gate of an active gate width on the substrate;
forming a dummy gate of a dummy gate width above the diffusion break and adjacent to the active gate, the dummy gate width being smaller than the active gate width by a gate width margin;
forming a source epitaxial region of a FET device in the substrate, adjacent to the active gate;
forming a source in the source epitaxial region at a first depth from a top surface of the substrate;
forming a drain epitaxial region of the FET device in the substrate, adjacent to the diffusion break, between the active gate and the dummy gate, a portion of the drain epitaxial region in contact with the diffusion break;
forming a drain in the drain epitaxial region at a second depth from the top surface of the substrate that is greater than the first depth; and
forming a channel region of the FET device in the substrate between the source and the drain.

12. The method of claim 11, wherein forming the dummy gate comprises forming the dummy gate comprising the dummy gate width that is smaller than the active gate width by the gate width margin of at least two (2) nanometers (nm).

13. The method of claim 12,
wherein forming the active gate comprises forming the active gate comprising the active gate width that is approximately fifteen (15) nm; and
wherein forming the dummy gate comprises forming the dummy gate comprising the dummy gate width that is approximately thirteen (13) nm.

14. The method of claim 12,
wherein forming the active gate comprises forming the active gate comprising the active gate width that is approximately seventeen (17) nm; and
wherein forming the dummy gate comprises forming the dummy gate comprising the dummy gate width that is approximately fourteen (14) nm.

15.
The method of claim 11,
wherein forming the dummy gate comprises forming the dummy gate comprising the dummy gate width that is smaller than the active gate width by the gate width margin of at least four (4) nanometers (nm); and
wherein forming the active gate comprises forming the active gate comprising the active gate width that is approximately eighteen (18) nm.

16. The method of claim 11, further comprising:
disposing a source contact on the source epitaxial region adjacent to the active gate; and
disposing a drain contact on the drain epitaxial region between the active gate and the dummy gate, the drain contact isolated from the active gate by a first distance and isolated from the dummy gate by a second distance that is greater than the first distance by an isolation margin,
wherein the isolation margin is approximately half the gate width margin.

17. The method of claim 16,
wherein forming the active gate comprises forming the active gate comprising the active gate width that is approximately fifteen (15) nanometers (nm); and
wherein forming the dummy gate comprises forming the dummy gate comprising the dummy gate width that is approximately thirteen (13) nm to provide the isolation margin that is approximately one (1) nm.
18. The method of claim 16,
wherein forming the active gate comprises forming the active gate comprising the active gate width that is approximately eighteen (18) nm; and
wherein forming the dummy gate comprises forming the dummy gate comprising the dummy gate width that is approximately fourteen (14) nm to provide the isolation margin that is approximately two (2) nm.

19. The method of claim 11, wherein:
forming the source in the source epitaxial region comprises implanting the source in the source epitaxial region at the first depth from the top surface of the substrate; and
forming the drain in the drain epitaxial region comprises implanting the drain in the drain epitaxial region at the second depth from the top surface of the substrate that is greater than the first depth.
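The numeric margin relationships recited in the claims can be checked with simple arithmetic. The following is an informal sketch, not part of the claims, with function and variable names chosen for illustration: the dummy gate is narrower than the active gate by a gate width margin, and the isolation margin between the contact and the dummy gate is approximately half that gate width margin.

```python
# Informal arithmetic sketch of the claimed margin relationship (not part of
# the claims; names are illustrative): the dummy gate is narrower than the
# active gate by a gate width margin, and the contact isolation margin is
# approximately half that gate width margin.

def margins(active_gate_width_nm, dummy_gate_width_nm):
    """Return (gate_width_margin, isolation_margin) in nanometers."""
    gate_width_margin = active_gate_width_nm - dummy_gate_width_nm
    isolation_margin = gate_width_margin / 2
    return gate_width_margin, isolation_margin

# Claim 3: 15 nm active / 13 nm dummy -> 2 nm margin, ~1 nm isolation margin.
# Claim 4: 18 nm active / 14 nm dummy -> 4 nm margin, ~2 nm isolation margin.
```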
FIELD-EFFECT TRANSISTOR (FET) DEVICES EMPLOYING ADJACENT ASYMMETRIC ACTIVE GATE / DUMMY GATE WIDTH LAYOUT

PRIORITY APPLICATION

[0001] The present application claims priority to U.S. Patent Application Serial No. 15/245,777 filed on August 24, 2016 and entitled "FIELD-EFFECT TRANSISTOR (FET) DEVICES EMPLOYING ADJACENT ASYMMETRIC ACTIVE GATE / DUMMY GATE WIDTH LAYOUT," which is incorporated herein by reference in its entirety.

BACKGROUND

I. Field of the Disclosure

[0002] The technology of the disclosure relates generally to Field-Effect Transistors (FETs), and more specifically to the layout of gate structures in FETs.

II. Background

[0003] Transistors are essential components in modern electronic devices. Large quantities of transistors are employed in integrated circuits (ICs) in many modern electronic devices. For example, components of modern electronic devices, such as central processing units (CPUs) and memory units, employ a large quantity of transistors for logic circuits and data storage.

[0004] In the course of IC evolution, functional density (i.e., the number of interconnected devices per chip area) has increased. This increase in functional density is achieved in part through continued efforts to scale down transistor cells in ICs (e.g., reducing the size of transistor nodes in order to place increasingly more transistor nodes into the same amount of space). Transistor cells can be scaled down by a reduction in gate width and/or channel length of transistor nodes therein, for example. Transistor cells can also be scaled down by reducing the size of an isolation structure isolating a transistor node therein from adjacent transistor cells. For example, a transistor cell that includes an isolation structure comprising a double diffusion break (DDB) can be scaled down by instead implementing a single diffusion break (SDB).

[0005] For example, Figure 1 is a cross-section of a conventional Fin Field-Effect Transistor (FET) (FinFET) cell 100.
The FinFET cell 100 includes a FinFET 102 that includes an active gate 104 of a width W1 (e.g., fourteen (14) or sixteen (16) nanometers (nm)). The FinFET 102 further includes source and drain epitaxial regions 108 and 110 grown on a substrate 112. The source and drain epitaxial regions 108 and 110 are located in respective source and drain columns 114 and 116. The source and drain epitaxial regions 108 and 110 may comprise an epitaxial growth of Silicon Germanium (SiGe) or Germanium (Ge), for example. The source and drain epitaxial regions 108 and 110 include source and drain implants 118 and 120, respectively, for providing a corresponding source or drain to each of the source and drain epitaxial regions 108 and 110. The source and drain implants 118 and 120 may be formed by ion implantation, for example. The FinFET 102 further includes source and drain contacts 122 and 124 for providing access to the source and drain epitaxial regions 108 and 110, respectively, and thus, for providing access to an active channel region 126 between the source and drain epitaxial regions 108 and 110 under the active gate 104. The drain contact 124 is isolated from the active gate 104 by a distance D1 and from a dummy gate 134 by a distance D2. The dummy gate 134 has a width indicated as W4 in Figure 1. In the FinFET 102, the distances D1 and D2 are substantially similar. It is noted that for purposes of clarity, the epitaxial region 108 has been defined as a source epitaxial region 108, the implant 118 of the epitaxial region 108 has been defined as a source implant 118, the epitaxial region 110 has been defined as a drain epitaxial region 110, and the implant 120 of the epitaxial region 110 has been defined as a drain implant 120.
However, the source/drain designations of these elements are an example and can be either designated as being for a source or a drain based on how the FinFET cell 100 is connected in the circuit, since the active channel region 126 has no intrinsic polarity.

[0006] The FinFET cell 100 further includes an SDB isolation structure 129 to provide isolation between the FinFET 102 and, for example, an adjacent FinFET cell (not shown). The SDB isolation structure 129 comprises an SDB 130 of a width W2. The SDB 130 may include a shallow trench isolation oxide, for example. The SDB isolation structure 129 further includes the dummy gate 134.

[0007] Under the configuration of the FinFET cell 100 described above, the FinFET cell 100 has a width W3 (i.e., the space occupied by a single FinFET cell in an array of cells) that depends, for example, on the width W1 of the active gate 104, a distance D3 between the active gate 104 and the dummy gate 134, and the width W2 of the SDB 130. Thus, the FinFET cell 100 can be scaled down, for example, by reducing one or more of the width W1 of the active gate 104, the distance D3 between the active gate 104 and the dummy gate 134, or the width W2 of the SDB 130. However, scaling down the FinFET cell 100 in this manner may be limited by fabrication and performance considerations. For example, due to fabrication limitations and/or isolation requirements, reducing the distance D3 may place the drain epitaxial region 110 closer to the SDB 130. Thus, during fabrication, the epitaxial growth of the drain epitaxial region 110 may be uneven across a top surface 142 of the drain epitaxial region 110 due to a facet mismatch between a facet 140 of the drain epitaxial region 110 and a facet 144 of the SDB 130. In particular, the facet 140 of the drain epitaxial region 110 may not match the facet 144 of the SDB 130, thus hindering growth of the drain epitaxial region 110 near the facet 144 of the SDB 130.
Accordingly, growth of the drain epitaxial region 110 near the facet 144 of the SDB 130 will be slower than the growth of the drain epitaxial region 110 away from the facet 144 of the SDB 130. This uneven growth is illustrated in Figure 1 by the uneven top surface 142 of the drain epitaxial region 110. This uneven growth of the drain epitaxial region 110 may result in reduced gate control and increased sub-threshold current in the FinFET 102. In particular, during later formation of the source implant 118 and the drain implant 120 in the source and drain epitaxial regions 108 and 110, respectively, the drain implant 120 may be disposed deeper in the drain epitaxial region 110 than desired, and deeper than the source implant 118 in the source epitaxial region 108 by a source/drain implant margin 146. This results in the active channel region 126 being lower in the substrate 112 than desired, and thus further from the active gate 104 than desired. Having the active channel region 126 further from the active gate 104 than desired can result in reduced gate control of the active channel region 126, and thus degraded performance of the FinFET 102. Furthermore, having the active channel region 126 further from the active gate 104 than desired can result in a lower voltage threshold than desired for the FinFET 102. This decreased voltage threshold increases sub-threshold current, as the active gate 104 may not be able to fully close the active channel region 126 during an "off" state of the FinFET cell 100, thus increasing power consumption and degrading performance.

[0008] Current leakage can also result based on the dummy gate 134 being located close to the drain epitaxial region 110 and the drain contact 124. As the pitch of the FinFET 102 is reduced, the distance between the dummy gate 134 and the drain epitaxial region 110 and the drain contact 124 may be reduced. For example, distance D2 may be reduced as pitch is reduced.
This close proximity between the drain contact 124 and the dummy gate 134 may result in a potential leakage current path 148 through the dummy gate 134, thus also increasing power consumption and degrading performance of the FinFET 102.

SUMMARY OF THE DISCLOSURE

[0009] Aspects disclosed in the detailed description include Field-Effect Transistor (FET) devices employing an adjacent asymmetric active gate / dummy gate width layout. In an exemplary aspect, a FET cell is provided that includes a FET device having an active gate configured to control a channel region between a source region and a drain region. The FET cell also includes an isolation structure disposed adjacent to the FET device. The isolation structure comprises a diffusion break located adjacent to one of the source region and the drain region of the FET device, and a dummy gate overlaying the diffusion break. The FET cell has an asymmetric active gate / dummy gate width layout in that a width of the dummy gate is smaller than a width of the active gate. The larger width of the active gate can provide increased gate control over the channel region, and therefore reduced sub-threshold leakage current.

[0010] As additional examples, providing an adjacent asymmetric active gate / dummy gate width layout may also mitigate the negative effects of a non-ideal growth of the source and/or drain regions that results in a deeper source or drain implant. Non-ideal growth of the source and/or drain regions lowers the channel region of the FET device, thus placing the channel region farther from the active gate. Furthermore, as another example, providing a smaller width of the dummy gate in a FET cell allows the FET cell to maintain cell pitch even though the active gate of the FET device has a larger width. Furthermore, as another example, providing a decreased-width dummy gate may allow formation of the source/drain regions, implants, and contacts according to current fabrication processes.
Furthermore, as an example, providing a decreased-width dummy gate increases a separation between the dummy gate and an adjacent source and/or drain region, thus increasing the distance and isolation between the FET device and the dummy gate, thereby decreasing leakage current through the dummy gate.

[0011] In this regard, in one aspect, a FET cell having an asymmetric gate width layout is provided. The FET cell comprises a substrate comprising a body having a top surface, and a FET device. The FET device comprises a source disposed in the substrate. The FET device further comprises a drain disposed in the substrate. The FET device further comprises an active gate of an active gate width formed between the source and the drain. The FET cell further comprises an isolation structure disposed in the substrate adjacent to the FET device. The isolation structure comprises a diffusion break disposed in the substrate adjacent to one of the source and the drain of the FET device, wherein a depth of the one of the source and the drain that is adjacent to the diffusion break is greater than a depth of the one of the source and the drain that is not adjacent to the diffusion break. The isolation structure further comprises a dummy gate of a dummy gate width formed above the diffusion break adjacent to the active gate. The dummy gate width is smaller than the active gate width by a gate width margin.

[0012] In another aspect, a method of fabricating a FET cell in a semiconductor die is provided. The method comprises forming a diffusion break disposed in the substrate. The method further comprises forming an active gate of an active gate width on the substrate, and forming a dummy gate of a dummy gate width above the diffusion break and adjacent to the active gate, the dummy gate width being smaller than the active gate width by a gate width margin.
The method further comprises forming a source epitaxial region of a FET device in the substrate, adjacent to the active gate, and forming a source in the source epitaxial region at a first depth from a top surface of the substrate. The method further comprises forming a drain epitaxial region of the FET device in the substrate, adjacent to the diffusion break, between the active gate and the dummy gate, a portion of the drain epitaxial region in contact with the diffusion break, and forming a drain in the drain epitaxial region at a second depth from the top surface of the substrate that is greater than the first depth. The method further comprises forming a channel region of the FET device in the substrate between the source and the drain.

[0013] In another aspect, a FET cell having an asymmetric gate width layout is provided. The FET cell comprises a means for providing a substrate comprising a body having a top surface. The FET cell further comprises a means for providing a FET device, comprising a means for providing a source disposed in the means for providing the substrate at a first depth from a top surface of the means for providing the substrate, and a means for providing a drain disposed in the means for providing the substrate at a second depth from the top surface of the means for providing the substrate. The means for providing the FET device further comprises a means for providing an active gate of an active gate width formed between the means for providing the source and the means for providing the drain. The means for providing the active gate is configured to control conductivity in a channel region below the means for providing the active gate between the means for providing the source and the means for providing the drain. The FET cell further comprises a means for providing an isolation structure disposed in the means for providing the substrate adjacent to the means for providing the FET device.
The means for providing the isolation structure comprises a means for providing a diffusion break disposed in the means for providing the substrate adjacent to one of the means for providing the source and the means for providing the drain of the means for providing the FET device. A depth of the one of the means for providing the source and the means for providing the drain that is adjacent to the means for providing the diffusion break is greater than a depth of the one of the means for providing the source and the means for providing the drain that is not adjacent to the means for providing the diffusion break. The means for providing the isolation structure further comprises a means for providing a dummy gate of a dummy gate width formed above the means for providing the diffusion break adjacent to the means for providing the active gate, the dummy gate width being smaller than the active gate width by a gate width margin.
BRIEF DESCRIPTION OF THE FIGURES
[0014] Figure 1 illustrates a cross-section of a conventional Fin Field-Effect Transistor (FET) (FinFET) cell;
[0015] Figure 2 illustrates a cross-section of an exemplary FinFET cell that includes an exemplary FinFET employing an adjacent asymmetric active gate / dummy gate width layout, which can promote increased gate control for reducing leakage current;
[0016] Figure 3 is a flowchart illustrating an exemplary process for fabricating the exemplary FinFET cell of Figure 2;
[0017] Figure 4A is a cross-sectional diagram of an exemplary fabrication stage of forming a diffusion break in a substrate for fabricating the exemplary FinFET cell illustrated in Figure 2;
[0018] Figure 4B is a cross-sectional diagram of an exemplary fabrication stage of forming an active gate of an active gate width on a substrate and forming a dummy gate of a dummy gate width above a diffusion break and adjacent to the active gate, the dummy gate width being smaller than the active gate width by a gate width margin to form an asymmetric
gate width layout for fabricating the exemplary FinFET cell illustrated in Figure 2;
[0019] Figure 4C is a cross-sectional diagram of an exemplary fabrication stage of etching recesses on a substrate for depositing a source epitaxial region and a drain epitaxial region for fabricating the exemplary FinFET cell illustrated in Figure 2;
[0020] Figure 4D is a cross-sectional diagram of an exemplary fabrication stage of depositing a source epitaxial region and a drain epitaxial region on corresponding recesses for fabricating the exemplary FinFET cell illustrated in Figure 2;
[0021] Figure 4E is a cross-sectional diagram of an exemplary fabrication stage of forming a source and a drain in a source epitaxial region and a drain epitaxial region, respectively, for fabricating the exemplary FinFET cell illustrated in Figure 2;
[0022] Figure 4F is a cross-sectional diagram of an exemplary fabrication stage of disposing a source contact on a source epitaxial region adjacent to an active gate, and disposing a drain contact on a drain epitaxial region between the active gate and a dummy gate for fabricating the exemplary FinFET cell illustrated in Figure 2;
[0023] Figure 5 is a block diagram of an exemplary processor-based system that can include the exemplary FinFET cell illustrated in Figure 2; and
[0024] Figure 6 is a block diagram of an exemplary wireless communications device that includes radio-frequency (RF) components which include FinFET cells that include an exemplary FinFET employing an adjacent asymmetric active gate / dummy gate width layout according to the exemplary aspects disclosed herein.
DETAILED DESCRIPTION
[0025] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration."
Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.
[0026] Aspects disclosed in the detailed description include Field-Effect Transistor (FET) devices employing an adjacent asymmetric active gate / dummy gate width layout. In an exemplary aspect, a FET cell is provided that includes a FET device having an active gate configured to control a channel region between a source region and a drain region. The FET cell also includes an isolation structure disposed adjacent to the FET device. The isolation structure comprises a diffusion break located adjacent to one of the source region and the drain region of the FET device, and a dummy gate overlaying the diffusion break. The FET cell has an asymmetric active gate / dummy gate width layout in that a width of the dummy gate is smaller than a width of the active gate. The larger width of the active gate can provide increased gate control over the channel region, and therefore reduced sub-threshold leakage current.
[0027] As additional examples, providing an adjacent asymmetric active gate / dummy gate width layout may also mitigate the negative effects of a non-ideal growth of the source and/or drain regions that results in a deeper source or drain implant. Non-ideal growth of the source and/or drain regions lowers the channel region of the FET device, thus placing the channel region farther from the active gate. Furthermore, as another example, providing a smaller width of the dummy gate in a FET cell allows the FET cell to maintain cell pitch even though the active gate of the FET device has a larger width. Furthermore, as another example, providing a decreased width dummy gate may allow formation of the source/drain regions, implants, and contacts according to current fabrication processes.
Furthermore, as an example, providing a decreased width dummy gate increases a separation between the dummy gate and an adjacent source and/or drain region, thus increasing the distance and isolation between the FET device and the dummy gate, thereby decreasing leakage current through the dummy gate.
[0028] In this regard, Figure 2 illustrates a cross-section of an exemplary FinFET cell 200 that includes an exemplary FinFET 202 employing an adjacent asymmetric active gate / dummy gate width layout. As shown in Figure 2, the FinFET cell 200 includes a substrate 204 comprising a body 206 having a top surface 208. The FinFET cell 200 comprises an isolation structure 238 disposed in the substrate 204 adjacent to the FinFET 202. The isolation structure 238 is disposed in the FinFET cell 200 to isolate the FinFET 202 from an adjacent cell (not shown), such as an adjacent FinFET cell for example. The isolation structure 238 comprises a single diffusion break (SDB) 228, and is disposed in the substrate 204 adjacent to a drain 218 of the FinFET 202. The SDB 228 has a width W5 and may comprise a shallow trench isolation (STI) oxide 240, for example. The isolation structure 238 further comprises a dummy gate 242 of a dummy gate width W6 formed above the SDB 228, adjacent to the active gate 232.
[0029] The FinFET 202 of the FinFET cell 200 comprises a source 210 disposed in the substrate 204 at a depth DP1 from the top surface 208 of the substrate 204. The source 210 of the FinFET 202 is formed in a source epitaxial region 214 in the substrate 204 by ion implantation. As one example, the source epitaxial region 214 may include an epitaxial growth of Silicon Germanium (SiGe) or Germanium (Ge) in the substrate 204.
The source epitaxial region 214 of the FinFET cell 200 can have an even top surface 216 that is flush with the top surface 208 of the substrate 204.
[0030] With continuing reference to Figure 2, the FinFET 202 further comprises a drain 218 disposed in the substrate 204 at a depth DP2 from the top surface 208 of the substrate 204, the depth DP2 being greater than the depth DP1. The drain 218 is formed in a drain epitaxial region 222 by ion implantation. As one example, the drain epitaxial region 222 comprises an epitaxial growth of Silicon Germanium (SiGe) or Germanium (Ge), for example, in the substrate 204. The depth DP2 of the drain 218 is greater than the depth DP1 of the source 210 because these depths DP1, DP2 are a function of a height and shape of the top surface 216 and a top surface 230 of the source epitaxial region 214 and the drain epitaxial region 222, respectively. As will be described in further detail below, the top surface 230 of the drain epitaxial region 222 is uneven and lower than the top surface 216 of the source epitaxial region 214. Accordingly, the drain 218 is formed lower, relative to the top surface 208 of the substrate 204, than the source 210. It is noted that, for purposes of clarity, the epitaxial region 214 and its source 210 have been designated as the source epitaxial region 214 and the source 210, and the epitaxial region 222 and its drain 218 have been designated as the drain epitaxial region 222 and the drain 218. However, these source/drain designations are exemplary; each element can serve as a source or a drain depending on how the FinFET 202 is connected in a circuit, since the channel region 236 has no intrinsic polarity.
[0031] As illustrated in the example FinFET cell 200 in Figure 2, the drain epitaxial region 222 was grown unevenly.
This uneven growth is due to a facet mismatch between a facet 224 of the drain epitaxial region 222 and a facet 226 of the SDB 228 disposed adjacent to the drain epitaxial region 222, thus hindering growth of the drain epitaxial region 222 near the facet 226 of the SDB 228. Accordingly, growth of the drain epitaxial region 222 near the facet 226 of the SDB 228 will be slower, and thus lower in the substrate 204, than the growth of the drain epitaxial region 222 away from the facet 226 of the SDB 228. Therefore, the drain epitaxial region 222 has an uneven top surface 230 that is lower near the SDB 228.
[0032] With continuing reference to Figure 2, the FinFET 202 also comprises an active gate 232 of an active gate width W7 formed between the source 210 and the drain 218. The FinFET 202 further comprises the channel region 236 below the active gate 232 between the source 210 and the drain 218. Thus, the active gate 232 is configured to control conductivity in the channel region 236 between the source 210 and the drain 218 based on a field (not shown) generated by the active gate 232 when a voltage is applied thereto.
[0033] The FinFET 202 further includes a source contact 248 disposed on the source epitaxial region 214, adjacent to the active gate 232, for providing access to the source 210. The FinFET 202 further includes a drain contact 250 disposed on the drain epitaxial region 222, between the active gate 232 and the dummy gate 242, for providing access to the drain 218. The drain contact 250 is isolated from the active gate 232 by a distance D4. The drain contact 250 is isolated from the dummy gate 242 by a distance D5.
[0034] In the FinFET cell 200, the uneven growth of the drain epitaxial region 222 may result in reduced gate control and increased sub-threshold current.
In particular, during formation of the source 210 and the drain 218 in the FinFET 202, through ion implantation for example, the drain 218 may be disposed deeper in the drain epitaxial region 222 than desired, and deeper than the source 210, by a source/drain implant margin 256. This results in the channel region 236 being lower in the substrate 204 than desired, and thus further from the active gate 232 than desired. Having the channel region 236 further from the active gate 232 than desired can result in reduced gate control of the channel region 236, and thus degraded performance of the FinFET 202.
[0035] In this regard, in the exemplary FinFET cell 200 in Figure 2, to mitigate or offset reduced gate control of the channel region 236 due to the channel region 236 being located lower in the substrate 204, the dummy gate 242 is formed in the FinFET cell 200 to have a smaller dummy gate width W6 than the active gate width W7 by a gate width margin, i.e., a difference between the active gate width W7 and the dummy gate width W6. As an example, this gate width margin can be at least two (2) nanometers (nm). For example, the active gate width W7 could be approximately fifteen (15) nm and the dummy gate width W6 could be approximately thirteen (13) nm, for a gate width margin that is approximately two (2) nm. In view of this exemplary aspect, the FinFET cell 200 has an asymmetric active gate / dummy gate layout, because the active gate width W7 of the active gate 232 is larger than the dummy gate width W6 of the adjacent dummy gate 242. By having an increased active gate width W7, the active gate 232 provides improved control over the channel region 236.
This improved gate control decreases sub-threshold leakage current in the FinFET 202, relative to the gate control provided by the active gate of a FET cell having a symmetrical active gate / dummy gate layout such as the FinFET cell 100 illustrated in Figure 1, and counters at least some of the increase in sub-threshold leakage current caused by the non-ideal growth of the drain epitaxial region 222.
[0036] However, increasing the active gate width W7 reduces a distance D6 between the active gate 232 and the dummy gate 242, which may hinder the epitaxial growth of the drain epitaxial region 222 and the implantation of the drain 218 into the drain epitaxial region 222. In particular, reducing the distance D6 may not provide the space necessary between the active gate 232 and the dummy gate 242 to dispose, etch, implant, or otherwise form materials in the substrate 204. In this regard, in an exemplary aspect, the dummy gate width W6 of the dummy gate 242 is formed smaller than the active gate width W7 by a gate width margin, i.e., a difference between the active gate width W7 and the dummy gate width W6. Having a decreased dummy gate width W6 allows formation of the drain epitaxial region 222 according to current fabrication processes, e.g., fabrication processes used to fabricate the FinFET cell 100 illustrated in Figure 1. Furthermore, decreasing the dummy gate width W6 increases the distance D5 between the drain contact 250 and the dummy gate 242, and a separation 260 between the dummy gate 242 and the adjacent drain contact 250, thus further isolating the FinFET 202 from the dummy gate 242, thereby decreasing a leakage current through the dummy gate 242.
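The qualitative link between gate control and sub-threshold leakage invoked above can be related to the standard first-order MOSFET sub-threshold model. The following sketch is illustrative only: it uses textbook device physics with arbitrary parameter values, none of which are taken from this disclosure. Below threshold, drain current falls exponentially with gate overdrive, so a device whose gate couples more strongly to the channel (a smaller sub-threshold slope coefficient n) leaks less in the off state.

```python
import math

# Textbook first-order sub-threshold current model (illustrative, not from
# this disclosure):
#   I_sub = I0 * exp((Vgs - Vth) / (n * VT)) * (1 - exp(-Vds / VT))
# VT = kT/q is the thermal voltage; n is the sub-threshold slope
# coefficient, which decreases as gate control of the channel improves.
VT = 0.0259  # thermal voltage at ~300 K, in volts

def subthreshold_current(vgs, vth, n, vds, i0=1e-6):
    """Off-state drain current for the first-order model above (amperes)."""
    return i0 * math.exp((vgs - vth) / (n * VT)) * (1.0 - math.exp(-vds / VT))

# With the gate off (Vgs = 0 V), a device with better gate control
# (smaller n) leaks less than one with weaker gate control (larger n).
weak_control = subthreshold_current(vgs=0.0, vth=0.3, n=1.5, vds=0.5)
strong_control = subthreshold_current(vgs=0.0, vth=0.3, n=1.2, vds=0.5)
assert strong_control < weak_control
```

Here Vth, n, and I0 are hypothetical illustrative values; the disclosure's point is qualitative: keeping the channel region close to, and well controlled by, the active gate reduces this off-state current.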
Furthermore, in an aspect where the increase of the active gate width W7 matches a decrease of the dummy gate width W6, an isolation margin of the drain contact 250 relative to the active gate 232 and the dummy gate 242 (i.e., the difference between the distance D5 and the distance D4) is approximately half the gate width margin, i.e., the difference between the active gate width W7 and the dummy gate width W6. In particular, the increase of the active gate width W7 expands the active gate 232 equally towards the source contact 248 and the drain contact 250. Accordingly, the distance D4 between the active gate 232 and the drain contact 250 is reduced only by the expansion of the active gate width W7 towards the drain contact 250, i.e., by half of the increase in the active gate width W7.
[0037] Specifically, in a symmetrical active gate / dummy gate layout, such as the layout illustrated in Figure 1 for the FinFET cell 100, the active gate width W1 is limited by several factors. For example, the active gate width W1 of the active gate 104 is limited by the overall width W3 of the FinFET cell 100, which must accommodate the active gate 104 of the active gate width W1, the width W4 of the dummy gate 134, and the distance D3 between the active gate 104 and the dummy gate 134 needed to allow disposing of the source and drain epitaxial regions 108 and 110 in the substrate 112. Thus, gate control in the symmetrical active gate / dummy gate layout, such as the layout illustrated in Figure 1 for the FinFET cell 100, is limited by the maximum width that the active gate width W1 can have.
However, in the asymmetric active gate / dummy gate layout of the FinFET cell 200 of the present application, the active gate width W7 of the active gate 232 is formed larger than the dummy gate width W6 of the adjacent dummy gate 242, thus increasing gate control while the FinFET cell 200 has a width W8 that is approximately the same as the width W3 of the FinFET cell 100 illustrated in Figure 1.
[0038] In addition, having a reduced dummy gate width W6 allows the FinFET cell 200 to maintain the width W8 that is similar to the width W3 of the FinFET cell 100 illustrated in Figure 1, even when the FinFET cell 200 has an increased active gate width W7 of the active gate 232. In particular, in an aspect, the width W6 of the dummy gate 242 can be decreased by the same amount that the active gate width W7 is increased. This would provide for the distance D6 between the active gate 232 and the dummy gate 242 of the FinFET cell 200 to be similar to or approximately the same as the distance D3 between the active gate 104 and the dummy gate 134 of the FinFET cell 100 illustrated in Figure 1. This could also provide for the width W8 of the FinFET cell 200 to be similar to or approximately the same as the width W3 of the FinFET cell 100 illustrated in Figure 1. Reducing the width W6 of the dummy gate 242 can increase the distance D5 between the dummy gate 242 and the adjacent drain contact 250, thus reducing the risk of shorts between the dummy gate 242 and the drain contact 250. Accordingly, the FinFET cell 200 may be fabricated using similar fabrication methods used to fabricate the FinFET cell 100 illustrated in Figure 1.
[0039] In the exemplary aspect described above, the gate width margin was defined as at least two (2) nm, as an example. As a further example, the active gate width W7 was defined as approximately fifteen (15) nm and the dummy gate width W6 as approximately thirteen (13) nm, providing a gate width margin that is approximately two (2) nm.
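The dimensional bookkeeping in the preceding paragraphs can be checked with a short illustrative script. The nanometer values are the approximate example figures quoted in the text; the variable names are ours, not the disclosure's:

```python
# Illustrative check of the asymmetric gate-width arithmetic described above
# (values in nanometers, from the approximate examples in the text).
active_gate_width = 15.0  # W7, active gate 232
dummy_gate_width = 13.0   # W6, dummy gate 242

# The gate width margin is the difference between the two widths; the text
# calls for a margin of at least two (2) nm.
gate_width_margin = active_gate_width - dummy_gate_width
assert gate_width_margin >= 2.0

# When the dummy gate is narrowed by the same amount the active gate is
# widened, their combined width, and hence the cell width W8 and the
# gate-to-gate distance D6, are preserved relative to a hypothetical
# symmetric 14 nm / 14 nm layout.
symmetric_width = 14.0
assert active_gate_width + dummy_gate_width == 2.0 * symmetric_width

# Because the active gate expands equally toward both contacts, the drain
# contact's isolation margin (the difference between D5 and D4) is
# approximately half the gate width margin.
isolation_margin = gate_width_margin / 2.0
print(f"gate width margin: {gate_width_margin} nm")          # 2.0 nm
print(f"isolation margin (D5 - D4): {isolation_margin} nm")  # 1.0 nm
```

The same arithmetic holds for the 17 nm / 14 nm and 18 nm / 14 nm examples given next in the text, yielding margins of approximately 3 nm and 4 nm, respectively.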
In a further example, the active gate width W7 can be approximately seventeen (17) nm and the dummy gate width W6 can be approximately fourteen (14) nm, to provide a gate width margin that is approximately three (3) nm. In another aspect, the gate width margin can be at least four (4) nm, for example. Thus, the active gate width W7 can be approximately eighteen (18) nm and the dummy gate width W6 can be approximately fourteen (14) nm, to provide a gate width margin that is approximately four (4) nm, for example. Having a larger gate width margin provides increased gate control over an implementation with no gate width margin, because a larger active gate width W7 results in an increased electric field (not shown) over the channel region 236, and thus, increased control over the channel region 236. Furthermore, having a larger gate width margin provides decreased leakage current through the dummy gate 242 over an implementation with no gate width margin, because a narrower dummy gate width W6 results in an increased separation 260 between the dummy gate 242 and the adjacent drain contact 250, thus further isolating the FinFET 202 from the dummy gate 242, thereby decreasing a leakage current through the dummy gate 242.
[0040] A FinFET cell employing an adjacent asymmetric active gate / dummy gate width layout, such as the FinFET cell 200 in Figure 2, can be fabricated according to any fabrication processes desired. For example, Figure 3 is a flowchart illustrating an exemplary process 300 for fabricating the exemplary FinFET cell 200 employing the adjacent asymmetric active gate / dummy gate width layout in Figure 2. The steps in the process 300 are illustrated respectively in Figures 4A-4F. Figures 4A-4F will be referenced as the exemplary steps in the process 300 in Figure 3 are described below.
[0041] A first exemplary step to fabricate the FinFET cell 200 illustrated in Figure 2 includes forming the SDB 228 in the substrate 204 (block 302 in Figure 3).
In this regard, Figure 4A illustrates a stage 400(1) where the SDB 228 has been formed in the substrate 204. For example, forming the SDB 228 in the substrate 204 can be performed by etching a recess 402 in the substrate 204 and depositing an isolation material, such as an oxide, for example, to form the SDB 228 as the shallow trench isolation (STI) oxide 240. Forming the SDB 228 may further include polishing the SDB 228 using chemical-mechanical planarization (CMP), for example, to form a top surface 404 of the SDB 228 flush with the top surface 208 of the substrate 204.
[0042] A second exemplary step to fabricate the FinFET cell 200 illustrated in Figure 2 includes forming the active gate 232 of the active gate width W7 on the substrate 204 (block 304 in Figure 3). A third exemplary step to fabricate the FinFET cell 200 illustrated in Figure 2 includes forming the dummy gate 242 of a dummy gate width W6 above the SDB 228 and adjacent to the active gate 232. The dummy gate width W6 is formed smaller than the active gate width W7 by a gate width margin to form an asymmetric gate width layout (block 306 in Figure 3). In this regard, Figure 4B illustrates a stage 400(2) where the active gate 232 of an active gate width W7 has been formed on the substrate 204. The stage 400(2) further illustrates where the dummy gate 242 of a dummy gate width W6 has been formed above the SDB 228. Forming the active gate 232 and the dummy gate 242 can be performed by disposing a polysilicon (PolySi) layer and a hard mask (HM) layer, and etching the polysilicon layer and the hard mask layer. Forming the active gate 232 and the dummy gate 242 can further include depositing spacer layers 406 and 408 to form a gate electrode pillar 410, and depositing spacer layers 412 and 414 to form a gate electrode pillar 416.
The gate electrode pillars 410 and 416 correspond to the active gate 232 and the dummy gate 242, respectively.
[0043] A fourth exemplary step to fabricate the FinFET cell 200 illustrated in Figure 2 includes forming the source epitaxial region 214 of the FinFET 202 in the substrate 204, adjacent to the active gate 232, and implanting the source 210 in the source epitaxial region 214 at the depth DP1 from the top surface 208 of the substrate 204 (block 308 in Figure 3). A fifth exemplary step to fabricate the FinFET cell 200 illustrated in Figure 2 includes forming the drain epitaxial region 222 in the substrate 204, adjacent to the SDB 228, between the active gate 232 and the dummy gate 242, wherein a portion of the drain epitaxial region 222 is in contact with the SDB 228, and implanting the drain 218 in the drain epitaxial region 222 at the depth DP2 from the top surface 208 of the substrate 204 that is greater than the depth DP1 (block 310 in Figure 3). In this regard, Figure 4C illustrates a stage 400(3) where the etching of a recess 418 and a recess 420 in the substrate 204 for depositing the source epitaxial region 214 and the drain epitaxial region 222, respectively, has been performed.
[0044] Furthermore, Figure 4D illustrates a stage 400(4) where depositing of the source epitaxial region 214 and the drain epitaxial region 222 on the recesses 418 and 420, respectively, has been performed. The stage 400(4) illustrates in particular that the drain epitaxial region 222 grows unevenly. This uneven growth is due to a facet mismatch between a facet 224 of the drain epitaxial region 222 and a facet 226 of the SDB 228. This facet 224, 226 mismatch hinders the growth of the drain epitaxial region 222 near the facet 226 of the SDB 228. Accordingly, growth of the drain epitaxial region 222 will be slower, and thus lower, near the facet 226 of the SDB 228 than the growth of the drain epitaxial region 222 away from the facet 226 of the SDB 228.
Therefore, the drain epitaxial region 222 has an uneven top surface 230 that is lower near the SDB 228 and higher near the active gate 232.
[0045] Furthermore, Figure 4E illustrates a stage 400(5) where implanting of the source 210 and the drain 218 in the source epitaxial region 214 and the drain epitaxial region 222, respectively, has been performed. Figure 4E illustrates that the source 210 is implanted at a depth DP1 from the top surface 208 of the substrate 204. Figure 4E further illustrates that the drain 218 is implanted at a depth DP2 from the top surface 208 of the substrate 204 that is greater than the depth DP1 by a source/drain implant margin 256. These implantations can be performed by, for example, ion implantation. The deeper implantation of the drain 218 is the result of the uneven growth of the drain epitaxial region 222. In particular, implantation is performed based on, for example, a time-based process that is performed equally on the source epitaxial region 214 and the drain epitaxial region 222. The uneven growth of the drain epitaxial region 222 causes the top surface 230 to be lower, in parts, than a top surface 422 of the source epitaxial region 214, which causes the implantation to produce a drain 218 that is deeper relative to the source 210.
[0046] A sixth exemplary step to fabricate the FinFET cell 200 illustrated in Figure 2 includes forming a channel region 236 of the FinFET 202 in the substrate 204 between the source 210 and the drain 218 (block 312 in Figure 3).
In this regard, the stage 400(5) illustrated in Figure 4E shows the channel region 236, which is formed in the substrate 204 between the source 210 and the drain 218, and is activated, for example, when a voltage (not shown) is applied to the active gate 232.
[0047] A seventh exemplary step to fabricate the FinFET cell 200 illustrated in Figure 2 includes disposing a source contact 248 on the source epitaxial region 214 adjacent to the active gate 232, and disposing a drain contact 250 on the drain epitaxial region 222 between the active gate 232 and the dummy gate 242, the drain contact 250 isolated from the adjacent active gate 232 by the distance D4 and isolated from the adjacent dummy gate 242 by the distance D5. In this regard, Figure 4F illustrates a stage 400(6) of the seventh step in a cross-section view. The stage 400(6) illustrates the source contact 248 disposed on the source epitaxial region 214 adjacent to the active gate 232. The stage 400(6) further illustrates the drain contact 250 disposed on the drain epitaxial region 222 between the active gate 232 and the dummy gate 242. As explained earlier, the increased active gate width W7 and the reduced dummy gate width W6 result in the distance D5 being greater than the distance D4. This enhances the isolation of the FinFET 202 from the dummy gate 242, thereby decreasing a leakage current through the dummy gate 242.
[0048] In other aspects, an exemplary FinFET cell that includes an exemplary FinFET employing an adjacent asymmetric active gate / dummy gate width layout, which can promote increased gate control for reducing leakage current, can also include a means for providing a substrate. An example of a means for providing a substrate is shown as the substrate 204 in Figures 2 and 4A-4F.
The FinFET cell can also include a means for providing a FET device comprising a means for providing a source disposed in the means for providing the substrate, a means for providing a drain disposed in the means for providing the substrate, and a means for providing an active gate of an active gate width formed between the means for providing the source and the means for providing the drain. An example of such means for providing a FET device is shown as the FinFET 202 in Figure 2. An example of a means for providing an active gate is shown as the active gate 232 illustrated in Figures 2 and 4B-4F. The FinFET cell can also include a means for providing an isolation structure disposed in the means for providing the substrate, comprising a means for providing a diffusion break disposed in the means for providing the substrate adjacent to one of the means for providing the source and the means for providing the drain of the means for providing the FET device. The means for providing the isolation structure further comprises a means for providing a dummy gate of a dummy gate width formed above the means for providing the diffusion break adjacent to the means for providing the active gate. An example of such a means for providing an isolation structure is shown as the isolation structure 238 illustrated in Figure 2.
[0049] The FET devices employing an adjacent asymmetric active gate / dummy gate width layout according to aspects disclosed herein may be provided in or integrated into any processor-based device.
Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a smart phone, a tablet, a phablet, a server, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, and an automobile.
[0050] In this regard, Figure 5 is a block diagram of an exemplary processor-based system 500 that can include the exemplary FinFET cell 200 illustrated in Figure 2. In this example, the processor-based system 500 includes one or more CPUs 502, each including one or more processors 504. The processor-based system 500 may be provided as a system-on-a-chip (SoC) 506. The CPU(s) 502 may have cache memory 508 coupled to the processor(s) 504 for rapid access to temporarily stored data. The CPU(s) 502 is coupled to a system bus 510 and can intercouple master and slave devices included in the processor-based system 500. As is well known, the CPU(s) 502 communicates with these other devices by exchanging address, control, and data information over the system bus 510. For example, the CPU(s) 502 can communicate bus transaction requests to a memory controller 512 in a memory system 514 as an example of a slave device. Although not illustrated in Figure 5, multiple system buses 510 could be provided, wherein each system bus 510 constitutes a different fabric. In this example, the memory controller 512 is configured to provide memory access requests to a memory array 516 in the memory system 514.
[0051] Other devices can be connected to the system bus 510.
As illustrated in Figure 5, these devices can include the memory system 514, one or more input devices 518, one or more output devices 520, one or more network interface devices 522, and one or more display controllers 524, as examples. The input device(s) 518 can include any type of input device, including but not limited to input keys, switches, voice processors, etc. The output device(s) 520 can include any type of output device, including but not limited to audio, video, other visual indicators, etc. The network interface device(s) 522 can be any devices configured to allow exchange of data to and from a network 526. The network 526 can be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface device(s) 522 can be configured to support any type of communications protocol desired.
[0052] The CPU(s) 502 may also be configured to access the display controller(s) 524 over the system bus 510 to control information sent to one or more displays 528. The display controller(s) 524 sends information to the display(s) 528 to be displayed via one or more video processors 530, which process the information to be displayed into a format suitable for the display(s) 528. The display(s) 528 can include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, etc.
[0053] Figure 6 illustrates an example of a wireless communications device 600 which can include RF components in which a FinFET cell that includes an exemplary FinFET employing an adjacent asymmetric active gate / dummy gate width layout, including but not limited to the FinFET cell 200 in Figure 2, may be included.
In this regard, the wireless communications device 600, including a FinFET cell that includes an exemplary FinFET employing an adjacent asymmetric active gate / dummy gate width layout, may be provided in an integrated circuit (IC) 606. The wireless communications device 600 may include or be provided in any of the above referenced devices, as examples. As shown in Figure 6, the wireless communications device 600 includes a transceiver 604 and a data processor 608. The data processor 608 may include a memory (not shown) to store data and program codes. The transceiver 604 includes a transmitter 610 and a receiver 612 that support bi-directional communication. In general, the wireless communications device 600 may include any number of transmitters and/or receivers for any number of communication systems and frequency bands. All or a portion of the transceiver 604 may be implemented on one or more analog ICs, RF ICs (RFICs), mixed-signal ICs, etc.

[0054] A transmitter 610 or a receiver 612 may be implemented with a superheterodyne architecture or a direct-conversion architecture. In the superheterodyne architecture, a signal is frequency-converted between RF and baseband in multiple stages, e.g., from RF to an intermediate frequency (IF) in one stage, and then from IF to baseband in another stage for a receiver 612. In the direct-conversion architecture, a signal is frequency-converted between RF and baseband in one stage. The superheterodyne and direct-conversion architectures may use different circuit blocks and/or have different requirements. In the wireless communications device 600 in Figure 6, the transmitter 610 and the receiver 612 are implemented with the direct-conversion architecture.

[0055] In the transmit path, the data processor 608 processes data to be transmitted and provides I and Q analog output signals to the transmitter 610.
In the exemplary wireless communications device 600, the data processor 608 includes digital-to-analog converters (DACs) 614(1) and 614(2) for converting digital signals generated by the data processor 608 into the I and Q analog output signals, e.g., I and Q output currents, for further processing.

[0056] Within the transmitter 610, lowpass filters 616(1), 616(2) filter the I and Q analog output signals, respectively, to remove undesired images caused by the prior digital-to-analog conversion. Amplifiers (AMP) 618(1), 618(2) amplify the signals from the lowpass filters 616(1), 616(2), respectively, and provide I and Q baseband signals. An upconverter 620 upconverts the I and Q baseband signals with I and Q transmit (TX) local oscillator (LO) signals from a TX LO signal generator 622 through mixers 624(1), 624(2) to provide an upconverted signal 626. A filter 628 filters the upconverted signal 626 to remove undesired images caused by the frequency upconversion as well as noise in a receive frequency band. A power amplifier (PA) 630 amplifies the upconverted signal 626 from the filter 628 to obtain the desired output power level and provides a transmit RF signal. The transmit RF signal is routed through a duplexer or switch 632 and transmitted via an antenna 634.

[0057] In the receive path, the antenna 634 receives signals transmitted by base stations and provides a received RF signal, which is routed through the duplexer or switch 632 and provided to a low noise amplifier (LNA) 636. The duplexer or switch 632 is designed to operate with a specific RX-to-TX duplexer frequency separation, such that RX signals are isolated from TX signals. The received RF signal is amplified by the LNA 636 and filtered by a filter 638 to obtain a desired RF input signal. Downconversion mixers 640(1), 640(2) mix an output of the filter 638 with I and Q receive (RX) LO signals (i.e., LO_I and LO_Q) from an RX LO signal generator 642 to generate I and Q baseband signals.
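The quadrature mixing described for the transmit and receive paths can be sketched numerically. The following Python model is purely illustrative: the sample rate, baseband tone, LO frequency, and the moving-average stand-in for the lowpass filters are all assumptions, not part of the disclosure.

```python
import numpy as np

# Illustrative model of direct-conversion I/Q mixing. All frequencies and
# signal names are assumed for illustration; they are not from the disclosure.
fs = 1_000_000     # sample rate, Hz
f_lo = 100_000     # LO frequency, Hz (TX and RX LOs assumed identical here)
f_bb = 5_000       # baseband tone, Hz
t = np.arange(1000) / fs

# Transmit path: the mixers upconvert I and Q baseband signals with
# quadrature LO signals: s(t) = I(t)*cos(w_lo*t) - Q(t)*sin(w_lo*t)
i_tx = np.cos(2 * np.pi * f_bb * t)
q_tx = np.sin(2 * np.pi * f_bb * t)
rf = i_tx * np.cos(2 * np.pi * f_lo * t) - q_tx * np.sin(2 * np.pi * f_lo * t)

# For this I/Q pair the RF signal is a single tone at f_lo + f_bb = 105 kHz
freqs = np.fft.rfftfreq(len(rf), 1 / fs)
peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(rf)))]

# Receive path: mixing the RF signal with quadrature RX LO signals yields the
# baseband terms plus images near 2*f_lo, which a lowpass filter removes
# (a crude 20-tap moving average stands in for the real filters).
kernel = np.ones(20) / 20
i_rx = np.convolve(2 * rf * np.cos(2 * np.pi * f_lo * t), kernel, mode="same")
q_rx = np.convolve(-2 * rf * np.sin(2 * np.pi * f_lo * t), kernel, mode="same")

# Away from the filter edges, the recovered I/Q closely track the transmitted I/Q
err = max(np.max(np.abs(i_rx[100:-100] - i_tx[100:-100])),
          np.max(np.abs(q_rx[100:-100] - q_tx[100:-100])))
print(peak_hz, err)
```

The residual error comes from the crude filter's passband droop and image leakage; a real receiver would use sharper analog lowpass filters, as described for the receive path above.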
The I and Q baseband signals are amplified by amplifiers (AMP) 644(1), 644(2) and further filtered by lowpass filters 646(1), 646(2) to obtain I and Q analog input signals, which are provided to the data processor 608. In this example, the data processor 608 includes analog-to-digital converters (ADCs) 648(1), 648(2) for converting the analog input signals into digital signals to be further processed by the data processor 608.

[0058] In the wireless communications device 600 in Figure 6, the TX LO signal generator 622 generates the I and Q TX LO signals used for frequency upconversion, while the RX LO signal generator 642 generates the I and Q RX LO signals used for frequency downconversion. Each LO signal is a periodic signal with a particular fundamental frequency. A transmit (TX) phase-locked loop (PLL) circuit 650 receives timing information from the data processor 608 and generates a control signal used to adjust the frequency and/or phase of the TX LO signals from the TX LO signal generator 622. Similarly, a receive (RX) phase-locked loop (PLL) circuit 652 receives timing information from the data processor 608 and generates a control signal used to adjust the frequency and/or phase of the RX LO signals from the RX LO signal generator 642.

[0059] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. The master and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired.
To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0060] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0061] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.

[0062] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0063] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure.
Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
A semiconductor device assembly is provided. The assembly includes a first semiconductor device including a plurality of electrical contacts on an upper surface thereof; a monolithic silicon structure having a lower surface in contact with the upper surface of the first semiconductor device, the monolithic silicon structure including a cavity extending from the lower surface completely through a body of the monolithic silicon structure to a top surface of the monolithic silicon structure; and a second semiconductor device disposed in the cavity, the second semiconductor device including a plurality of interconnects, each operatively coupled to a corresponding one of the plurality of electrical contacts.
1. A semiconductor device assembly, comprising:
a first semiconductor device including a plurality of electrical contacts on an upper surface thereof;
a monolithic silicon structure having a lower surface in contact with the upper surface of the first semiconductor device, the monolithic silicon structure including a cavity extending from the lower surface completely through a body of the monolithic silicon structure to a top surface of the monolithic silicon structure; and
a second semiconductor device disposed in the cavity, the second semiconductor device including a plurality of interconnects, each operatively coupled to a corresponding one of the plurality of electrical contacts.

2. The semiconductor device assembly of claim 1, wherein the monolithic silicon structure has a plan area corresponding in size and shape to a plan area of the first semiconductor device.

3. The semiconductor device assembly of claim 1, wherein the upper surface of the first semiconductor device includes a plurality of thermal contacts in direct contact with the lower surface of the monolithic silicon structure.

4. The semiconductor device assembly of claim 3, wherein the monolithic silicon structure includes a plurality of metallic heat extraction structures in direct contact with the plurality of thermal contacts and extending completely through the body of the monolithic silicon structure.

5. The semiconductor device assembly of claim 1, wherein the lower surface of the monolithic silicon structure is bonded to the upper surface of the first semiconductor device by a dielectric bond.

6. The semiconductor device assembly of claim 1, wherein the plurality of interconnects is a first plurality of interconnects, the cavity is a first cavity, the monolithic structure includes a second cavity extending from the lower surface completely through the body of the monolithic silicon structure to the top surface of the monolithic silicon structure, and further comprising a third semiconductor device disposed in the second cavity and including a second plurality of interconnects, each operatively coupled to a corresponding one of the plurality of electrical contacts.

7. The semiconductor device assembly of claim 1, wherein the second semiconductor device includes a vertical stack of electrically coupled memory devices.

8. The semiconductor device assembly of claim 1, wherein one or more of the upper surface of the first semiconductor device and the lower surface of the monolithic silicon structure include a redistribution layer.

9. A semiconductor device assembly, comprising:
a first semiconductor device including an upper surface;
a monolithic silicon structure having a lower surface in contact with the upper surface of the first semiconductor device, the monolithic silicon structure including a cavity extending from the lower surface completely through a body of the monolithic silicon structure to a top surface of the monolithic silicon structure; and
a second semiconductor device directly coupled to the first semiconductor device and disposed in the cavity such that a back surface of the second semiconductor device is generally coplanar with the top surface of the monolithic silicon structure.

10. The semiconductor device assembly of claim 9, further comprising a third semiconductor device disposed on and electrically coupled to the back surface of the second semiconductor device.

11. The semiconductor device assembly of claim 10, wherein the third semiconductor device is: encapsulated by a mold material; or disposed in a second cavity of a second monolithic silicon structure disposed over the first monolithic silicon structure.

12. A semiconductor device assembly, comprising:
a first semiconductor device including an upper surface;
a second semiconductor device directly carried by an upper surface of the first semiconductor device; and
a monolithic silicon structure having a lower surface in contact with the upper surface of the first semiconductor device, the monolithic silicon structure including a cavity extending from the lower surface completely through a body of the monolithic silicon structure to a top surface of the monolithic silicon structure,
wherein the monolithic silicon structure completely surrounds a plurality of sidewalls of the second semiconductor device.

13. The semiconductor device assembly of claim 9 or the semiconductor device assembly of claim 12, wherein the monolithic silicon structure includes a plurality of metallic heat extraction structures laterally spaced apart from the cavity and extending completely through the body of the monolithic silicon structure.

14. The semiconductor device assembly of claim 4 or the semiconductor device assembly of claim 13, wherein each of the plurality of metallic heat extraction structures comprises a column or fin of a metal material.

15. The semiconductor device assembly of claim 13, wherein each of the plurality of metallic heat extraction structures has an exposed upper surface that is generally coplanar with the top surface of the monolithic silicon structure.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application contains subject matter related to concurrently-filed U.S. Patent Applications, titled "SEMICONDUCTOR DEVICE ASSEMBLIES INCLUDING MONOLITHIC SILICON STRUCTURES FOR THERMAL DISSIPATION AND METHODS OF MAKING THE SAME." The related applications, the disclosures of which are incorporated by reference herein, are assigned to Micron Technology, Inc., and are identified by attorney docket numbers 010829-9679.US00 and 010829-9680.US00.

TECHNICAL FIELD

The present disclosure generally relates to semiconductor device assemblies, and more particularly relates to semiconductor device assemblies including monolithic silicon structures for thermal dissipation and methods of making the same.

BACKGROUND

Microelectronic devices generally have a die (i.e., a chip) that includes integrated circuitry with a high density of very small components. Typically, dies include an array of very small bond pads electrically coupled to the integrated circuitry. The bond pads are external electrical contacts through which the supply voltage, signals, etc., are transmitted to and from the integrated circuitry. After dies are formed, they are "packaged" to couple the bond pads to a larger array of electrical terminals that can be more easily coupled to the various power supply lines, signal lines, and ground lines.
Conventional processes for packaging dies include electrically coupling the bond pads on the dies to an array of leads, ball pads, or other types of electrical terminals, and encapsulating the dies to protect them from environmental factors (e.g., moisture, particulates, static electricity, and physical impact).

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a simplified schematic cross-sectional view of a monolithic silicon structure for thermal dissipation in accordance with one embodiment of the present disclosure.

Figures 2 through 10 are simplified schematic cross-sectional views of semiconductor device assemblies at various stages in a process of fabrication in accordance with embodiments of the present disclosure.

Figures 11 through 14 are simplified schematic cross-sectional views of monolithic silicon structures for thermal dissipation at various stages in a process of fabrication in accordance with embodiments of the present disclosure.

Figures 15 through 20 are simplified schematic cross-sectional views of semiconductor device assemblies at various stages in a process of fabrication in accordance with embodiments of the present disclosure.

Figures 21 through 25 are simplified schematic cross-sectional views of monolithic silicon structures for thermal dissipation at various stages in a process of fabrication in accordance with embodiments of the present disclosure.

Figure 26 is a simplified schematic cross-sectional view of a semiconductor device assembly in accordance with one embodiment of the present disclosure.

Figure 27 is a schematic view showing a system that includes a semiconductor device assembly configured in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION

Specific details of several embodiments of semiconductor devices, and associated systems and methods, are described below.
A person skilled in the relevant art will recognize that suitable stages of the methods described herein can be performed at the wafer level or at the die level. Therefore, depending upon the context in which it is used, the term "substrate" can refer to a wafer-level substrate or to a singulated, die-level substrate. Furthermore, unless the context indicates otherwise, structures disclosed herein can be formed using conventional semiconductor-manufacturing techniques. Materials can be deposited, for example, using chemical vapor deposition, physical vapor deposition, atomic layer deposition, plating, electroless plating, spin coating, and/or other suitable techniques. Similarly, materials can be removed, for example, using plasma etching, wet etching, chemical-mechanical planarization, or other suitable techniques.

Some semiconductor device assemblies include structures configured to assist in the extraction of heat from one or more semiconductor devices in the assembly. These structures are frequently formed from metals with high thermal conductivity, such as copper, silver, aluminum, or alloys thereof. Because the coefficient of thermal expansion (CTE) of these metals may vary greatly from the CTE of the semiconductor devices in the assembly, delamination, cracking, or other types of mechanical damage due to thermal cycling can pose a challenge to these assemblies.
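The scale of the CTE mismatch can be seen with a back-of-envelope calculation. The sketch below uses typical room-temperature CTE values for copper and silicon and an assumed operating temperature swing; none of these figures come from the disclosure.

```python
# Back-of-envelope thermal-mismatch strain: strain = delta_CTE * delta_T.
# CTE values are typical textbook figures (assumptions, not from the
# disclosure): copper ~16.5 ppm/K, silicon ~2.6 ppm/K.
cte_cu = 16.5e-6   # 1/K
cte_si = 2.6e-6    # 1/K
delta_t = 80.0     # K, an assumed operating temperature swing

strain_cu_si = (cte_cu - cte_si) * delta_t   # copper spreader on a silicon die
strain_si_si = (cte_si - cte_si) * delta_t   # monolithic silicon on silicon

print(f"Cu-on-Si mismatch strain: {strain_cu_si:.2e}")
print(f"Si-on-Si mismatch strain: {strain_si_si:.2e}")
```

The silicon-on-silicon case is zero by construction, which is the essence of why a monolithic silicon thermal structure avoids the thermal-cycling damage modes described above.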
Moreover, the fabrication techniques used to form structures from these metals, and to shape them to accommodate additional devices in the assembly, require different tooling than is used for most other assembly processes and can greatly increase the expense of the assemblies in which they are integrated.

To address these drawbacks and others, various embodiments of the present application provide semiconductor device assemblies in which a monolithic silicon structure is provided for thermal dissipation between the surface of a lower die in a multi-die structure and an outer (e.g., upper) surface of the assembly. The monolithic silicon structure can include cavities extending partially or completely therethrough, in which additional semiconductor devices (e.g., dies, die stacks, packages, assemblies, etc.) can be provided. The additional semiconductor devices can be electrically coupled to the same surface of the lower die to which the monolithic silicon structure is attached (e.g., by oxide-oxide bonding, hybrid bonding, adhesive, interconnects, or the like). The monolithic silicon structure, by virtue of its high thermal conductivity and the close match of its coefficient of thermal expansion to that of the lower die, provides improved thermal management without the risks of damage associated with other thermal management structures.

Figure 1 is a simplified schematic partial cross-sectional view of a monolithic silicon structure 100 in accordance with an embodiment of the present disclosure. Monolithic silicon structure 100 includes one or more cavities (two are illustrated) extending at least part way through the thickness (e.g., into the body) of the monolithic silicon structure 100. The structure 100 can be formed, e.g., from a blank silicon wafer in which cavities have been formed (e.g., by masking and directionally etching, laser ablating, etc.).
The structure 100 can be kept at a wafer level for subsequent wafer-level processing steps, or can optionally be singulated prior to subsequent processing steps.

In accordance with one aspect of the present disclosure, monolithic silicon structure 100 can be pre-populated with semiconductor devices in the cavities thereof prior to integration into a larger semiconductor device assembly. Figure 2 is a simplified schematic cross-sectional view of a monolithic silicon structure 100 in which several semiconductor devices have been disposed in accordance with one embodiment of the present disclosure. As can be seen with reference to Figure 2, semiconductor devices 102 (e.g., individual dies, vertical stacks of interconnected dice, device packages, device assemblies, etc.) have been disposed into the cavities of monolithic silicon structure 100. Each semiconductor device 102 may be secured in the corresponding cavity by an adhesive (e.g., a thermal interface material) between the back surface of the semiconductor device and the facing interior surface of the cavity. The cavities may be sized such that small gaps 103 (e.g., optionally filled with an adhesive, an underfill, an encapsulant, or the like) remain surrounding the semiconductor devices 102 to ease the process of disposing them in the cavities. In other embodiments, gaps 103 may be minimized or even eliminated through careful matching of the exterior dimensions of the semiconductor devices 102 and the cavities. To facilitate the integration of the semiconductor devices 102 and the monolithic silicon structure 100 into a larger assembly, a redistribution layer 104 can be formed, including one or more thermal pads 105 (e.g., comprising copper, silver, aluminum, or other metals compatible with a metal-metal bonding operation) aligned with the monolithic silicon structure 100 and one or more interconnects 106 (e.g., pads, pillars, UBMs, pins, solder balls, etc.) operatively coupled to the semiconductor devices 102.
In other embodiments, the redistribution layer can be omitted and semiconductor devices 102 can be provided with interconnects prior to population into the monolithic silicon structure 100 (e.g., coplanar with the bonding surface of the monolithic silicon structure 100).

Turning to Figure 3, the populated monolithic silicon structure 100 is illustrated being aligned in preparation for bonding to another semiconductor device (e.g., the aforementioned lower semiconductor device in the assembly), in accordance with one embodiment of the present disclosure. The lower semiconductor device 110 includes a dielectric layer 109 in which are disposed electrical contacts 107 and thermal contacts 108. The populated monolithic silicon structure 100 can be bonded to the lower semiconductor device 110 such that the thermal pads 105 are coupled to the thermal contacts 108 and the interconnects 106 are coupled to the electrical contacts 107 to form semiconductor device assembly 400, as illustrated in accordance with one embodiment of the disclosure in Figure 4.
The bonding operation can be a hybrid bonding operation, in which a dielectric-dielectric bond (e.g., an oxide-oxide bond) is formed between the dielectric of redistribution layer 104 and the dielectric layer 109 formed over the lower semiconductor device 110, and metal-metal bonds are formed between corresponding ones of the thermal pads 105 and the thermal contacts 108, and between corresponding ones of the interconnects 106 and the electrical contacts 107.

Although in the foregoing example embodiments semiconductor device assembly 400 has been illustrated as formed through a hybrid bonding operation, in other embodiments the bond between a populated monolithic silicon structure and a lower semiconductor device can be achieved with adhesive layers (e.g., thermal interface material (TIM)), solder interconnects with or without underfill, or any other bonding method well known to those skilled in the art.

In accordance with an additional aspect of the present disclosure, semiconductor device assembly 400 can optionally be subjected to further processing to remove the portions of the monolithic silicon structure 100 overlying the cavities in which semiconductor devices 102 have been disposed, in order to reduce a height of the assembly and/or to provide additional connectivity options. In this regard, Figure 5 is a simplified schematic cross-sectional view of a semiconductor device assembly 500, in which an assembly like that illustrated in Figure 4 has been subjected to a backside thinning operation (e.g., by chemical-mechanical polishing (CMP), grinding, etc.)
to remove portions of material from the monolithic silicon structure 100 in order to expose the back surfaces of semiconductor devices 102 and to reduce the overall height of the assembly 500.

In an embodiment in which semiconductor devices 102 include backside contacts for further connectivity, removing the portions of material from the monolithic silicon structure 100 covering the back surfaces of semiconductor devices 102 can permit additional devices to be integrated into the semiconductor device assembly. One such arrangement is shown in Figure 6, in which is illustrated a simplified schematic cross-sectional view of a semiconductor device assembly 600. As can be seen with reference to Figure 6, an assembly like that illustrated in Figure 5 has had additional semiconductor devices 111 (e.g., individual dies, vertical stacks of interconnected dice, device packages, device assemblies, etc.) connected to the exposed backside contacts of semiconductor devices 102 (e.g., through traditional flip-chip interconnections, solder ball arrays, hybrid bonding, etc.). The additional semiconductor devices 111 can then be encapsulated by a layer of mold material 112 to provide mechanical protection thereto.

Alternatively, rather than individually connecting additional semiconductor devices to the exposed backside contacts of semiconductor devices 102, as illustrated in Figure 6, in another embodiment one or more additional pre-populated monolithic silicon structures (e.g., like that illustrated in Figure 2) can be bonded to the semiconductor assembly 500 illustrated in Figure 5 to provide an assembly with a high density of devices while retaining good thermal performance.
One such assembly is shown in Figure 7, in which is illustrated a simplified schematic cross-sectional view of a semiconductor device assembly 700, in which an assembly like that illustrated in Figure 5 has had an additional monolithic silicon structure 113, populated with semiconductor devices, bonded thereto.

As one of skill in the art will readily appreciate, the processes illustrated in Figures 5 and 7 can be iteratively repeated, such that an additional populated monolithic silicon structure can itself be subjected to another backside thinning operation to expose the backside contacts of the semiconductor devices therein for bonding to yet another populated monolithic silicon structure, in accordance with one aspect of the present disclosure.

Alternatively or additionally, rather than a backside thinning operation which completely removes the material of a monolithic silicon structure covering the back surfaces of the semiconductor devices populated in cavities thereof, in another embodiment that material can merely be thinned sufficiently to permit the formation of vias (e.g., through-silicon vias (TSVs)) through the thinned material to connect to the backside contacts of the semiconductor devices. This may be more readily understood with reference to Figure 8, in which is shown an assembly like that of Figure 4 that has been subjected to a backside thinning operation which removed a portion of the material covering the back surfaces of the semiconductor devices in the cavities, and has been further subjected to a TSV formation operation (e.g., forming openings through the silicon material, passivating the openings, removing the passivation from the bottom of the openings to expose backside contacts, plating a conductor into the openings, etc.)
providing TSVs 114 extending through the thinned material to contact backside contacts of the semiconductor devices to facilitate further connectivity.

Turning to Figure 9, a simplified schematic cross-sectional view of a semiconductor device assembly 900 is illustrated, in which an assembly like that shown in Figure 8 has had additional semiconductor devices 111 (e.g., individual dies, vertical stacks of interconnected dice, device packages, device assemblies, etc.) connected to the TSVs 114 extending through the monolithic silicon structure 100 to semiconductor devices 102 (e.g., through traditional flip-chip interconnections, solder ball arrays, hybrid bonding, etc.). The additional semiconductor devices 111 can then be encapsulated by a layer of mold material 112 to provide mechanical protection thereto, as described in greater detail above with reference to Figure 6.

Alternatively, rather than individually connecting additional semiconductor devices to the TSVs 114 as illustrated in Figure 9, in another embodiment one or more additional pre-populated monolithic silicon structures (e.g., like that illustrated in Figure 2) can be bonded to the semiconductor assembly illustrated in Figure 8 to provide an assembly with a high density of devices while retaining good thermal performance. One such assembly is shown in Figure 10, in which is illustrated a simplified schematic cross-sectional view of a semiconductor device assembly 1000, in which an assembly like that illustrated in Figure 8 has had an additional monolithic silicon structure 113, populated with semiconductor devices, bonded thereto.

As set forth above, a monolithic silicon structure can be fabricated from a blank silicon wafer via traditional etching techniques for forming openings or cavities in silicon.
Alternatively or additionally, methods for fabricating monolithic silicon structures can include highly-controllable and high-speed etching processes as set forth in greater detail below, in accordance with various embodiments of the present disclosure. Turning to Figure 11, a precursor structure from which a monolithic silicon structure will be formed is shown in a simplified partial cross-sectional view at a step in the formation process in accordance with one embodiment of the present disclosure. The precursor structure includes a silicon wafer 1100 on which has been formed a passivation layer 1101 (e.g., a dielectric material) in which are formed one or more thermal pads 1102. A mask layer 1103 is formed over the passivation layer 1101, with a pattern corresponding to the cavities to be formed in the silicon wafer 1100. More particularly, the mask layer 1103 includes a pattern of small openings (e.g., corresponding to narrow columnar or fin-like structures) that overlie a region in the silicon wafer 1100 where the cavities are to be formed. As can be seen with reference to Figure 12, the small openings 1104 can be etched at least partially into a thickness of the silicon wafer 1100 to remove some of the material from where the cavities are to be formed. An advantage of initially etching only this smaller amount of material, rather than the entire cavity, is that the directional etching operation can be completed more quickly than if the mask opening corresponded to the full size of the eventual cavity opening. Having anisotropically etched these "slivers" of material out of the silicon wafer 1100, a subsequent isotropic (e.g., wet) etch operation can be performed to remove the remaining material from the silicon wafer 1100 where the cavities are to be formed.
The result of such an operation is illustrated in Figure 13, which shows cavities 1105 having been formed by this two-step anisotropic and isotropic etching process in accordance with one embodiment of the present disclosure. After removing the remains of mask layer 1103 (e.g., via a chemical and/or mechanical removal process), as shown in Figure 14, monolithic silicon structure 1400, with included thermal pads 1102 and cavities 1105, is ready for the processes previously described in greater detail above with reference to Figures 2 through 10. As an alternative to pre-populating a monolithic silicon structure like those of Figures 1 or 14 with semiconductor devices before attaching the monolithic silicon structure to a lower semiconductor device in an assembly, some embodiments of the disclosure can involve attaching a monolithic silicon structure to a semiconductor device, backside thinning the monolithic silicon structure to reveal the cavities therein, and subsequently disposing semiconductor devices inside the cavities. One such approach to forming a semiconductor device assembly is shown at various stages in the process in Figures 15 to 20, according to various embodiments of the present disclosure. Turning to Figure 15, the monolithic silicon structure 1400 of Figure 14 is shown after having been bonded to a lower semiconductor device 1401 in accordance with one aspect of the disclosure. In this regard, monolithic silicon structure 1400 is bonded to the lower semiconductor device 1401 such that the thermal pads 1102 are coupled to thermal contacts 1402 of the lower semiconductor device 1401.
The bonding operation can be a hybrid bonding operation, in which a dielectric-dielectric bond (e.g., an oxide-oxide bond) is formed between the dielectric 1101 of the monolithic silicon structure and a dielectric layer 1403 formed over the lower semiconductor device 1401, and metal-metal bonds are formed between corresponding ones of the thermal pads 1102 and the thermal contacts 1402. The monolithic silicon structure 1400 can, after bonding to the lower semiconductor device 1401, be subjected to a backside thinning operation (e.g., by chemical-mechanical polishing (CMP), grinding, etc.) to remove portions of material from the monolithic silicon structure 1400 in order to expose the cavities 1105, as illustrated in Figure 16. With the cavities 1105 thus opened, semiconductor devices (e.g., individual dies, vertical stacks of interconnected dice, device packages, device assemblies, etc.) 1701 can be disposed in the cavities 1105, and an encapsulant (e.g., mold material) 1702 can be disposed over (and optionally around, depending upon the relative sizes of the semiconductor devices 1701 and cavities 1105) the semiconductor devices 1701, to produce semiconductor device assembly 1700, as shown in Figure 17. Subsequent processing steps (e.g., singulating the assembly 1700 from wafer- or panel-level, thinning and providing external connections to the lower semiconductor device 1401, etc.) can be performed at this point (and are not illustrated to preserve the clarity of the disclosure). Alternatively, the semiconductor device assembly 1700 can be subjected to additional processing operations to remove the overlying portions of the encapsulant material 1702 and expose the back surfaces of the semiconductor devices 1701, analogously to the processes described above with reference to Figures 4 and 5, in order to thin the assembly 1700 and/or prepare the assembly for additional connectivity.
In this regard, Figure 18 is a simplified schematic cross-sectional view of a semiconductor device assembly 1800, in which an assembly like that illustrated in Figure 17 has been subjected to a backside thinning operation (e.g., by chemical-mechanical polishing (CMP), grinding, etc.) to remove overlying portions of the encapsulant 1702 in order to expose (and optionally to planarize) the back surfaces of semiconductor devices 1701 and to reduce the overall height of the assembly 1800. In an embodiment in which semiconductor devices 1701 include backside contacts for further connectivity, removing the portions of material from the encapsulant 1702 covering the back surfaces of semiconductor devices 1701 can permit additional devices to be integrated into the semiconductor device assembly, as described in greater detail above with respect to Figures 6 and 7. In this regard, additional semiconductor devices can be directly attached to the exposed backside contacts of semiconductor devices 1701 and then encapsulated by a layer of mold material (e.g., analogously to the arrangement illustrated in Figure 6). Alternatively, rather than individually connecting additional semiconductor devices to the exposed backside contacts of semiconductor devices 1701, in another embodiment one or more additional pre-populated monolithic silicon structures (e.g., like that illustrated in Figure 2) can be bonded to the semiconductor assembly 1800 illustrated in Figure 18 to provide an assembly with a high density of devices while retaining good thermal performance.
In yet another embodiment, the processes illustrated in Figures 15 through 18 can be iteratively performed on the assembly 1800 of Figure 18 (e.g., disposing another monolithic silicon structure 1400 over the assembly 1800, thinning the monolithic silicon structure 1400 to open the cavities 1105 therein, disposing additional semiconductor devices in the exposed cavities, encapsulating with a mold material, and optionally thinning the overlying mold material), to provide an assembly with a high density of devices while retaining good thermal performance. As one of skill in the art will readily appreciate, the foregoing processes can be mixed, matched, and iteratively repeated, such that additional tiers of semiconductor devices can be provided until a desired device density has been achieved. The semiconductor device assembly 1800 has been illustrated as being formed over a lower semiconductor device 1401 which has yet to be thinned or provided with backside contacts (e.g., on a lower surface thereof in the illustrated orientation). Figure 19 illustrates a process by which the lower semiconductor device 1401 can be thinned and provided with TSVs and backside contacts in accordance with one aspect of the present disclosure. As can be seen with reference to Figure 19, semiconductor device assembly 1800 has been bonded to a temporary carrier wafer 1901 by a layer of adhesive 1902 disposed over the monolithic silicon structure 1400 and the exposed back surfaces of semiconductor devices 1701. While supported mechanically by the carrier wafer 1901, the back surface of lower semiconductor device 1401 can be thinned (e.g., by CMP, grinding, etc.) to reduce a total height of the assembly and to permit the formation of TSVs 1903 through a remaining thickness of lower semiconductor device 1401. Backside contacts (e.g., pads, pillars, under-bump metallization (UBM), etc.)
can be formed, such as those carrying solder ball array 1904, using any one of a number of methods known to those of skill in the art. In another embodiment, rather than forming TSVs 1903 after thinning the lower semiconductor device 1401, buried TSVs already formed in lower semiconductor device 1401 at an earlier stage of processing may merely be exposed by the thinning operation illustrated in Figure 19. Once the thinning and contact formation is complete, temporary carrier wafer 1901 and adhesive 1902 can be removed, resulting in completed semiconductor device assembly 2000, as illustrated in Figure 20. Although the silicon material of the foregoing monolithic silicon structures enjoys a high thermal conductivity, it can be advantageous in some circumstances to include copper, silver, aluminum, or other highly thermally conductive metals in some regions of a monolithic silicon structure to further enhance the heat management capabilities thereof while minimizing the difference in CTE between the structure and the semiconductor devices in the assembly. In this regard, Figures 21 through 26 illustrate the fabrication and integration of one embodiment of a monolithic silicon structure which includes metallic heat extraction structures. Turning to Figure 21, a precursor structure from which a monolithic silicon structure will be formed is shown in a simplified partial cross-sectional view at a step in the formation process in accordance with one embodiment of the present disclosure. The precursor structure includes a silicon wafer 2100 on which has been formed a passivation layer 2101 (e.g., a dielectric material) in which can optionally be formed one or more thermal pads (not illustrated). A mask layer 2102 is formed over the passivation layer 2101, with a pattern corresponding both to the cavities and the metallic heat extraction structures to be formed in the silicon wafer 2100.
More particularly, the mask layer 2102 includes a pattern of small openings (e.g., corresponding to narrow columnar or fin-like structures) that overlie both regions in the silicon wafer 2100 where the cavities are to be formed and regions in the silicon wafer 2100 where the metallic heat extraction structures are to be formed. As can be seen with reference to Figure 22, the small openings 2103 can be etched at least partially into a thickness of the silicon wafer 2100 to remove some of the material from where the cavities are to be formed and to create openings in which metallic heat extraction structures can be plated. Having anisotropically etched these "slivers" of material out of the silicon wafer 2100, a plating operation can then be performed to fill the small openings 2103 with metallic structures, both in the regions where cavities are to be formed and in the regions where the metallic heat extraction structures 2105 are to remain. The excess metal material can be removed (e.g., by a CMP operation, a grinding operation, a wet etch operation, etc.), and another mask structure 2106 can be disposed over the silicon wafer 2100, with openings exposing the metal material in the regions where the cavities are to be formed, but not exposing the metallic heat extraction structures 2105. A subsequent isotropic (e.g., wet) etch operation can be performed to remove the metal structures and the remaining silicon material from the silicon wafer 2100 where the cavities are to be formed. The result of such an operation is illustrated in Figure 25, which shows cavities 2107 and metallic heat extraction structures 2105 having been formed by this process in accordance with one embodiment of the present disclosure.
After removing the remains of mask layer 2106 (e.g., via a chemical and/or mechanical removal process), monolithic silicon structure 2500, with included metallic heat extraction structures 2105 and cavities 2107, is ready for the processes previously described in greater detail above with reference to Figures 2 through 10 and/or 15 through 20. In this regard, Figure 26 illustrates a simplified schematic cross-sectional view of a semiconductor device assembly 2600 in accordance with one embodiment of the present disclosure. Assembly 2600 includes a monolithic silicon structure 2500 in which are disposed metallic heat extraction structures 2105 for extracting heat from a lower semiconductor device 2602 (e.g., through contact with thermal contacts in the lower semiconductor device 2602). The assembly 2600 further includes one or more semiconductor devices (two are illustrated) in cavities of the monolithic silicon structure, coupled to the lower semiconductor device 2602. As will be readily understood by those of skill in the art, although the foregoing examples are illustrated with partial cross-sectional views in which a single lower semiconductor device is bonded to a single monolithic structure, embodiments of the present disclosure contemplate wafer-level processing in which an un-singulated wafer comprising a plurality of lower semiconductor devices is bonded to a wafer-level monolithic silicon structure to provide a wafer-level intermediate structure from which individual assemblies can be singulated. Alternatively, in another embodiment, singulated monolithic silicon structures can be individually bonded to an un-singulated wafer comprising a plurality of lower semiconductor devices.
In yet another embodiment, singulated monolithic silicon structures can be individually bonded to singulated lower semiconductor devices. Although in the foregoing example embodiments monolithic silicon structures have been illustrated and described as including thermal pads or metallic heat extraction structures in contact with corresponding thermal contacts on a lower semiconductor device, in other embodiments these features can be omitted and a monolithic silicon structure can be bonded to a surface of a lower semiconductor device without any intermediating metal structures. Although in the foregoing example embodiments monolithic silicon structures have been illustrated and described as including two cavities of the same depth and plan area with similarly-sized semiconductor devices therein, those of skill in the art will readily appreciate that the number of cavities is not so limited, and monolithic silicon structures in other embodiments may have more or fewer cavities, or cavities of different plan areas and/or depths to accommodate semiconductor devices (or other electrical components, including passive circuit components) of different sizes and shapes. Moreover, although in the foregoing example embodiments monolithic silicon structures have been illustrated and described as disposed over a lower semiconductor die having a same plan area as the monolithic silicon structure, those of skill in the art will readily appreciate that monolithic silicon structures can be employed in other arrangements (e.g., bonded to more than one lower die, bonded to a device substrate, etc.)
and need not have a same plan area as the device on which they are carried. In accordance with one aspect of the present disclosure, the semiconductor device assemblies illustrated and described above could include memory dies, such as dynamic random access memory (DRAM) dies, NOT-AND (NAND) memory dies, NOT-OR (NOR) memory dies, magnetic random access memory (MRAM) dies, phase change memory (PCM) dies, ferroelectric random access memory (FeRAM) dies, static random access memory (SRAM) dies, or the like. In an embodiment in which multiple dies are provided in a single assembly, the semiconductor devices could be memory dies of a same kind (e.g., both NAND, both DRAM, etc.) or memory dies of different kinds (e.g., one DRAM and one NAND, etc.). In accordance with another aspect of the present disclosure, the semiconductor dies of the assemblies illustrated and described above could include logic dies (e.g., controller dies, processor dies, etc.), or a mix of logic and memory dies (e.g., a memory controller die and a memory die controlled thereby). Any one of the semiconductor devices and semiconductor device assemblies described above can be incorporated into any of a myriad of larger and/or more complex systems, a representative example of which is system 2700 shown schematically in Figure 27. The system 2700 can include a semiconductor device assembly (or a discrete semiconductor device) 2702, a power source 2704, a driver 2706, a processor 2708, and/or other subsystems or components 2710. The semiconductor device assembly 2702 can include features generally similar to those of the semiconductor devices described above. The resulting system 2700 can perform any of a wide variety of functions, such as memory storage, data processing, and/or other suitable functions.
Accordingly, representative systems 2700 can include, without limitation, hand-held devices (e.g., mobile phones, tablets, digital readers, and digital audio players), computers, vehicles, appliances, and other products. Components of the system 2700 may be housed in a single unit or distributed over multiple, interconnected units (e.g., through a communications network). The components of the system 2700 can also include remote devices and any of a wide variety of computer readable media. The devices discussed herein, including a memory device, may be formed on a semiconductor substrate or die, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means. The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. Other examples and implementations are within the scope of the disclosure and appended claims.
Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. As used herein, including in the claims, "or" as used in a list of items (for example, a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase "based on" shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as "based on condition A" may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" shall be construed in the same manner as the phrase "based at least in part on." As used herein, the terms "vertical," "lateral," "upper," "lower," "above," and "below" can refer to relative directions or positions of features in the semiconductor devices in view of the orientation shown in the Figures. For example, "upper" or "uppermost" can refer to a feature positioned closer to the top of a page than another feature. These terms, however, should be construed broadly to include semiconductor devices having other orientations, such as inverted or inclined orientations where top/bottom, over/under, above/below, up/down, and left/right can be interchanged depending on the orientation. It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible.
Furthermore, embodiments from two or more of the methods may be combined. From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Rather, in the foregoing description, numerous specific details are discussed to provide a thorough and enabling description for embodiments of the present technology. One skilled in the relevant art, however, will recognize that the disclosure can be practiced without one or more of the specific details. In other instances, well-known structures or operations often associated with memory systems and devices are not shown, or are not described in detail, to avoid obscuring other aspects of the technology. In general, it should be understood that various other devices, systems, and methods in addition to those specific embodiments disclosed herein may be within the scope of the present technology.
Systems, methods, and apparatus are described for supporting packed data convolution instructions with shift control and width control. In one embodiment, a hardware processor includes decoder circuitry to decode a single instruction into a decoded single instruction, the single instruction having fields identifying a first packed data source, a second packed data source, a packed data destination, a sliding window width, and a span, and having an opcode. The opcode instructs the execution circuitry to generate a first block of contiguous elements of the first packed data source having a width of the sliding window width, generate a second block of contiguous elements of the first packed data source having a width of the sliding window width and shifted by the span, multiply each element of the first block with a corresponding element of a respective block of the second packed data source to generate a first set of products, add the first set of products together to generate a first sum, multiply each element of the second block with a corresponding element of a respective block of the second packed data source to generate a second set of products, add the second set of products together to generate a second sum, and store the first sum in a first element of the packed data destination and the second sum in a second element of the packed data destination. The hardware processor also includes execution circuitry to execute the decoded single instruction according to the opcode.
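The sliding-window multiply-accumulate behavior described above can be modeled in software. The following Python sketch is illustrative only: the function and parameter names are assumptions, the mapping of "respective blocks" of the second source to consecutive width-sized blocks is one plausible reading of the description, and real hardware would operate on fixed-width packed registers rather than Python lists.

```python
def packed_conv(src1, src2, window_width, span):
    """Illustrative software model of the packed data convolution
    instruction's semantics (names and list-based form are assumptions)."""
    dest = []
    offset = 0
    # One destination element per block of src2: the dot product of a
    # sliding window into src1 (advanced by `span` each step) with the
    # corresponding contiguous block of src2.
    for block in range(len(src2) // window_width):
        window = src1[offset:offset + window_width]
        coeffs = src2[block * window_width:(block + 1) * window_width]
        dest.append(sum(a * b for a, b in zip(window, coeffs)))
        offset += span
    return dest

# Example: width-3 windows over src1, shifted by a span of 1.
print(packed_conv([1, 2, 3, 4, 5], [1, 1, 1, 2, 2, 2], 3, 1))  # [6, 18]
```

Note how a span smaller than the window width yields overlapping windows, which is what makes a single such instruction useful for convolution-style kernels.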
1. An apparatus comprising:
decoder circuitry to decode a single instruction into a decoded single instruction, the single instruction having fields identifying a first packed data source, a second packed data source, a packed data destination, a sliding window width, and a stride, and having an opcode, the opcode to instruct execution circuitry to:
generate a first block of contiguous elements of the first packed data source having a width of the sliding window width,
generate a second block of contiguous elements of the first packed data source having a width of the sliding window width and shifted by the stride,
multiply each element of the first block with a corresponding element of a corresponding block of the second packed data source to generate a first set of products,
add together the first set of products to generate a first sum,
multiply each element of the second block with a corresponding element of a corresponding block of the second packed data source to generate a second set of products,
add together the second set of products to generate a second sum, and
store the first sum in a first element of the packed data destination and store the second sum in a second element of the packed data destination; and
the execution circuitry to execute the decoded single instruction according to the opcode.
2. The apparatus of claim 1, wherein the sliding window width is selectable from a plurality of widths.
3. The apparatus of claim 2, wherein the sliding window width is identified by an immediate value of the single instruction.
4. The apparatus of claim 1, wherein the stride is selectable from a plurality of strides.
5. The apparatus of claim 4, wherein the stride is identified by an immediate value of the single instruction.
6. The apparatus of claim 1, wherein the sliding window width and the stride are identified by an immediate value of the single instruction.
7.
The apparatus of claim 1, wherein the first packed data source and the second packed data source have a same element width, and the packed data destination has a wider element width.
8. The apparatus of any one of claims 1-7, wherein the packed data destination is also a third packed data source; and the opcode is to instruct the execution circuitry to: prior to the storing, update the first sum by accumulating the first sum with a first value of a first element from the third packed data source, and update the second sum by accumulating the second sum with a second value of a second element from the third packed data source; and the storing is storing the updated first sum in the first element of the packed data destination and storing the updated second sum in the second element of the packed data destination.
9. A method comprising:
decoding, by decoder circuitry of a processor core, a single instruction into a decoded single instruction, the single instruction having fields identifying a first packed data source, a second packed data source, a packed data destination, a sliding window width, and a stride, and having an opcode, the opcode instructing execution circuitry to perform operations comprising:
generating a first block of contiguous elements of the first packed data source having a width of the sliding window width,
generating a second block of contiguous elements of the first packed data source having a width of the sliding window width and shifted by the stride,
multiplying each element of the first block with a corresponding element of a corresponding block of the second packed data source to generate a first set of products,
adding together the first set of products to generate a first sum,
multiplying each element of the second block with a corresponding element of a corresponding block of the second packed data source to generate a second set of products,
adding together the second set of products to generate a second sum, and storing
the first sum in a first element of the packed data destination and storing the second sum in a second element of the packed data destination; and
executing, by execution circuitry of the processor core, the decoded single instruction according to the opcode.
10. The method of claim 9, wherein the sliding window width is selectable from a plurality of widths.
11. The method of claim 10, wherein the sliding window width is identified by an immediate value of the single instruction.
12. The method of claim 9, wherein the stride is selectable from a plurality of strides.
13. The method of claim 12, wherein the stride is identified by an immediate value of the single instruction.
14. The method of claim 9, wherein the sliding window width and the stride are identified by an immediate value of the single instruction.
15. The method of claim 9, wherein the first packed data source and the second packed data source have a same element width, and the packed data destination has a wider element width.
16. The method of any one of claims 9-15, wherein the packed data destination is also a third packed data source; and the opcode instructs the execution circuitry to: prior to the storing, update the first sum by accumulating the first sum with a first value of a first element from the third packed data source, and update the second sum by accumulating the second sum with a second value of a second element from the third packed data source; and the storing is storing the updated first sum in the first element of the packed data destination and storing the updated second sum in the second element of the packed data destination.
17.
A non-transitory machine-readable medium storing program code which, when executed by a machine, causes the machine to perform a method comprising:
decoding, by decoder circuitry of a processor core, a single instruction into a decoded single instruction, the single instruction having fields identifying a first packed data source, a second packed data source, a packed data destination, a sliding window width, and a stride, and having an opcode, the opcode instructing execution circuitry to perform operations comprising:
generating a first block of contiguous elements of the first packed data source having a width of the sliding window width,
generating a second block of contiguous elements of the first packed data source having a width of the sliding window width and shifted by the stride,
multiplying each element of the first block with a corresponding element of a corresponding block of the second packed data source to generate a first set of products,
adding together the first set of products to generate a first sum,
multiplying each element of the second block with a corresponding element of a corresponding block of the second packed data source to generate a second set of products,
adding together the second set of products to generate a second sum, and
storing the first sum in a first element of the packed data destination and storing the second sum in a second element of the packed data destination; and
executing, by execution circuitry of the processor core, the decoded single instruction according to the opcode.
18. The non-transitory machine-readable medium of claim 17, wherein the sliding window width is selectable from a plurality of widths.
19. The non-transitory machine-readable medium of claim 18, wherein the sliding window width is identified by an immediate value of the single instruction.
20. The non-transitory machine-readable medium of claim 17, wherein the stride is selectable from a plurality of strides.
21.
The non-transitory machine-readable medium of claim 20, wherein the stride is identified by an immediate value of the single instruction.
22. The non-transitory machine-readable medium of claim 17, wherein the sliding window width and the stride are identified by an immediate value of the single instruction.
23. The non-transitory machine-readable medium of claim 17, wherein the first packed data source and the second packed data source have a same element width, and the packed data destination has a wider element width.
24. The non-transitory machine-readable medium of any one of claims 17-23, wherein the packed data destination is also a third packed data source; and the opcode instructs the execution circuitry to: prior to the storing, update the first sum by accumulating the first sum with a first value of a first element from the third packed data source, and update the second sum by accumulating the second sum with a second value of a second element from the third packed data source; and the storing is storing the updated first sum in the first element of the packed data destination and storing the updated second sum in the second element of the packed data destination.
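The accumulate variant recited in the claims — in which the packed data destination is also a third source whose existing element values are added to each computed sum before the store — can be sketched in software as follows. This is an illustrative model under assumed names, not the hardware implementation.

```python
def packed_conv_accumulate(src1, src2, src3, window_width, stride):
    """Illustrative model of the accumulate variant: the destination is
    also a third source whose existing values are accumulated with each
    sum before the store (names are assumptions)."""
    dest = list(src3)  # packed data destination doubles as third source
    for i in range(len(src3)):
        window = src1[i * stride:i * stride + window_width]
        coeffs = src2[i * window_width:(i + 1) * window_width]
        dest[i] += sum(a * b for a, b in zip(window, coeffs))
    return dest

# Example: two width-2 windows with stride 2, accumulated into [10, 20].
print(packed_conv_accumulate([1, 2, 3, 4], [1, 0, 1, 0], [10, 20], 2, 2))  # [11, 23]
```

This read-modify-write form is what lets a sequence of such instructions build up a multi-tap convolution result in the destination register across iterations.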
Apparatus, method and system for packed data convolution instructions with shift control and width control

Technical Field

The present disclosure relates generally to computer processor architecture, and more particularly to circuits for implementing packed data convolution instructions with shift control and width control.

Background

A processor or collection of processors executes instructions from an instruction set, such as an instruction set architecture (ISA). The instruction set is the programming-related portion of a computer architecture and generally includes native data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O). It should be noted that the term instruction may refer herein to a macroinstruction, i.e., an instruction provided to a processor for execution, or to a microinstruction, i.e., an instruction decoded by a decoder of a processor.

Brief Description of the Drawings

The present disclosure is illustrated by way of example and not limitation in the accompanying drawings, in which like reference numerals indicate like elements, and in which:

FIG. 1 illustrates a block diagram of a computer system including a hardware processor with execution circuitry having scalar circuitry and vector/single instruction multiple data (SIMD) circuitry according to an embodiment of the disclosure.

FIG. 2 illustrates a hardware processor coupled to storage including one or more packed data convolution instructions with shift control and/or width control, according to an embodiment of the disclosure.

FIG. 3 illustrates a method of processing packed data convolution instructions with shift control and width control, according to an embodiment of the disclosure.

FIG. 4 illustrates a circuit including an execution circuit having a sliding window circuit, a multiplier circuit, and an adder circuit according to an embodiment of the disclosure.

FIG. 5 illustrates a circuit including an execution circuit having a
sliding window circuit, a multiplier circuit, and an adder circuit according to an embodiment of the disclosure.

FIG. 6 illustrates a circuit including an execution circuit having a sliding window circuit, a multiplier circuit, and an adder/accumulator circuit, according to an embodiment of the disclosure.

FIG. 7A is a block diagram illustrating a generic vector friendly instruction format and its class A instruction templates according to an embodiment of the present disclosure.

FIG. 7B is a block diagram illustrating a generic vector friendly instruction format and its class B instruction templates according to an embodiment of the disclosure.

FIG. 8A is a block diagram illustrating fields for the generic vector friendly instruction format in FIGS. 7A and 7B according to an embodiment of the disclosure.

FIG. 8B is a block diagram illustrating the fields of the specific vector friendly instruction format in FIG. 8A that make up the complete opcode field according to one embodiment of the present disclosure.

FIG. 8C is a block diagram illustrating the fields of the specific vector friendly instruction format in FIG. 8A that make up the register index field according to one embodiment of the disclosure.

FIG. 8D is a block diagram illustrating the fields of the specific vector friendly instruction format in FIG.
8A that make up the extended operation field 750 according to one embodiment of the disclosure.

FIG. 9 is a block diagram of a register architecture according to one embodiment of the present disclosure.

FIG. 10A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to an embodiment of the disclosure.

FIG. 10B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to an embodiment of the disclosure.

FIG. 11A is a block diagram of a single processor core and its connection to an on-die interconnect network and a local subset of its level 2 (L2) cache according to an embodiment of the disclosure.

FIG. 11B is an expanded view of a portion of the processor core in FIG. 11A according to an embodiment of the disclosure.

FIG. 12 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have an integrated graphics device, according to an embodiment of the disclosure.

FIG. 13 is a block diagram of a system according to one embodiment of the present disclosure.

FIG. 14 is a block diagram of a more specific exemplary system according to an embodiment of the disclosure.

FIG. 15 shows a block diagram of a second more specific exemplary system according to an embodiment of the present disclosure.

FIG. 16 shows a block diagram of a system on a chip (SoC) according to an embodiment of the present disclosure.

FIG. 17 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to an embodiment of the disclosure.

Detailed Description

In the following description, numerous specific details are set forth.
However, it is understood that embodiments of the disclosure may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.

References in the specification to "one embodiment," "an embodiment," "example embodiment," etc. indicate that the described embodiment may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure or characteristic is described in connection with an embodiment, it is considered within the knowledge of one skilled in the art to effect such feature, structure or characteristic in conjunction with other embodiments whether or not explicitly described.

A (e.g., hardware) processor (e.g., having one or more cores) can execute instructions (e.g., a thread of instructions) to operate on data, e.g., to perform arithmetic, logical, or other functions. For example, software may request an operation, and a hardware processor (e.g., one or more cores of the hardware processor) may perform the operation in response to the request. One non-limiting example of an operation is a convolution operation, e.g., involving multiplication of certain vector elements (e.g., a value vector and a filter weight vector) and accumulation of the products into one or more elements of the result. In some embodiments, an instruction set architecture (ISA) (e.g., an extension thereof, such as the AVX-512 Vector Neural Network Instructions (AVX512 VNNI) extension) includes one or more instructions designed to accelerate convolutional neural network-based algorithms in hardware.
In some embodiments, the ISA includes any of the following: (i) packed data (e.g., vector) instructions that multiply individual elements (e.g., 8-bit wide or 16-bit wide) of a first source packed data operand by corresponding individual elements (e.g., 8-bit wide or 16-bit wide) of a second source packed data operand to generate corresponding (e.g., 16-bit wide or 32-bit wide) products, sum certain of those products to produce corresponding sums, and store those sums in corresponding elements of a destination packed data operand; (ii) the same as (i) above, except that on overflow of an intermediate sum, the sum saturates in the destination operand to 0x7FFF_FFFF/0x8000_0000 for positive/negative numbers, respectively; (iii) packed data (e.g., vector) instructions that multiply individual elements (e.g., 8-bit wide or 16-bit wide) of a first source packed data operand by corresponding individual elements (e.g., 8-bit wide or 16-bit wide) of a second source packed data operand to generate corresponding (e.g., 16-bit wide or 32-bit wide) products, sum certain of those products to generate corresponding sums, accumulate those sums with elements (e.g., 16-bit wide or 32-bit wide) of a third source packed data operand (e.g., also the destination packed data operand), and store the accumulated sums in corresponding elements of the destination packed data operand; or (iv) the same as (iii) above, except that on overflow of an intermediate sum, the sum saturates in the destination operand to 0x7FFF_FFFF/0x8000_0000 for positive/negative numbers, respectively.

In some embodiments, media processing (e.g., video encoding, quality analysis, and other pre- and post-processing steps) relies on convolutions and/or various filtering operations (e.g., Sobel, Loop Restoration, FIR).
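Instruction style (i) above can be sketched as follows; the group size of four products per destination element and the function name are illustrative assumptions (modeled loosely on a VNNI-style byte dot product), not a definitive ISA description:

```python
# Sketch of instruction style (i): each destination element receives the
# sum of a fixed-size group of element-wise products. The group size of
# four and the function name are illustrative assumptions, not an ISA
# definition.
def dot_style_i(src1, src2, group=4):
    assert len(src1) == len(src2) and len(src1) % group == 0
    dst = []
    for i in range(0, len(src1), group):
        products = [src1[i + j] * src2[i + j] for j in range(group)]
        dst.append(sum(products))   # one wider sum per group of products
    return dst
```

For example, `dot_style_i([1, 2, 3, 4, 1, 1, 1, 1], [1, 1, 1, 1, 2, 2, 2, 2])` yields `[10, 8]`; style (ii) would additionally clamp each sum to the saturation bounds.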
In some embodiments, a media encoder utilizes one or more convolution instructions to achieve performance, but is only able to achieve low (e.g., 50%-60%) efficiency. For example, for separable convolution filters, in some embodiments the kernel performs a byte-level shift for each convolution on the order of the filter length, and requires the programmer to manually organize data into vector lanes, which consumes computational resources and increases path length. In some embodiments, the programmer uses multiple instructions to perform a convolution operation. In some embodiments, the programmer uses a convolution instruction that includes masking (e.g., taking a mask as an input operand). In some embodiments, a programmer uses a convolution instruction along with one or more pack and/or shuffle instructions to perform a convolution operation.

Embodiments herein are directed to convolution instructions, such as instructions (i) to (iv) discussed above, that overcome these problems by providing an efficient way for the programmer to specify the width of the convolution operation in terms of the number of elements in a sliding window and/or the stride of that window, as inputs to the convolution instruction. Embodiments herein support instructions that allow the programmer to specify (e.g., as immediate values) the start element position and element vector width in the instruction, thereby allowing the programmer to avoid creating a mask before executing the convolution instruction and utilizing (e.g., a series of) insert and/or shuffle instructions to arrange the data appropriately.
This reduces the number of instructions executed in filtering, machine learning, and codec kernels, and removes overhead that causes contention for processing (e.g., central processing unit (CPU)) resources.

Embodiments herein are directed to efficient and flexible convolution instructions for performing convolutions of different widths and offsets (e.g., strides) in a packed data (e.g., vectorized) format. Embodiments herein are directed to fusing data shuffling and shifting (e.g., striding) with convolution arithmetic operations, e.g., to efficiently support convolution instructions for media codecs, filters, and other kernels. Embodiments herein do not require the programmer (e.g., for filter operations) to include one or more non-convolution instructions to marshal the data into a suitable layout prior to using the processor's convolution instructions. Embodiments herein eliminate the use of instruction(s) associated with data preparation, e.g., reducing the number of instructions in the critical path and providing differentiated performance for filtering loops. Embodiments herein are directed to convolution instructions that eliminate the programmer's need to pre-pack data with other instructions, e.g., in some embodiments eliminating many (e.g., approximately 50%) of the shuffle and arithmetic logic unit (ALU) operations.

Embodiments herein are directed to convolution instructions that allow more efficient execution of machine learning, filtering, and image processing kernels. Embodiments herein are directed to a single convolution instruction supporting dot-product vector type operations with a configurable offset (e.g., stride) and/or a configurable sliding window width. Embodiments herein are directed to packed data (e.g., vector) convolution instructions with offset (e.g., shift) control and width control over a sliding window. Embodiments herein are directed to convolution instructions that allow a programmer to specify both the offset (e.g., stride) and the width of a filtering operation.
In some embodiments, such instructions take two source operands and use a (e.g., immediate) control value to shift through the first source to create a variable-length sliding window. In some embodiments, the width of the sliding window is less than some number (e.g., 16) of element widths, and/or the offset (stride) is a power of two.

In some embodiments, the operations, and thus the instructions disclosed herein, are executed on numerical data in a format selectable among different formats (e.g., representations) in a computing system (e.g., an accelerator and/or processor). In some embodiments, numbers are in fixed-point or floating-point format. Integers may be represented in binary format. Signed integers may be represented in two's complement format (e.g., where a leading zero indicates a positive integer and a leading one indicates a negative integer). (E.g., real) numbers may be represented in floating-point format, e.g., to represent numbers of different magnitudes with a fixed number of digits.

One example of a numerical format is one in which numbers are generally rounded to a fixed number of significant digits and scaled using an exponent in some fixed base (e.g., base two, ten, or sixteen). An example of such a numerical format, where S represents a sign bit, M a mantissa, and E an exponent, is:

x = significand × base^exponent  (1)

An example of a floating-point format is:

x = (-1)^S × 1.M × 2^(E-bias)  (2)

According to the IEEE 754 standard for binary floating-point arithmetic, the mantissa is an unsigned number (e.g., a binary fraction), and a normalized floating-point number has a single one in the most significant bit (MSB) position. In some embodiments, this bit (e.g., to the left of the decimal point) is implicit, and thus the mantissa need not store it. In some embodiments, the exponent is represented herein as a non-negative integer from which a constant bias is subtracted.
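As an illustration of equation (2) and the exponent bias, a normalized binary32 value can be decoded as sketched below; the function name and the normalized-only restriction are assumptions for illustration (subnormals, infinities, and NaNs are not handled):

```python
# A minimal decoder for the normalized binary32 case of equation (2);
# the function name and normalized-only restriction are assumptions
# for illustration (subnormals, infinities, and NaNs are not handled).
def decode_binary32(bits: int) -> float:
    sign = (bits >> 31) & 0x1        # S: 1 sign bit
    exponent = (bits >> 23) & 0xFF   # E: 8 exponent bits, bias of 127
    mantissa = bits & 0x7FFFFF       # M: 23 explicitly stored bits
    significand = 1.0 + mantissa / (1 << 23)   # implicit leading one
    return (-1.0) ** sign * significand * 2.0 ** (exponent - 127)

print(decode_binary32(0x41200000))   # prints 10.0
```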
Examples of floating point formats are floating point 16 (e.g., binary 16), floating point 32 (e.g., binary 32), floating point 64 (e.g., binary 64), floating point 128 (e.g., binary 128), and floating point 256 (eg, binary 256), but any number of sign bits, significand bits (eg, its mantissa), or exponent bits may be used in some embodiments. In one embodiment, the binary 16 format has one bit for the sign bit, 5 bits for the exponent, and 11 bits implicitly for the significand (10 bits stored explicitly). In one embodiment, the binary 32 format has one bit for the sign bit, 8 bits for the exponent, and 24 bits implicitly for the significand (23 bits stored explicitly). In one embodiment, the binary64 format has one bit for the sign bit, 11 bits for the exponent, and 53 bits implicitly (52 bits stored explicitly) for the significand. In one embodiment, the binary 128 format has one bit for the sign bit, 15 bits for the exponent, and 113 bits implicitly for the significand (112 bits stored explicitly). 
In one embodiment, the binary 256 format has one bit for the sign bit, 19 bits for the exponent, and 237 bits implicitly (236 bits stored explicitly) for the significand.

In some embodiments, the instruction format includes an opcode (e.g., a proper subset of an opcode) or other fields (e.g., operands or immediates) that instruct the execution circuitry to: generate a first block of consecutive elements of the first packed data source having a width of the sliding window width, generate a second block of consecutive elements of the first packed data source having a width of the sliding window width and shifted by the stride, multiply each element of the first block with the corresponding element of the corresponding block of the second packed data source to generate a first set of products, add the first set of products together to generate a first sum, multiply each element of the second block with the corresponding element of the corresponding block of the second packed data source to generate a second set of products, add the second set of products together to generate a second sum, and store the first sum in a first element of the packed data destination and the second sum in a second element of the packed data destination. While the above example uses two blocks, the number of blocks could be greater, such as 3, 4, 5, 6, or any other number.

The instructions disclosed herein are an improvement to the operation of a processor (e.g., of a computer) itself. Instruction decode circuitry (e.g., a decoder) that does not have such an instruction as part of its instruction set would not decode as discussed herein. Execution circuitry that does not have such an instruction as part of its instruction set would not execute as discussed herein.
For example, a single instruction that, when decoded by the processor into a decoded instruction and executed by the processor, causes the processor to: generate a first block of consecutive elements of the first packed data source having a width of the sliding window width, generate a second block of consecutive elements of the first packed data source having a width of the sliding window width and shifted by the stride, multiply each element of the first block with the corresponding element of the corresponding block of the second packed data source to generate a first set of products, add the first set of products together to generate a first sum, multiply each element of the second block with the corresponding element of the corresponding block of the second packed data source to generate a second set of products, add the second set of products together to generate a second sum, and store the first sum in a first element of the packed data destination and the second sum in a second element of the packed data destination, is an improvement to the operation of the processor (e.g., of a computer) itself.

FIG. 1 illustrates a block diagram of a computer system 101 including a hardware processor 100 having execution circuitry 104 with scalar circuitry 112 and vector/single instruction multiple data (SIMD) circuitry 114 according to an embodiment of the disclosure. The depicted hardware processor 100 includes hardware decoder circuitry 102 (e.g., a decode unit) and hardware execution circuitry 104 (e.g., an execution unit). The depicted hardware processor 100 includes (e.g., packed data or vector) register(s) 106. For example, the registers may include one or more registers for accessing (e.g., loading and/or storing) data, e.g., in addition to or instead of accessing (e.g., loading or storing) data in memory 110.
The depicted hardware processor 100 includes a cache memory 108. For example, in addition to or instead of accessing (e.g., loading or storing) data in memory 110 and/or register(s) 106, the cache may include one or more cache blocks for accessing (e.g., loading and/or storing) data.

The depicted execution circuitry 104 includes scalar circuitry 112 and/or vector/single instruction multiple data (SIMD) circuitry 114. In some embodiments, only one or any combination of scalar circuitry 112 and/or vector/SIMD circuitry 114 is present (e.g., utilized). In some embodiments, scalar circuitry 112 operates on scalar values (e.g., single numbers). In some embodiments, vector/SIMD circuitry 114 operates on vector or packed data values.

Note that the figures herein may not depict all data communication connections. Those of ordinary skill in the art will appreciate that this is done so as not to obscure certain details in the figures. Note that a bidirectional arrow in the figures may not require bidirectional communication; e.g., it may indicate unidirectional communication (e.g., to or from that component or device). Any or all combinations of communication paths may be utilized in certain embodiments herein.

In some embodiments, the hardware decoder 102 receives a (e.g., single) instruction (e.g., macroinstruction) and decodes the instruction into, e.g., microinstructions and/or micro-operations. In some embodiments, the hardware execution circuitry 104 executes the decoded instruction (e.g., macroinstruction) to perform one or more operations. For example, an instruction to be decoded by decoder circuitry 102 and executed as a decoded instruction by execution circuitry 104 may be any instruction discussed herein, e.g., any of the instructions discussed with reference to FIGS. 2-6.
In some embodiments, the scalar circuitry includes sliding window circuit(s) 112A, and/or the vector/single instruction multiple data (SIMD) circuitry 114 includes sliding window circuit(s), as discussed herein.

FIG. 2 illustrates a hardware processor 200 coupled to storage 202 including one or more packed data convolution instructions 204 with shift control and/or width control, according to an embodiment of the disclosure. In some embodiments, the packed data convolution instruction is according to any of the disclosure herein. In one embodiment, an instruction (e.g., macroinstruction) is fetched from storage 202 and sent to decoder circuitry 206, e.g., in response to a request to perform an operation. In the depicted embodiment, the decoder circuitry 206 decodes the instruction into a decoded instruction (e.g., one or more microinstructions or one or more micro-operations). The decoded instruction is then sent for execution, e.g., via scheduler circuitry 208 to schedule the decoded instruction for execution.

In some embodiments (e.g., where the processor/core supports out-of-order (OoO) execution), the processor includes register rename/allocator circuitry coupled to register file/memory circuitry 210 (e.g., unit), the register rename/allocator circuitry to allocate resources and perform register renaming on registers (e.g., vector registers associated with logic operations and test instructions). In some embodiments (e.g., for out-of-order execution), the processor includes one or more scheduler circuits 208 coupled to the decoder.
The scheduler circuit(s) may schedule one or more operations associated with the decoded instruction (including one or more operations decoded from a packed data convolution instruction with shift control and/or width control) for execution on the execution circuitry 212.

In some embodiments, writeback circuitry 214 is included to write results of instructions back to their destinations (e.g., to register(s) and/or memory), e.g., so that those results are visible within the processor (e.g., visible outside of the execution circuitry that produced those results).

One or more of these components (e.g., decoder circuitry 206, register rename/register allocator/scheduler 208, execution circuitry 212, register file/memory 210, or writeback circuitry 214) may be implemented on a hardware processor (and, e.g., across multiple cores each having an instance of these components).

FIG. 3 illustrates a method 300 of processing a packed data convolution instruction with shift control and width control, according to an embodiment of the disclosure. A processor (e.g., or a processor core) may perform method 300, e.g., in response to receiving a request to execute an instruction from software.
The depicted method 300 includes processing a packed data convolution instruction with shift control and width control through the following steps. At 302, a single instruction is fetched, the instruction having fields identifying a first packed data source, a second packed data source, a packed data destination, a sliding window width, and a stride, and having an opcode instructing the execution circuitry to: generate a first block of consecutive elements of the first packed data source having a width of the sliding window width, generate a second block of consecutive elements of the first packed data source having a width of the sliding window width and shifted by the stride, multiply each element of the first block with the corresponding element of the corresponding block of the second packed data source to generate a first set of products, add the first set of products together to generate a first sum, multiply each element of the second block with the corresponding element of the corresponding block of the second packed data source to generate a second set of products, add the second set of products together to generate a second sum, and store the first sum in a first element of the packed data destination and the second sum in a second element of the packed data destination. At 304, the instruction is decoded into a decoded instruction. At 306, data associated with the identified source operands is retrieved. (Optionally) at 308, the decoded instruction is scheduled for execution. At 310, the decoded instruction is executed according to the opcode. (Optionally) at 312, where the packed data destination is also a third packed data source and the opcode instructs the execution circuitry to accumulate the first sum with a first value of a first element from the third packed data source to update the first sum and to accumulate the second sum with a second value of a second element from the third packed data source to update the second sum, the storing is storing the updated first sum in the first element of the packed data destination and the updated second sum in the second element of the packed data destination. At 314, the result of the executed instruction is committed.

FIG. 4 illustrates a circuit 400 including an execution circuit 410 having a sliding window circuit 414, a multiplier circuit 416, and an adder circuit 418 according to an embodiment of the disclosure. In some embodiments, the sliding window circuit 414, multiplier circuit 416, and/or adder circuit 418 are split into multiple lanes (e.g., to perform their operations on subsets of the data in parallel). In some embodiments, a decoder (e.g., decoder circuit 102 in FIG. 1 or decoder circuit 206 in FIG. 2) decodes an instruction into a decoded instruction that causes the execution circuit 410 to perform a packed data convolution operation with shift control and width control (e.g., as indicated by sliding window control 412) utilizing the sliding window circuit 414, multiplier circuit 416, and adder circuit 418 (e.g., the decoded instruction indicates to the execution circuit 410 which components are to be used, here the sliding window circuit 414, multiplier circuit 416, and adder circuit 418).

In some embodiments, the sliding window control 412 is indicated by an immediate value of the instruction. In some embodiments, the width of the window is selected from a 2-element width (e.g., where the element width is also indicated by the instruction), a 4-element width, a 6-element width, an 8-element width, a 10-element width, or any other number of element widths.
In some embodiments, the stride of each successive window (e.g., other than the first window) is selected from 1 element (e.g., where the element width is also indicated by the instruction), 2 elements, 3 elements, 4 elements, 5 elements, 6 elements, 7 elements, 8 elements, or any other number of elements.

In the depicted embodiment, the instruction format 401 includes one or more fields identifying a packed data destination 406, a first packed data source 402, a second packed data source 404, and a sliding window control 412. In the depicted embodiment, the first packed data source 402 and the second packed data source 404 each include a number "N" of elements (e.g., where N is any positive integer greater than 1) (indexed 0 through (N-1)), e.g., where each source (e.g., and each destination) has the same number of elements and/or the same bit width (e.g., 256 bits wide, 512 bits wide, or 1024 bits wide). In some embodiments, each element in a packed data source has the same bit width (e.g., one or more bytes), and/or each element in the first packed data source 402 and the second packed data source 404 has the same bit width (e.g., one or more bytes).

In some embodiments, each element in the packed data destination has the same bit width (e.g., one or more bytes), e.g., wider than the element width of the sources. In some embodiments, the sliding window circuit 414, multiplier circuit 416, and/or adder circuit 418 have the same (or different) hardware widths.

In some embodiments, the sliding window circuit 414 of the execution circuit 410 generates a first block of consecutive elements of the first packed data source 402 having a width of the sliding window width, a second block of consecutive elements of the first packed data source 402 having that width and shifted by the stride, and so on, e.g., to create a set of blocks whose cumulative width is the width of the second source 404. In some embodiments, the generation of each block is performed in parallel.
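The block generation described above can be sketched as follows, assuming list-based operands ordered from the window's starting element; the helper name is hypothetical:

```python
# Sketch of the block generation performed by the sliding window circuit,
# assuming list-based operands ordered from the window's starting element;
# the helper name is hypothetical.
def make_blocks(src1, window_width, stride, num_blocks):
    """Return num_blocks blocks of consecutive src1 elements, each
    window_width wide and shifted by stride from the previous block."""
    return [src1[i * stride : i * stride + window_width]
            for i in range(num_blocks)]
```

For example, `make_blocks(list(range(8)), window_width=4, stride=1, num_blocks=3)` yields the overlapping blocks `[[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5]]`.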
In some embodiments, the execution circuit 410 then causes the result from the sliding window circuit 414 to be sent to the multiplier circuit 416.

In some embodiments, the multiplier circuit 416 of the execution circuit 410 multiplies each element of the first block with the corresponding element of the corresponding block of the second packed data source 404 to generate a first set of products, multiplies each element of the second block with the corresponding element of the corresponding block of the second packed data source 404 to generate a second set of products, and so on, e.g., to create as many sets of products as there are pairs of blocks. In some embodiments, the multiplication is performed on each pair of elements in parallel (e.g., in each block). In some embodiments, the execution circuit 410 then causes the results from the multiplier circuit 416 to be sent to the adder circuit 418.

In some embodiments, the adder circuit 418 of the execution circuit 410 adds the first set of products together to generate a first sum, adds the second set of products together to generate a second sum, and so on, e.g., creating as many sums as the number of block pairs and/or the number of elements of the destination 406. In some embodiments, the addition is performed on each set of products in parallel (e.g., for each block pair).

In some embodiments, the data results from the adder circuit 418 are then stored into the corresponding elements of the destination 406.

FIG. 5 illustrates a circuit 500 including an execution circuit 510 having a sliding window circuit 514, a multiplier circuit 516, and an adder circuit 518 according to an embodiment of the disclosure. Although the example in FIG.
5 utilizes 16-element sources and a 4-element destination, it should be understood that this is one example and that other embodiments utilize other element widths.

In some embodiments, the sliding window circuit 514, multiplier circuit 516, and/or adder circuit 518 are split into multiple lanes (e.g., to perform their operations on subsets of the data in parallel). In some embodiments, a decoder (e.g., decoder circuit 102 in FIG. 1 or decoder circuit 206 in FIG. 2) decodes an instruction into a decoded instruction that causes the execution circuit 510 to perform a packed data convolution operation with shift control and width control (e.g., as indicated by sliding window control 512) utilizing the sliding window circuit 514, multiplier circuit 516, and adder circuit 518 (e.g., the decoded instruction indicates to the execution circuit 510 which components are to be used, here the sliding window circuit 514, multiplier circuit 516, and adder circuit 518).

In some embodiments, the sliding window control 512 is indicated by an immediate value of the instruction. In some embodiments, the width of the window is selected from a 2-element width (e.g., where the element width is also indicated by the instruction), a 4-element width, a 6-element width, an 8-element width, a 10-element width, or any other number of element widths; in FIG. 5 it is selected to be 4 elements wide. In some embodiments, the stride of each successive window (e.g., other than the first window) is selected from 1 element (e.g., where the element width is also indicated by the instruction), 2 elements, 3 elements, 4 elements, 5 elements, 6 elements, 7 elements, 8 elements, or any other number of elements; in FIG. 5 it is selected to be 1 element.

In the depicted embodiment, the instruction format 501 includes one or more fields identifying a packed data destination 506, a first packed data source 502, a second packed data source 504, and a sliding window control 512.
In the depicted embodiment, the first packed data source 502 and the second packed data source 504 each include sixteen elements (indexed 0 through 15), for example, where each source has the same number of elements and/or the same bit width (e.g., 256 bits, 512 bits, or 1024 bits wide). In some embodiments, each element in a packed data source has the same bit width (e.g., one or more bytes), and/or each of the elements in the first packed data source 502 has the same bit width (e.g., one or more bytes) as each of the elements in the second packed data source 504. In some embodiments, each element in the packed data destination has the same bit width (e.g., one or more bytes), e.g., wider than the element width of the sources. In some embodiments, sliding window circuit 514, multiplier circuit 516, and/or adder circuit 518 have the same (or different) hardware widths. In some embodiments, the sliding window circuit 514 of the execution circuit 510 generates a first block 520A (indexed A15-A12) of consecutive elements of the first packed data source 502 having a sliding window width of four elements; generates a second block 520B (indexed A14-A11) of consecutive elements of the first packed data source 502 having a sliding window width of four elements and shifted by a stride of one element (i.e., A14 is 1 from A15); generates a third block 520C (indexed A13-A10) of consecutive elements of the first packed data source 502 having a sliding window width of four elements and shifted by a stride of one element (i.e., A13 is 1 from A14); and generates a fourth block 520D (indexed A12-A9) of consecutive elements of the first packed data source 502 having a sliding window width of four elements and shifted by a stride of one element (i.e., A12 is 1 from A13), for example, to create a set of blocks 520A-520D whose cumulative width is the width of the second source 504 (e.g., 16 elements as shown). In some embodiments, the generation of each block is performed in parallel. In some embodiments, execution circuit 510 then causes the results from sliding window circuit 514 to be sent to multiplier circuit 516. In some embodiments, the multiplier circuit 516 of the execution circuit 510 multiplies each element of the first block 520A with the corresponding element of the corresponding block (indexed B15-B12) of the second packed data source 504 to generate a first set of products 522A, multiplies each element of the second block 520B with the corresponding element of the corresponding block (indexed B11-B8) of the second packed data source 504 to generate a second set of products 522B, multiplies each element of the third block 520C with the corresponding element of the corresponding block (indexed B7-B4) of the second packed data source 504 to generate a third set of products 522C, and multiplies each element of the fourth block 520D with the corresponding element of the corresponding block (indexed B3-B0) of the second packed data source 504 to generate a fourth set of products 522D, creating four sets of products for the four pairs of blocks. In some embodiments, the multiplications are performed on each pair of elements in parallel (e.g., within each block).
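The block extraction and per-block multiplies just described, together with the horizontal adds that follow, can be modeled behaviorally in Python. This is an illustrative sketch of the FIG. 5 semantics under the depicted window width of 4 and stride of 1; the function and variable names are ours, not the document's:

```python
def packed_sliding_window_conv(a, b, window=4, stride=1):
    """Behavioral model of the FIG. 5 packed data convolution.

    a, b: equal-length packed sources; index 0 is the lowest element.
    Block k of `a` is a `window`-wide run starting `k * stride` elements
    down from the top element; block k of `b` is the k-th contiguous
    `window`-wide chunk from the top. Each pair of blocks is multiplied
    element-wise and horizontally summed into one destination element.
    """
    hi = len(a) - 1
    n_blocks = len(b) // window          # 4 blocks for 16 elements, window 4
    dst = [0] * n_blocks
    for k in range(n_blocks):
        a_block = [a[hi - k * stride - j] for j in range(window)]
        b_block = [b[hi - k * window - j] for j in range(window)]
        # element-wise multiply, then horizontal add into one element
        dst[n_blocks - 1 - k] = sum(x * y for x, y in zip(a_block, b_block))
    return dst
```

For a = [0, 1, ..., 15] and b of all ones, this returns [42, 46, 50, 54], i.e., elements C0 through C3 as laid out in FIG. 5 (block A15-A12 lands in C3, block A12-A9 lands in C0).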
In some embodiments, execution circuit 510 then causes the results from multiplier circuit 516 to be sent to adder circuit 518. In some embodiments, adder circuit 518 of execution circuit 510 adds the first set of products 522A together to generate a first sum 524A, adds the second set of products 522B together to generate a second sum 524B, adds the third set of products 522C together to generate a third sum 524C, and adds the fourth set of products 522D together to generate a fourth sum 524D, thereby creating four sums for the four pairs of blocks and/or the four elements of destination 506. In some embodiments, the additions are performed on each set of products in parallel (e.g., for each block pair). In some embodiments, the data results from adder circuit 518 are then stored into corresponding elements of destination 506; for example, result 524A is stored into element C3 of destination 506, result 524B is stored into element C2 of destination 506, result 524C is stored into element C1 of destination 506, and result 524D is stored into element C0 of destination 506. FIG. 6 illustrates a circuit 600 including an execution circuit 610 having a sliding window circuit 614, a multiplier circuit 616, and an adder/accumulator circuit 618 according to an embodiment of the disclosure. In some embodiments, an instruction having format 601 has a field indicating a third packed data source (e.g., the same location as the destination). In some embodiments, sliding window circuit 614, multiplier circuit 616, and/or adder/accumulator circuit 618 are split into multiple lanes (e.g., to perform their operations on subsets of the data in parallel). In some embodiments, a decoder (e.g., decoder circuit 102 in FIG. 1 or decoder circuit 206 in FIG.
2) decodes the instruction into a decoded instruction that causes execution circuit 610 to perform a packed data convolution operation utilizing sliding window circuit 614, multiplier circuit 616, and adder/accumulator circuit 618 (e.g., the decoded instruction indicates to execution circuit 610 which components are to be used, here sliding window circuit 614, multiplier circuit 616, and adder/accumulator circuit 618). In some embodiments, the sliding window control 612 is indicated by an immediate value of the instruction. In some embodiments, the width of the window is selected from a 2-element width (e.g., where the element width is also indicated by the instruction), 4-element width, 6-element width, 8-element width, 10-element width, or any other number of elements. In some embodiments, the stride of each successive window (e.g., except the first window) is selected from 1 element (e.g., where the element width is also indicated by the instruction), 2 elements, 3 elements, 4 elements, 5 elements, 6 elements, 7 elements, 8 elements, or any other number of elements. In the depicted embodiment, the instruction format 601 includes one or more fields identifying a packed data destination 606, a first packed data source 602, a second packed data source 604, and a sliding window control 612. In the depicted embodiment, the first packed data source 602 and the second packed data source 604 each include a number "N" of elements (e.g., where N is any positive integer greater than 1) (e.g., indexed 0 to (N-1)), for example, where each source (e.g., and each destination) has the same number of elements and/or the same bit width (e.g., 256 bits, 512 bits, or 1024 bits wide).
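The relationship between source bit width, element bit width, and the element count N mentioned above is simple arithmetic; for instance, a 512-bit source of 32-bit elements carries 16 elements, the configuration shown in FIG. 5. A trivial helper (illustrative only):

```python
def num_packed_elements(vector_bits, element_bits):
    """Number of packed elements in a source of the given bit width."""
    assert vector_bits % element_bits == 0, "element width must divide vector width"
    return vector_bits // element_bits
```

So a 256-bit source of 16-bit elements, a 512-bit source of 32-bit elements, and a 1024-bit source of 64-bit elements all hold N = 16 elements.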
In some embodiments, each element in a packed data source has the same bit width (e.g., one or more bytes), and/or each of the elements in the first packed data source 602 has the same bit width (e.g., one or more bytes) as each of the elements in the second packed data source 604. In some embodiments, each element in the packed data destination has the same bit width (e.g., one or more bytes), e.g., wider than the element width of the sources. In some embodiments, sliding window circuit 614, multiplier circuit 616, and/or adder/accumulator circuit 618 have the same (or different) hardware widths. In some embodiments, the sliding window circuit 614 of the execution circuit 610 generates a first block of consecutive elements of the first packed data source 602 having a width of the sliding window width, generates a second block of consecutive elements of the first packed data source 602 having a width of the sliding window width and shifted by the stride, and so on, for example, to create a set of blocks whose cumulative width is the width of the second source 604. In some embodiments, the generation of each block is performed in parallel. In some embodiments, execution circuit 610 then causes the results from sliding window circuit 614 to be sent to multiplier circuit 616. In some embodiments, the multiplier circuit 616 of the execution circuit 610 multiplies each element of the first block with the corresponding element of the corresponding block of the second packed data source 604 to generate a first set of products, multiplies each element of the second block with the corresponding element of the corresponding block of the second packed data source 604 to generate a second set of products, and so on, e.g., to create one set of products per pair of blocks. In some embodiments, the multiplications of each pair of elements are performed in parallel (e.g., within each block).
In some embodiments, execution circuit 610 then causes the results from multiplier circuit 616 to be sent to adder/accumulator circuit 618. In some embodiments, the adder(s) of the adder/accumulator circuit 618 of the execution circuit 610 add the first set of products together to generate a first sum, add the second set of products together to generate a second sum, and so on, e.g., to create one sum per pair of blocks and/or per element of destination 606. In some embodiments, the additions are performed on each set of products in parallel (e.g., for each block pair). In some embodiments, the data results from adder/accumulator circuit 618 are then accumulated with the values from the third packed data source (e.g., destination/source 606): the first sum is accumulated with a first value of a first element from the third packed data source 606 to update the first sum (e.g., where destination/source 606 has a number of elements "M" (e.g., where M is any positive integer greater than 1) (e.g., indexed 0 to (M-1)) (e.g., where M is less than N)), the second sum is accumulated with a second value of a second element from the third packed data source 606 to update the second sum, and so on, for example, to create one updated sum per pair of blocks and/or per element of destination 606. In some embodiments, the accumulations are performed on each set of values in parallel (e.g., for each accumulator lane). In some embodiments, the data results from adder/accumulator circuit 618 are then stored into corresponding elements of destination 606; for example, the updated first sum is stored into the first element of the packed data destination, the updated second sum is stored into the second element of the packed data destination, and so on. Exemplary architectures, systems, etc. that may be used in the foregoing are detailed below.
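The accumulate variant of FIG. 6 differs from FIG. 5 only in that each per-block sum is added to the existing value of the corresponding destination/source element before being stored back. A behavioral sketch (illustrative Python with hypothetical names, not the hardware implementation):

```python
def sliding_window_conv_accumulate(a, b, acc, window=4, stride=1):
    """Behavioral model of the FIG. 6 operation: each windowed dot
    product is accumulated into the existing destination/source value.

    a, b: equal-length packed sources; acc: the third packed data
    source read from the destination register (one value per block).
    """
    hi = len(a) - 1
    n_blocks = len(b) // window
    out = list(acc)  # start from the third source (destination) values
    for k in range(n_blocks):
        s = sum(a[hi - k * stride - j] * b[hi - k * window - j]
                for j in range(window))
        out[n_blocks - 1 - k] += s  # accumulate, rather than overwrite
    return out
```

With a = [0, 1, ..., 15], b of all ones, and an accumulator of [100, 100, 100, 100], this returns [142, 146, 150, 154]: the FIG. 5 sums [42, 46, 50, 54] each offset by the prior destination contents.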
An exemplary (e.g., vector) instruction format for the instructions disclosed herein is detailed below. At least some embodiments of the disclosed technology may be described in terms of the following examples:

Example 1. An apparatus comprising:
a decoder circuit to decode a single instruction into a decoded single instruction, the single instruction having fields identifying a first packed data source, a second packed data source, a packed data destination, a sliding window width, and a stride, and having an opcode that instructs an execution circuit to:
generate a first block of consecutive elements of the first packed data source having a width of the sliding window width,
generate a second block of consecutive elements of the first packed data source having a width of the sliding window width and shifted by the stride,
multiply each element of the first block with a corresponding element of a corresponding block of the second packed data source to generate a first set of products,
add the first set of products together to generate a first sum,
multiply each element of the second block with a corresponding element of a corresponding block of the second packed data source to generate a second set of products,
add the second set of products together to generate a second sum, and
store the first sum into a first element of the packed data destination and the second sum into a second element of the packed data destination; and
an execution circuit to execute the decoded single instruction according to the opcode.
Example 2. The apparatus of example 1, wherein the sliding window width is selectable from a plurality of widths.
Example 3. The apparatus of example 2, wherein the sliding window width is identified by an immediate value of the single instruction.
Example 4. The apparatus of example 1, wherein the stride is selectable from a plurality of strides.
Example 5. The apparatus of example 4, wherein the stride is identified by an immediate value of the single instruction.
Example 6.
The apparatus of example 1, wherein the sliding window width and the stride are identified by an immediate value of the single instruction.
Example 7. The apparatus of example 1, wherein the first packed data source and the second packed data source have a same element width, and the packed data destination has a wider element width.
Example 8. The apparatus of example 1, wherein the packed data destination is also a third packed data source, and the opcode instructs the execution circuit to: before the storing, update the first sum by accumulating the first sum with a first value of a first element from the third packed data source, and update the second sum by accumulating the second sum with a second value of a second element from the third packed data source; and store the updated first sum into the first element of the packed data destination and the updated second sum into the second element of the packed data destination.
Example 9. A method comprising:
decoding, by a decoder circuit of a processor core, a single instruction into a decoded single instruction, the single instruction having fields identifying a first packed data source, a second packed data source, a packed data destination, a sliding window width, and a stride, and having an opcode that instructs an execution circuit to:
generate a first block of consecutive elements of the first packed data source having a width of the sliding window width,
generate a second block of consecutive elements of the first packed data source having a width of the sliding window width and shifted by the stride,
multiply each element of the first block with a corresponding element of a corresponding block of the second packed data source to generate a first set of products,
add the first set of products together to generate a first sum,
multiply each element of the second block with a corresponding element of a corresponding block of the second packed data source to generate a second set of
products,
add the second set of products together to generate a second sum, and
store the first sum into a first element of the packed data destination and the second sum into a second element of the packed data destination; and
executing, by an execution circuit of the processor core, the decoded single instruction according to the opcode.
Example 10. The method of example 9, wherein the sliding window width is selectable from a plurality of widths.
Example 11. The method of example 10, wherein the sliding window width is identified by an immediate value of the single instruction.
Example 12. The method of example 9, wherein the stride is selectable from a plurality of strides.
Example 13. The method of example 12, wherein the stride is identified by an immediate value of the single instruction.
Example 14. The method of example 9, wherein the sliding window width and the stride are identified by an immediate value of the single instruction.
Example 15. The method of example 9, wherein the first packed data source and the second packed data source have a same element width, and the packed data destination has a wider element width.
Example 16. The method of example 9, wherein the packed data destination is also a third packed data source, and the opcode instructs the execution circuit to: before the storing, update the first sum by accumulating the first sum with a first value of a first element from the third packed data source, and update the second sum by accumulating the second sum with a second value of a second element from the third packed data source; and store the updated first sum into the first element of the packed data destination and the updated second sum into the second element of the packed data destination.
Example 17.
A non-transitory machine-readable medium storing code that, when executed by a machine, causes the machine to perform a method comprising:
decoding, by a decoder circuit of a processor core, a single instruction into a decoded single instruction, the single instruction having fields identifying a first packed data source, a second packed data source, a packed data destination, a sliding window width, and a stride, and having an opcode that instructs an execution circuit to:
generate a first block of consecutive elements of the first packed data source having a width of the sliding window width,
generate a second block of consecutive elements of the first packed data source having a width of the sliding window width and shifted by the stride,
multiply each element of the first block with a corresponding element of a corresponding block of the second packed data source to generate a first set of products,
add the first set of products together to generate a first sum,
multiply each element of the second block with a corresponding element of a corresponding block of the second packed data source to generate a second set of products,
add the second set of products together to generate a second sum, and
store the first sum into a first element of the packed data destination and the second sum into a second element of the packed data destination; and
executing, by an execution circuit of the processor core, the decoded single instruction according to the opcode.
Example 18. The non-transitory machine-readable medium of example 17, wherein the sliding window width is selectable from a plurality of widths.
Example 19. The non-transitory machine-readable medium of example 18, wherein the sliding window width is identified by an immediate value of the single instruction.
Example 20. The non-transitory machine-readable medium of example 17, wherein the stride is selectable from a plurality of strides.
Example 21.
The non-transitory machine-readable medium of example 20, wherein the stride is identified by an immediate value of the single instruction.
Example 22. The non-transitory machine-readable medium of example 17, wherein the sliding window width and the stride are identified by an immediate value of the single instruction.
Example 23. The non-transitory machine-readable medium of example 17, wherein the first packed data source and the second packed data source have a same element width, and the packed data destination has a wider element width.
Example 24. The non-transitory machine-readable medium of example 17, wherein the packed data destination is also a third packed data source, and the opcode instructs the execution circuit to: before the storing, update the first sum by accumulating the first sum with a first value of a first element from the third packed data source, and update the second sum by accumulating the second sum with a second value of a second element from the third packed data source; and store the updated first sum into the first element of the packed data destination and the updated second sum into the second element of the packed data destination.

In yet another embodiment, an apparatus includes a data storage device storing code that, when executed by a hardware processor, causes the hardware processor to perform any of the methods disclosed herein. The apparatus can be as described in the detailed description. The methods can be as described in the detailed description.

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., the opcode), the operand(s) on which that operation is to be performed, and/or other data field(s) (e.g., masks). Some instruction formats are further broken down through the definition of instruction templates (or subformats).
For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD (addition) instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source 1/destination and source 2); an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (see, e.g., Intel® 64 and IA-32 Architectures Software Developer's Manual, November 2018; and see Intel® Architecture Instruction Set Extensions Programming Reference, October 2018).

Exemplary Instruction Format

Embodiments of the instruction(s) described herein can be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations).
Although embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

FIGS. 7A-7B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the disclosure. FIG. 7A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the disclosure; and FIG. 7B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the disclosure. Specifically, class A and class B instruction templates are defined for the generic vector friendly instruction format 700, both of which include no memory access 705 instruction templates and memory access 720 instruction templates. The term "generic" in the context of the vector friendly instruction format refers to an instruction format that is not tied to any specific instruction set.

While embodiments of the present disclosure will be described in which the vector friendly instruction format supports the following: a 64-byte vector operand length (or size) with 32-bit (4-byte) or 64-bit (8-byte) data element widths (or sizes) (and thus, a 64-byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64-byte vector operand length (or size) with 16-bit (2-byte) or 8-bit (1-byte) data element widths (or sizes); a 32-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); and a 16-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); alternative embodiments may support larger, smaller, and/or different
vector operand sizes (e.g., 256-byte vector operands) with larger, smaller, or different data element widths (e.g., 128-bit (16-byte) data element widths).

The class A instruction templates in FIG. 7A include: 1) within the no memory access 705 instruction templates, a no memory access, full round control type operation 710 instruction template and a no memory access, data transform type operation 715 instruction template are shown; and 2) within the memory access 720 instruction templates, a memory access, temporal 725 instruction template and a memory access, non-temporal 730 instruction template are shown. The class B instruction templates in FIG. 7B include: 1) within the no memory access 705 instruction templates, a no memory access, write mask control, partial round control type operation 712 instruction template and a no memory access, write mask control instruction template are shown; and 2) within the memory access 720 instruction templates, a memory access, write mask control 727 instruction template is shown.

The generic vector friendly instruction format 700 includes the following fields listed below in the order illustrated in FIGS. 7A-7B.

Format field 740 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus identifies occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 742 - its content distinguishes different base operations.

Register index field 744 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory.
These fields include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, or may support up to two sources and one destination).

Modifier field 746 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 705 instruction templates and memory access 720 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destination are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Augmentation operation field 750 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the present disclosure, this field is divided into a class field 768, an alpha field 752, and a beta field 754.
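As a rough illustration of the "sufficient number of bits" point above: selecting a register from a file of P registers costs ceil(log2(P)) bits per register operand. A small sketch (illustrative arithmetic only, not a defined encoding of field 744):

```python
import math

def register_index_field_bits(num_regs, num_reg_operands):
    """Bits needed to select `num_reg_operands` registers from a file
    of `num_regs` registers (e.g., the P in a PxQ register file)."""
    return num_reg_operands * math.ceil(math.log2(num_regs))
```

For example, selecting three source registers and one destination register from a 32x512 register file requires 4 x 5 = 20 bits of register index information.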
The augmentation operation field 750 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 760 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

Displacement field 762A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement factor field 762B (note that the juxtaposition of displacement field 762A directly over displacement factor field 762B indicates that one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N), where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored, and hence the displacement factor field's content is multiplied by the total size of the memory operands (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 774 (described later herein) and the data manipulation field 754C. The displacement field 762A and the displacement factor field 762B are optional in the sense that they are not used for the no memory access 705 instruction templates and/or different embodiments may implement only one or neither of the two.

Data element width field 764 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments, for all instructions; in other embodiments, for only some of the instructions).
This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 770 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and the augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging-writemasking and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, the old value of each element of the destination where the corresponding mask bit has a 0 is preserved. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 770 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments of the disclosure are described in which the write mask field 770's content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field 770's content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field 770's content to directly specify the masking to be performed.

Immediate field 772 - its content allows for the specification of an immediate.
This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support immediates and it is not present in instructions that do not use an immediate.

Class field 768 - its content distinguishes between different classes of instructions. With reference to FIGS. 7A-7B, the contents of this field select between class A and class B instructions. In FIGS. 7A-7B, rounded corner squares are used to indicate that a specific value is present in a field (e.g., class A 768A and class B 768B, respectively, for the class field 768 in FIGS. 7A-7B).

Instruction Templates of Class A

In the case of the no memory access 705 instruction templates of class A, the alpha field 752 is interpreted as an RS field 752A whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 752A.1 and data transform 752A.2 are respectively specified for the no memory access, round type operation 710 instruction template and the no memory access, data transform type operation 715 instruction template), while the beta field 754 distinguishes which of the operations of the specified type is to be performed. In the no memory access 705 instruction templates, the scale field 760, the displacement field 762A, and the displacement factor field 762B are not present.

No Memory Access Instruction Templates - Full Round Control Type Operation

In the no memory access, full round control type operation 710 instruction template, the beta field 754 is interpreted as a round control field 754A whose content(s) provide static rounding.
While in the described embodiments of the present disclosure the round control field 754A includes a suppress all floating-point exceptions (SAE) field 756 and a round operation control field 758, alternative embodiments may support encoding both of these concepts into the same field, or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 758).

SAE field 756 - its content distinguishes whether or not to disable exception event reporting; when the content of the SAE field 756 indicates that suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not invoke any floating-point exception handler.

Round operation control field 758 - its content distinguishes which one of a group of rounding operations is to be performed (e.g., round up, round down, round toward zero, and round to nearest). Thus, the round operation control field 758 allows the rounding mode to be changed on a per-instruction basis. In one embodiment of the present disclosure in which a processor includes a control register for specifying rounding modes, the content of the round operation control field 758 overrides that register value.

Instruction templates of no memory access - data transform type operation

In the instruction template of the no memory access, data transform type operation 715, the beta field 754 is interpreted as a data transform field 754B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of the instruction templates of the class A memory access 720, the alpha field 752 is interpreted as an eviction hint field 752B, whose content distinguishes which one of the eviction hints is to be used (in Figure 7A, temporal 752B.1 and non-temporal 752B.2 are respectively specified for the memory access, temporal instruction template and the memory access, non-temporal 730 instruction template), while the beta field 754 is interpreted as a data manipulation field 754C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation, broadcast, up conversion of a source, and down conversion of a destination). The memory access 720 instruction templates include the scale field 760, and optionally the displacement field 762A or the displacement scale field 762B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Instruction templates of memory access - temporal

Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction templates of memory access - non-temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the first-level cache, and should be given priority for eviction.
This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction templates of class B

In the case of the instruction templates of class B, the alpha field 752 is interpreted as a writemask control (Z) field 752C, whose content distinguishes whether the writemasking controlled by the writemask field 770 should be merging or zeroing.

In the case of the instruction templates of the class B non-memory access 705, part of the beta field 754 is interpreted as an RL field 757A, whose content distinguishes which of the different augmentation operation types is to be performed (e.g., round 757A.1 and vector length (VSIZE) 757A.2 are respectively specified for the instruction template of the no memory access, writemask control, partial round control type operation 712 and the instruction template of the no memory access, writemask control, VSIZE type operation 717), while the rest of the beta field 754 distinguishes which of the operations of the specified type is to be performed. In the no memory access 705 instruction templates, the scale field 760, the displacement field 762A, and the displacement scale field 762B are not present.

In the instruction template of the no memory access, writemask control, partial round control type operation 712, the rest of the beta field 754 is interpreted as a round operation field 759A, and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not invoke any floating-point exception handler).

Round operation control field 759A - just as with the round operation control field 758, its content distinguishes which one of a group of rounding operations is to be performed (e.g., round up, round down, round toward zero, and round to nearest). Thus, the round operation control field 759A allows the rounding mode to be changed on a per-instruction basis.
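The four rounding operations named above (round up, round down, round toward zero, round to nearest) can be illustrated with Python's `decimal` module. This is a hedged illustration of the four modes as mathematical operations, not a model of the architected floating-point hardware:

```python
# Illustration of the four rounding modes named in the text, using
# Python's decimal module (not the architected FP rounding hardware).
from decimal import Decimal, ROUND_CEILING, ROUND_FLOOR, ROUND_DOWN, ROUND_HALF_EVEN

x = Decimal("2.5")
modes = {
    "round up":         ROUND_CEILING,    # toward +infinity
    "round down":       ROUND_FLOOR,      # toward -infinity
    "round toward zero": ROUND_DOWN,
    "round to nearest": ROUND_HALF_EVEN,  # nearest, ties to even
}
for name, mode in modes.items():
    print(name, "->", x.quantize(Decimal("1"), rounding=mode))
# 2.5 rounds to 3 (up) but to 2 under the other three modes.
```

Because the round operation field is carried per instruction, the mode can differ instruction by instruction without rewriting a global control register.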
In one embodiment of the present disclosure in which a processor includes a control register for specifying rounding modes, the content of the round operation control field 759A overrides that register value.

In the instruction template of the no memory access, writemask control, VSIZE type operation 717, the rest of the beta field 754 is interpreted as a vector length field 759B, whose content distinguishes which one of a number of data vector lengths is to be operated on (e.g., 128, 256, or 512 bits).

In the case of the instruction templates of the class B memory access 720, part of the beta field 754 is interpreted as a broadcast field 757B, whose content distinguishes whether or not a broadcast type data manipulation operation is to be performed, while the rest of the beta field 754 is interpreted as the vector length field 759B. The memory access 720 instruction templates include the scale field 760, and optionally the displacement field 762A or the displacement scale field 762B.

With regard to the generic vector friendly instruction format 700, a full opcode field 774 is shown including the format field 740, the base operation field 742, and the data element width field 764. While one embodiment is shown in which the full opcode field 774 includes all of these fields, in embodiments that do not support all of them the full opcode field 774 includes less than all of these fields. The full opcode field 774 provides the operation code (opcode).

The augmentation operation field 750, the data element width field 764, and the writemask field 770 allow these features to be specified on a per-instruction basis in the generic vector friendly instruction format. The combination of the writemask field and the data element width field creates typed instructions, in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations.
In some embodiments of the present disclosure, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes, but not all templates and instructions from both classes, is within the purview of this disclosure). Also, a single processor may include multiple cores, all of which support the same class, or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out-of-order execution and register renaming intended for general purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the present disclosure.
Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor that is currently executing the code.

Exemplary specific vector friendly instruction format

Figure 8A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the disclosure. Figure 8A shows a specific vector friendly instruction format 800 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 800 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate field of the existing x86 instruction set with extensions. The fields from Figures 7A-7B into which the fields from Figure 8A map are illustrated.

It should be understood that, although embodiments of the present disclosure are described with reference to the specific vector friendly instruction format 800 in the context of the generic vector friendly instruction format 700 for illustrative purposes, the disclosure is not limited to the specific vector friendly instruction format 800 except where otherwise stated. For example, the generic vector friendly instruction format 700 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 800 is shown as having fields of specific sizes.
By way of specific example, while the data element width field 764 is illustrated as a one-bit field in the specific vector friendly instruction format 800, the disclosure is not so limited (that is, the generic vector friendly instruction format 700 contemplates other sizes for the data element width field 764).

The generic vector friendly instruction format 700 includes the following fields, listed below in the order illustrated in Figure 8A.

EVEX prefix (bytes 0-3) 802 - is encoded in a four-byte form.

Format field 740 (EVEX byte 0, bits [7:0]) - the first byte (EVEX byte 0) is the format field 740, and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the disclosure).

The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields providing specific capability.

REX field 805 (EVEX byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX byte 1, bit [7] - R), an EVEX.X bit field (EVEX byte 1, bit [6] - X), and an EVEX.B bit field (EVEX byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1's complement form, e.g., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 710 - this is the first part of the REX' field 710 and is the EVEX.R' bit field (EVEX byte 1, bit [4] - R').
In one embodiment of the present disclosure, this bit, along with others as indicated below, is stored in bit-inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but which does not accept the value of 11 in the MOD field in the MOD R/M field (described below); alternative embodiments of the present disclosure do not store this and the other indicated bits below in inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 815 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3).

Data element width field 764 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the data type (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 820 (EVEX byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1's complement) form, and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1's complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, in which case the field is reserved and should contain 1111b. Thus, the EVEX.vvvv field 820 encodes the 4 low-order bits of the first source register specifier, stored in inverted (1's complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 768 class field (EVEX byte 2, bit [2] - U) - if EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 825 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field.
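The bit layout of EVEX bytes 1-3 described above can be sketched as straightforward bit slicing. The following is an illustrative sketch only: the byte values passed in are made up for demonstration and do not encode any real instruction, and the raw field values are returned uninverted (e.g., EVEX.vvvv as stored is the 1's complement of the register specifier):

```python
# Illustrative bit slicing of the EVEX prefix fields described above.
# Field names follow the text; byte values below are invented examples.

def decode_evex(b0, b1, b2, b3):
    assert b0 == 0x62  # format field 740
    return {
        "R":     (b1 >> 7) & 1,    # REX field 805: R, X, B
        "X":     (b1 >> 6) & 1,
        "B":     (b1 >> 5) & 1,
        "R2":    (b1 >> 4) & 1,    # REX' field (R')
        "mmmm":  b1 & 0xF,         # opcode map field 815
        "W":     (b2 >> 7) & 1,    # data element width field 764
        "vvvv":  (b2 >> 3) & 0xF,  # EVEX.vvvv 820 (raw, 1's complement)
        "U":     (b2 >> 2) & 1,    # class field 768
        "pp":    b2 & 0x3,         # prefix encoding field 825
        "alpha": (b3 >> 7) & 1,    # alpha field 752
        "beta":  (b3 >> 4) & 0x7,  # beta field 754
        "V2":    (b3 >> 3) & 1,    # REX' field (V')
        "kkk":   b3 & 0x7,         # writemask field 770
    }

f = decode_evex(0x62, 0xF1, 0x7C, 0x48)
print(f["mmmm"], f["W"], f["vvvv"], f["pp"], f["kkk"])
```

Here `vvvv == 0b1111` in raw form, which per the text means either an inverted specifier of register 0 or, for instructions with no vvvv operand, the reserved all-ones value.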
In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime they are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX formats of these legacy instructions without modification). While newer instructions could use the content of the EVEX prefix encoding field directly as an opcode extension, certain embodiments expand in a similar fashion for consistency, but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 752 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.writemask control, and EVEX.N; also illustrated with alpha) - as previously described, this field is context specific.

Beta field 754 (EVEX byte 3, bits [6:4] - SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with beta-beta-beta) - as previously described, this field is context specific.

REX' field 710 - this is the remainder of the REX' field, and is the EVEX.V' bit field (EVEX byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit-inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Writemask field 770 (EVEX byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the writemask registers, as previously described.
In one embodiment of the present disclosure, the specific value EVEX.kkk = 000 has special behavior implying that no writemask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a writemask hardwired to all ones, or hardware that bypasses the masking hardware).

Real opcode field 830 (byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.

MOD R/M field 840 (byte 5) includes MOD field 842, Reg field 844, and R/M field 846. As previously described, the content of the MOD field 842 distinguishes between memory access and non-memory access operations. The role of the Reg field 844 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of the R/M field 846 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) byte (byte 6) - as previously described, the content of the scale field 750 is used for memory address generation. SIB.xxx 854 and SIB.bbb 856 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 762A (bytes 7-10) - when the MOD field 842 contains 10, bytes 7-10 are the displacement field 762A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 762B (byte 7) - when the MOD field 842 contains 01, byte 7 is the displacement factor field 762B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity.
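The split of the MOD R/M byte into its three fields can be sketched directly. This is a generic illustration of the byte layout (2/3/3 bits), with invented helper names:

```python
# Sketch of splitting a MOD R/M byte into the three fields described
# above: MOD field 842 (bits 7:6), Reg field 844 (bits 5:3), and
# R/M field 846 (bits 2:0). Helper name is illustrative.

def decode_modrm(byte):
    return {
        "mod": (byte >> 6) & 0x3,
        "reg": (byte >> 3) & 0x7,
        "rm":  byte & 0x7,
    }

print(decode_modrm(0xC1))  # {'mod': 3, 'reg': 0, 'rm': 1}
```

A `mod` value of 11 (3) indicates a register operand (no memory access), matching the non-memory-access interpretation discussed for the augmentation operation field.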
Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values: -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 762B is a reinterpretation of disp8; when the displacement factor field 762B is used, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement, but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 762B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 762B is encoded the same way as the x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). The immediate field 772 operates as previously described.

Full opcode field

Figure 8B is a block diagram illustrating the fields of the specific vector friendly instruction format 800 that make up the full opcode field 774 according to one embodiment of the present disclosure. Specifically, the full opcode field 774 includes the format field 740, the base operation field 742, and the data element width (W) field 764.
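The disp8*N rule above amounts to a single multiplication: the stored signed byte is scaled by the memory operand size N. A minimal sketch (the function name is invented for illustration):

```python
# Sketch of the disp8*N compressed displacement described above: the
# stored signed 8-bit value is scaled by the memory operand size N,
# so redundant low-order bits of the offset are never encoded.

def effective_disp(disp8, n):
    """disp8: signed 8-bit stored displacement; n: operand size in bytes."""
    assert -128 <= disp8 <= 127
    return disp8 * n

# With a 64-byte (full 512-bit vector) access, one signed byte now
# covers offsets from -8192 to +8128 in 64-byte steps.
print(effective_disp(2, 64))     # 128
print(effective_disp(-128, 64))  # -8192
```

This shows why the encoding length never changes: only the hardware's interpretation of the stored byte does.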
The base operation field 742 includes the prefix encoding field 825, the opcode map field 815, and the real opcode field 830.

Register index field

Figure 8C is a block diagram illustrating the fields of the specific vector friendly instruction format 800 that make up the register index field 744 according to one embodiment of the disclosure. Specifically, the register index field 744 includes the REX field 805, the REX' field 810, the MODR/M.reg field 844, the MODR/M.r/m field 846, the VVVV field 820, the xxx field 854, and the bbb field 856.

Augmentation operation field

Figure 8D is a block diagram illustrating the fields of the specific vector friendly instruction format 800 that make up the augmentation operation field 750 according to one embodiment of the present disclosure. When the class (U) field 768 contains 0, it signifies EVEX.U0 (class A 768A); when it contains 1, it signifies EVEX.U1 (class B 768B). When U = 0 and the MOD field 842 contains 11 (signifying a no memory access operation), the alpha field 752 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 752A. When the rs field 752A contains a 1 (round 752A.1), the beta field 754 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 754A. The round control field 754A includes a one-bit SAE field 756 and a two-bit round operation field 758. When the rs field 752A contains a 0 (data transform 752A.2), the beta field 754 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 754B. When U = 0 and the MOD field 842 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 752 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 752B and the beta field 754 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three-bit data manipulation field 754C.

When U = 1, the alpha field 752 (EVEX byte 3, bit [7] - EH) is interpreted as the writemask control (Z) field 752C.
When U = 1 and the MOD field 842 contains 11 (signifying a no memory access operation), part of the beta field 754 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 757A; when it contains a 1 (round 757A.1), the rest of the beta field 754 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 759A, while when the RL field 757A contains a 0 (VSIZE 757A.2), the rest of the beta field 754 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 759B (EVEX byte 3, bits [6-5] - L1-0). When U = 1 and the MOD field 842 contains 00, 01, or 10 (signifying a memory access operation), the beta field 754 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 759B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 757B (EVEX byte 3, bit [4] - B).

Exemplary register architecture

Figure 9 is a block diagram of a register architecture 900 according to one embodiment of the disclosure. In the embodiment illustrated, there are 32 vector registers 910 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 800 operates on these overlaid register files, as illustrated in the table below.

In other words, the vector length field 759B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length, and instruction templates without the vector length field 759B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 800 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data.
Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Writemask registers 915 - in the embodiment illustrated, there are 8 writemask registers (k0 through k7), each 64 bits in size. In an alternative embodiment, the writemask registers 915 are 16 bits in size. As previously described, in one embodiment of the present disclosure the vector mask register k0 cannot be used as a writemask; when the encoding that would normally indicate k0 is used for a writemask, it selects a hardwired writemask of 0xFFFF, effectively disabling writemasking for that instruction.

General purpose registers 925 - in the embodiment illustrated, there are sixteen 64-bit general purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 945, on which is aliased the MMX packed integer flat register file 950 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the present disclosure may use wider or narrower registers. Additionally, alternative embodiments of the present disclosure may use more, fewer, or different register files and registers.

Exemplary core architectures, processors, and computer architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors.
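The register overlay described above (xmm as the low 128 bits of ymm, ymm as the low 256 bits of zmm) can be modeled as bit slices of a single 512-bit value. This is a hedged software model with invented names, not the hardware register file:

```python
# Model of the zmm/ymm/xmm aliasing described above: the narrower
# registers are the low-order bits of the corresponding zmm register,
# modeled here as slices of one 512-bit Python integer.

def low_bits(value, bits):
    """Return the low-order `bits` bits of `value`."""
    return value & ((1 << bits) - 1)

# Invented contents: distinct patterns in bits [256+], [128:256), [0:128).
zmm0 = (0xDEAD << 256) | (0xBEEF << 128) | 0x1234
ymm0 = low_bits(zmm0, 256)   # drops the 0xDEAD part above bit 256
xmm0 = low_bits(zmm0, 128)   # drops everything above bit 128
print(hex(xmm0))             # 0x1234
```

Writing to the modeled xmm0 slice would leave the higher-order zmm0 bits either unchanged or zeroed, which is exactly the embodiment-dependent behavior noted for scalar operations.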
For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general purpose computing; 2) a high performance general purpose out-of-order core intended for general purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general purpose computing and/or one or more general purpose out-of-order cores intended for general purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary core architectures

In-order and out-of-order core block diagram

Figure 10A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the present disclosure. Figure 10B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the present disclosure.
The solid lined boxes in Figures 10A-10B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 10A, a processor pipeline 1000 includes a fetch stage 1002, a length decode stage 1004, a decode stage 1006, an allocation stage 1008, a renaming stage 1010, a scheduling (also known as a dispatch or issue) stage 1012, a register read/memory read stage 1014, an execute stage 1016, a write back/memory write stage 1018, an exception handling stage 1022, and a commit stage 1024.

Figure 10B shows a processor core 1090 including a front end unit 1030 coupled to an execution engine unit 1050, and both the front end unit 1030 and the execution engine unit 1050 are coupled to a memory unit 1070. The core 1090 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1090 may be a special purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 1030 includes a branch prediction unit 1032 coupled to an instruction cache unit 1034, which is coupled to an instruction translation lookaside buffer (TLB) 1036, which is coupled to an instruction fetch unit 1038, which is coupled to a decode unit 1040. The decode unit 1040 (or decoder or decoder unit) may decode instructions (e.g., macroinstructions), and generate as an output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions.
The decode unit 1040 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1090 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 1040, or otherwise within the front end unit 1030). The decode unit 1040 is coupled to a rename/allocator unit 1052 in the execution engine unit 1050.

The execution engine unit 1050 includes the rename/allocator unit 1052, coupled to a retirement unit 1054 and a set 1056 of one or more scheduler units. The scheduler unit(s) 1056 represents any number of different schedulers, including reservations stations, central instruction window, etc. The scheduler unit(s) 1056 is coupled to the physical register file unit(s) 1058. Each of the physical register file unit(s) 1058 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file unit(s) 1058 comprises a vector register unit, a writemask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file unit(s) 1058 is overlapped by the retirement unit 1054 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.).
The retirement unit 1054 and the physical register file unit(s) 1058 are coupled to the execution cluster(s) 1060. The execution cluster(s) 1060 includes a set 1062 of one or more execution units and a set 1064 of one or more memory access units. The execution units 1062 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit, or multiple execution units that all perform all functions. The scheduler unit(s) 1056, physical register file unit(s) 1058, and execution cluster(s) 1060 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file unit(s), and/or execution cluster; and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1064). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set 1064 of memory access units is coupled to the memory unit 1070, which includes a data TLB unit 1072 coupled to a data cache unit 1074 coupled to a level 2 (L2) cache unit 1076. In one exemplary embodiment, the memory access units 1064 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1072 in the memory unit 1070. The instruction cache unit 1034 is further coupled to the level 2 (L2) cache unit 1076 in the memory unit 1070.
The L2 cache unit 1076 is coupled to one or more other levels of cache and, eventually, to main memory. In some embodiments, prefetch circuitry 1078 is included to prefetch data, e.g., to predict access addresses and bring the data for those addresses into one or more caches (e.g., from memory 1080).

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1000 as follows: 1) the instruction fetch 1038 performs the fetch stage 1002 and the length decode stage 1004; 2) the decode unit 1040 performs the decode stage 1006; 3) the rename/allocator unit 1052 performs the allocation stage 1008 and the rename stage 1010; 4) the scheduler unit(s) 1056 perform the schedule stage 1012; 5) the physical register file unit(s) 1058 and the memory unit 1070 perform the register read/memory read stage 1014, and the execution cluster 1060 performs the execute stage 1016; 6) the memory unit 1070 and the physical register file unit(s) 1058 perform the write back/memory write stage 1018; 7) various units may be involved in the exception handling stage 1022; and 8) the retirement unit 1054 and the physical register file unit(s) 1058 perform the commit stage 1024.

The core 1090 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies, Inc. of Sunnyvale, Calif.; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, Calif.), including the instruction(s) described herein.
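The stage-to-unit assignment enumerated above can be expressed as a simple ordered table, as in the following illustrative sketch. The names mirror the reference numerals in the text, but the code itself is a hypothetical modeling aid and not part of the disclosure.

```python
# Ordered table of (pipeline stage, responsible unit) pairs, as enumerated
# in the text for pipeline 1000.

PIPELINE_1000 = [
    ("fetch",                     "instruction fetch 1038"),
    ("length decode",             "instruction fetch 1038"),
    ("decode",                    "decode unit 1040"),
    ("allocation",                "rename/allocator unit 1052"),
    ("rename",                    "rename/allocator unit 1052"),
    ("schedule",                  "scheduler unit(s) 1056"),
    ("register read/memory read", "physical register file unit(s) 1058 + memory unit 1070"),
    ("execute",                   "execution cluster 1060"),
    ("write back/memory write",   "memory unit 1070 + physical register file unit(s) 1058"),
    ("exception handling",        "various units"),
    ("commit",                    "retirement unit 1054 + physical register file unit(s) 1058"),
]

def trace(instr):
    """Yield the (stage, unit) pairs an instruction flows through, in order."""
    for stage, unit in PIPELINE_1000:
        yield stage, unit

stages = [s for s, _ in trace("add r1, r2")]
print(stages[0], "->", stages[-1])  # fetch -> commit
```

Such a table makes explicit that a single unit (e.g., the rename/allocator unit 1052) can be responsible for more than one consecutive stage.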
In one embodiment, the core 1090 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding followed by simultaneous multithreading, such as in hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1034/1074 and a shared L2 cache unit 1076, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level one (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

FIGS. 11A-11B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
Depending on the application, the logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic.

FIG. 11A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1102 and with its local subset 1104 of the level two (L2) cache, according to embodiments of the disclosure. In one embodiment, an instruction decode unit 1100 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1106 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 1108 and a vector unit 1110 use separate register sets (respectively, scalar registers 1112 and vector registers 1114) and data transferred between them is written to memory and then read back in from a level one (L1) cache 1106, alternative embodiments of the disclosure may use a different approach (e.g., use a single register set, or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset 1104 of the L2 cache is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset 1104 of the L2 cache. Data read by a processor core is stored in its L2 cache subset 1104 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1104 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip.
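The local-subset behavior just described (a write lands in the writer's own subset, and stale copies elsewhere are flushed) can be sketched with a small toy model. This is an assumption-laden simplification; real coherence protocols and the ring transport are far more involved, and all names here are invented for illustration.

```python
# Toy model of per-core local L2 subsets: reads fill the reader's own
# subset, and a write invalidates (flushes) any other core's copy of the
# line, so no core can observe stale shared data.

class LocalL2Subsets:
    def __init__(self, num_cores):
        # One local subset per core: a dict mapping address -> data.
        self.subsets = [dict() for _ in range(num_cores)]

    def read(self, core, addr, memory):
        self.subsets[core][addr] = memory[addr]  # fill reader's local subset
        return self.subsets[core][addr]

    def write(self, core, addr, data):
        for i, subset in enumerate(self.subsets):
            if i != core:
                subset.pop(addr, None)           # flush stale copies elsewhere
        self.subsets[core][addr] = data          # store in the writer's subset

mem = {0x40: "old"}
l2 = LocalL2Subsets(num_cores=2)
l2.read(0, 0x40, mem)     # core 0 caches the line in its local subset
l2.write(1, 0x40, "new")  # core 1 writes: core 0's stale copy is flushed
```

After the write, core 0 no longer holds the line and would have to re-fetch the new value, which is the invariant the ring-based coherence scheme preserves.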
Each ring data path is 1012 bits wide per direction.

FIG. 11B is an expanded view of part of the processor core in FIG. 11A according to embodiments of the disclosure. FIG. 11B includes an L1 data cache 1106A (part of the L1 cache 1104), as well as more detail regarding the vector unit 1110 and the vector registers 1114. Specifically, the vector unit 1110 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1128), which executes one or more of integer, single-precision floating point, and double-precision floating point instructions. The VPU supports mixing the register inputs with the mixing unit 1120, numeric conversion with the numeric conversion units 1122A-B, and replication of the memory input with the replication unit 1124. The write mask registers 1126 allow masking of the resulting vector writes.

FIG. 12 is a block diagram of a processor 1200 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to embodiments of the disclosure. The solid lined boxes in FIG. 12 illustrate a processor 1200 with a single core 1202A, a system agent 1210, and a set 1216 of one or more bus controller units, while the optional addition of the dashed lined boxes illustrates an alternative processor 1200 with multiple cores 1202A-N, a set 1214 of one or more integrated memory controller units in the system agent unit 1210, and special purpose logic 1208.

Thus, different implementations of the processor 1200 may include: 1) a CPU with the special purpose logic 1208 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1202A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 1202A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1202A-N being a large number of general purpose in-order cores.
Thus, the processor 1200 may be a general-purpose processor, a coprocessor, or a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1200 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set 1206 of one or more shared cache units, and external memory (not shown) coupled to the set 1214 of integrated memory controller units. The set 1206 of shared cache units may include one or more mid-level caches, such as a level two (L2), level three (L3), level four (L4), or other level of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 1212 interconnects the integrated graphics logic 1208, the set 1206 of shared cache units, and the system agent unit 1210/integrated memory controller unit(s) 1214, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between the one or more cache units 1206 and the cores 1202A-N.

In some embodiments, one or more of the cores 1202A-N are capable of multithreading. The system agent 1210 includes those components coordinating and operating the cores 1202A-N. The system agent unit 1210 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power state of the cores 1202A-N and the integrated graphics logic 1208.
The display unit is for driving one or more externally connected displays.

The cores 1202A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1202A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

FIGS. 13-16 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 13, shown is a block diagram of a system 1300 in accordance with one embodiment of the present disclosure. The system 1300 may include one or more processors 1310, 1315, which are coupled to a controller hub 1320. In one embodiment, the controller hub 1320 includes a graphics memory controller hub (GMCH) 1390 and an Input/Output Hub (IOH) 1350 (which may be on separate chips); the GMCH 1390 includes memory and graphics controllers to which are coupled memory 1340 and a coprocessor 1345; the IOH 1350 couples input/output (I/O) devices 1360 to the GMCH 1390. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1340 and the coprocessor 1345 are coupled directly to the processor 1310, and the controller hub 1320 is in a single chip with the IOH 1350.
The memory 1340 may include (e.g., convolution) code 1340A that, for example, when executed causes a processor to perform any of the methods described in this disclosure.

The optional nature of the additional processors 1315 is denoted in FIG. 13 with broken lines. Each processor 1310, 1315 may include one or more of the processing cores described herein and may be some version of the processor 1200.

The memory 1340 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1320 communicates with the processor(s) 1310, 1315 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1395.

In one embodiment, the coprocessor 1345 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 1320 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1310, 1315 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, and power consumption characteristics, and the like.

In one embodiment, the processor 1310 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1310 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1345. Accordingly, the processor 1310 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 1345. The coprocessor(s) 1345 accept and execute the received coprocessor instructions.

Referring now to FIG. 14, shown is a block diagram of a first more specific exemplary system 1400 in accordance with an embodiment of the present disclosure. As shown in FIG. 14, the multiprocessor system 1400 is a point-to-point interconnect system, and includes a first processor 1470 and a second processor 1480 coupled via a point-to-point interconnect 1450. Each of the processors 1470 and 1480 may be some version of the processor 1200. In one embodiment of the disclosure, the processors 1470 and 1480 are respectively the processors 1310 and 1315, while the coprocessor 1438 is the coprocessor 1345. In another embodiment, the processors 1470 and 1480 are respectively the processor 1310 and the coprocessor 1345.

The processors 1470 and 1480 are shown including integrated memory controller (IMC) units 1472 and 1482, respectively. The processor 1470 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1476 and 1478; similarly, the second processor 1480 includes P-P interfaces 1486 and 1488. The processors 1470, 1480 may exchange information via the point-to-point (P-P) interface 1450 using P-P interface circuits 1478, 1488. As shown in FIG. 14, the IMCs 1472 and 1482 couple the processors to respective memories, namely a memory 1432 and a memory 1434, which may be portions of main memory locally attached to the respective processors.

The processors 1470, 1480 may each exchange information with a chipset 1490 via individual P-P interfaces 1452, 1454 using point-to-point interface circuits 1476, 1494, 1486, 1498. The chipset 1490 may optionally exchange information with the coprocessor 1438 via a high-performance interface 1439.
In one embodiment, the coprocessor 1438 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 1490 may be coupled to a first bus 1416 via an interface 1496. In one embodiment, the first bus 1416 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

As shown in FIG. 14, various I/O devices 1414 may be coupled to the first bus 1416, along with a bus bridge 1418 which couples the first bus 1416 to a second bus 1420. In one embodiment, one or more additional processors 1415 are coupled to the first bus 1416. In one embodiment, the second bus 1420 may be a low pin count (LPC) bus. In one embodiment, various devices may be coupled to the second bus 1420, including, for example, a keyboard and/or mouse 1422, communication devices 1427, and a storage unit 1428 such as a disk drive or other mass storage device, which may include instructions/code and data 1430. Further, an audio I/O 1424 may be coupled to the second bus 1420. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 14, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 15, shown is a block diagram of a second more specific exemplary system 1500 in accordance with an embodiment of the present disclosure. Like elements in FIGS. 14 and 15 bear like reference numerals, and certain aspects of FIG. 14 have been omitted from FIG. 15 in order to avoid obscuring other aspects of FIG. 15.

FIG. 15 illustrates that the processors 1470, 1480 may include integrated memory and I/O control logic ("CL") 1472 and 1482, respectively. Thus, the CL 1472, 1482 include integrated memory controller units and include I/O control logic. FIG. 15 illustrates that not only are the memories 1432, 1434 coupled to the CL 1472, 1482, but also that I/O devices 1514 are coupled to the control logic 1472, 1482. Legacy I/O devices 1515 are coupled to the chipset 1490.

Referring now to FIG. 16, shown is a block diagram of a SoC 1600 in accordance with an embodiment of the present disclosure. Similar elements in FIG. 12 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In FIG. 16, the interconnect unit(s) 1602 are coupled to: an application processor 1610, which includes a set of one or more cores 1202A-N and shared cache unit(s) 1206; a system agent unit 1210; bus controller unit(s) 1216; integrated memory controller unit(s) 1214; a set 1620 of one or more coprocessors, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1630; a direct memory access (DMA) unit 1632; and a display unit 1640 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1620 include a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

Embodiments (e.g., of the mechanisms) disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches.
Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 1430 illustrated in FIG. 14, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk, including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the disclosure also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (Including Binary Translation, Code Morphing, Etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core.
The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 17 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to embodiments of the disclosure. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 17 shows that a program in a high level language 1702 may be compiled using an x86 compiler 1704 to generate x86 binary code 1706 that may be natively executed by a processor 1716 with at least one x86 instruction set core. The processor 1716 with at least one x86 instruction set core represents any processor that can perform substantially the same functions as a processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the x86 instruction set core, or (2) object code versions of applications or other software targeted to run on a processor with at least one x86 instruction set core, in order to achieve substantially the same result as a processor with at least one x86 instruction set core. The x86 compiler 1704 represents a compiler that is operable to generate x86 binary code 1706 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 1716 with at least one x86 instruction set core. Similarly, FIG. 17 shows that the program in the high level language 1702 may be compiled using an alternative instruction set compiler 1708 to generate alternative instruction set binary code 1710 that may be natively executed by a processor 1714 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies, Inc. of Sunnyvale, Calif. and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, Calif.). The instruction converter 1712 is used to convert the x86 binary code 1706 into code that may be natively executed by the processor 1714 without an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 1710, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1712 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1706.
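The core idea behind an instruction converter such as 1712 (mapping each source-instruction-set instruction to one or more target-instruction-set instructions) can be sketched as a simple table-driven translator. This is a deliberately hedged toy: both "instruction sets" below are invented for illustration, and a real binary translator must additionally handle register allocation, condition codes, memory ordering, and self-modifying code.

```python
# Toy table-driven binary translator: each source opcode maps to a list of
# equivalent target instructions (a 1:N mapping is allowed, as in the text).

TRANSLATIONS = {
    "PUSH": ["SUB sp, sp, 8", "STORE [sp]"],  # no push on the target ISA
    "POP":  ["LOAD [sp]", "ADD sp, sp, 8"],
    "MOV":  ["OR dst, src, zero"],            # move via OR with zero register
}

def convert(source_program):
    """Translate a source program into an equivalent target-ISA sequence."""
    target = []
    for instr in source_program:
        target.extend(TRANSLATIONS[instr])    # emit the equivalent sequence
    return target

out = convert(["PUSH", "MOV", "POP"])
print(len(out))  # 5 target instructions for 3 source instructions
```

As the text notes, such converted code need not match what a native compiler for the target instruction set would emit; it only has to accomplish the same general operation using target-set instructions.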