A memory module system with a global shared context. A memory module system can include a plurality of memory modules and at least one processor, which can implement the global shared context. The memory modules of the system can provide the global shared context at least in part by providing an address space shared between the modules and applications running on the modules. The address space sharing can be achieved by having logical addresses global to the modules, and each logical address can be associated with a certain physical address of a specific module.
1. A system comprising: a plurality of memory modules connected to provide physical memory, a memory module of the plurality of memory modules comprising: a plurality of partitions of the physical memory, wherein a partition of the plurality of partitions is associated with at least one physical memory address; and a processor configured to: execute code, wherein the code, when executed, causes the processor to access a virtual memory address; map the virtual memory address to a shared memory address in a logical memory space shared among the plurality of memory modules; and map the shared memory address to a physical memory address of a partition of the plurality of partitions.

2. The system of claim 1, wherein the shared memory address comprises a sequence of bits, and wherein the mapping of the shared memory address to the physical memory address is based at least in part on values of predetermined bits in the sequence of bits.

3. The system of claim 2, wherein the predetermined bits comprise two or more bit groups, wherein a first group of the predetermined bits provides a mapping to a partition of the plurality of partitions, and a second group of the predetermined bits provides a mapping to a data location within the partition.

4. The system of claim 3, wherein the predetermined bits comprise four bit groups, wherein a third group of the predetermined bits provides a mapping to a cache set, the cache set comprising one or more partitions of the plurality of partitions, and values of a fourth group of the predetermined bits at least partially provide tag information for the corresponding cache set.

5. The system of claim 1, wherein the mapping of the virtual memory address to the shared memory address is based on a page table, wherein an entry in the page table provides a virtual-memory-address-to-shared-memory-address mapping, wherein the page table is readable and modifiable by the processor, and wherein the page table is stored in the plurality of partitions.

6. The system of claim 5, wherein each memory module of the plurality of memory modules maintains a corresponding part of the page table, and wherein the corresponding part of the page table of a memory module of the system provides mappings for the physical memory addresses of that memory module.

7. The system of claim 6, wherein modification of the page table is performed via a modifying device that communicates a message to the plurality of memory modules, and wherein at least one of the memory modules is configured to send to the modifying device a confirmation that the communicated message has been received and the corresponding modification has been made.

8. The system of claim 5, wherein one or more of the plurality of memory modules maintain page tables for themselves and for other memory modules of the system, wherein modification of the page tables is performed via a modifying device broadcasting a message to the plurality of memory modules, and wherein one or more of the plurality of memory modules perform the modification on their own copies of the page tables.

9. The system of claim 1, wherein the virtual memory address comprises a first bit sequence, wherein the shared memory address comprises a second bit sequence, and wherein the mapping of the virtual memory address to the shared memory address is based at least in part on mapping the first bit sequence to the second bit sequence and mapping the second bit sequence to the first bit sequence.

10. The system of claim 9, wherein the first bit sequence of the virtual memory address is at least partially offset from the second bit sequence of the shared memory address.

11. The system of claim 9, wherein the second bit sequence of the shared memory address is used as a cache address of a cache, wherein the cache comprises a group of partitions among the plurality of partitions.

12. The system of claim 11, wherein the cache is a set-associative cache.

13. The system of claim 1, wherein virtual memory address spaces of a plurality of application programs include shared memory addresses of forked and merged processes of the plurality of application programs, and wherein the plurality of memory modules include a number of synchronization primitives for synchronizing memory access operations of the plurality of application programs.

14. A method comprising: executing code, wherein the code causes a processor to access a virtual memory address; mapping the virtual memory address to a shared memory address in a logical memory space shared among a plurality of memory modules; and mapping the shared memory address to a physical memory address of a partition of at least one of the plurality of memory modules.

15. The method of claim 14, wherein the mapping of the shared memory address is used by an operating system of a device.

16. The method of claim 15, wherein the mapping of the shared memory address is modified based at least on user interaction with the device.

17. An apparatus comprising: a plurality of memory modules, a memory module of the plurality of memory modules comprising: a plurality of partitions of physical memory, a partition of the plurality of partitions being associated with at least one physical memory address; and a processor configured to: execute code to access a virtual memory address; map the virtual memory address to a shared memory address in a logical memory space shared among the plurality of memory modules; and map the shared memory address to a physical memory address of a partition of the plurality of partitions.

18. The apparatus of claim 17, wherein at least one memory module of the plurality of memory modules includes a part of a graphics processing pipeline distributed among the plurality of memory modules.

19. The apparatus of claim 17, wherein the shared memory address comprises a sequence of bits, and wherein the mapping of the shared memory address to the physical memory address is based at least in part on values of predetermined bits in the sequence of bits.

20. The apparatus of claim 17, wherein the mapping of the virtual memory address to the shared memory address is based on a page table, wherein an entry in the page table provides a virtual-memory-address-to-shared-memory-address mapping, wherein the page table is readable and modifiable by the processor, and wherein the page table is stored in the plurality of partitions.
Memory module system using global shared context

Technical field

At least some embodiments disclosed herein relate to a memory module system that utilizes a global shared context.

Background

Some conventional examples of memory modules include single in-line memory modules (SIMMs) and dual in-line memory modules (DIMMs). A SIMM differs from a DIMM in that the contacts on a SIMM are redundant on both sides of the module, whereas a DIMM has separate electrical contacts on each side of the module. DIMMs are generally used in current computers large enough to accommodate one or more of them, and a DIMM may include multiple dynamic random access memory (DRAM) integrated circuits. Smaller computers, such as notebook computers, usually use small-outline dual in-line memory modules (SO-DIMMs).

In addition, memory components can be integrated on a system-on-a-chip (SoC). An SoC is an integrated circuit (IC) that integrates computer components into a single chip. Common computer components in an SoC include a central processing unit (CPU), memory, input/output ports, and auxiliary storage. All components of the SoC can reside on a single substrate or microchip. The SoC may include various signal processing functions and may include dedicated processors or co-processors, such as a graphics processing unit (GPU). Through tight integration, an SoC can consume less power than a conventional multi-chip system of equivalent functionality. This makes SoCs beneficial for integration into mobile computing devices (for example, smartphones and tablet computers). SoCs can also be used for embedded systems and the Internet of Things.

Summary of the invention

In one aspect, the present application relates to a system including: a plurality of memory modules connected to provide physical memory, a memory module of the plurality of memory modules including: a plurality of partitions of the physical memory, wherein a partition of the plurality of partitions is associated with at least one physical memory address; and a processor configured to: execute code, wherein the code, when executed, causes the processor to access a virtual memory address; map the virtual memory address to a shared memory address in a logical memory space shared among the plurality of memory modules; and map the shared memory address to a physical memory address of a partition of the plurality of partitions.

In another aspect, the present application relates to a method including: executing code, wherein the code causes a processor to access a virtual memory address; mapping the virtual memory address to a shared memory address in a logical memory space shared among a plurality of memory modules; and mapping the shared memory address to a physical memory address of a partition of at least one of the plurality of memory modules.

In another aspect, the present application relates to a device including: a plurality of memory modules, a memory module of the plurality of memory modules including: a plurality of partitions of physical memory, a partition of the plurality of partitions being associated with at least one physical memory address; and a processor configured to: execute code to access a virtual memory address; map the virtual memory address to a shared memory address in a logical memory space shared among the plurality of memory modules; and map the shared memory address to a physical memory address of a partition of the plurality of partitions.

Description of the drawings

The present disclosure will be more fully 
understood from the detailed description given below and from the accompanying drawings of various embodiments of the present disclosure.

Figures 1 and 3 illustrate example memory module systems according to some embodiments of the present disclosure. Figure 2 illustrates an example memory module according to some embodiments of the present disclosure. Figure 4 illustrates an example networked system including a computing device according to some embodiments of the present disclosure. Figures 5 to 7 illustrate flowcharts of example operations that may be performed by aspects of the memory modules depicted in Figures 1 to 4, according to some embodiments of the present disclosure. Figures 8 and 9 illustrate example physical memory partitions and shared-memory-address bit groups mapped to at least one partition and at least one data location in the partition, according to some embodiments of the present disclosure.

Detailed description

At least some embodiments disclosed herein relate to a memory module system that utilizes a global shared context. The memory module system may include multiple memory modules, where each module is coupled to at least one processor, and the memory modules may implement a global shared context. The memory module system may be, include, or be part of an SoC, or a memory module of the memory module system may be, include, or be part of an SoC. In some embodiments, the SoC in these instances may include a central processing unit (CPU), a GPU, and/or a neural processing unit (NPU). For example, some of the memory components described herein may be integrated on the SoC or PCB of a device, a computer cluster, a PC, a mobile device, or an embedded device. 
Conversely, the SoC or PCB of a device, computer cluster, PC, mobile device, or embedded device may have some of the memory components described herein integrated into it.

The memory modules of the system may provide an address space shared between the modules and the application programs running on the modules and/or the coupled processors. Address space sharing can be achieved by making the logical addresses global to the modules, where each logical address is associated with a certain physical address of a specific module. In some embodiments, the size of the logical address space may equal the sum of the physical address spaces of the modules in the memory module system. For example, if there are eight modules, the association (or mapping) from logical to physical addresses can be realized by a predetermined first group of 3 bits at a predetermined position in the address space (3 bits provide 2^3, or eight, values, one for each of the eight modules). The remainder of the logical address bits, or a part thereof (for example, a second bit group), can be mapped to a specific physical address within each module using a second mapping scheme. These (first and second) bit groups need not be adjacent (for example, adjacent bits in an address), and may be changed dynamically, or as needed, depending on decisions made by the system (for example, the operating system) and/or the user. The second mapping scheme can be as simple as a one-to-one mapping, or it can be more complicated, such as round-robin scheduling among the banks of each memory device in a module, a modulus or interleaving on the module ID, and so on.

Application programs running on embodiments of the memory module system may have their own virtual address spaces. 
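As a rough illustration of the module-selection mapping described above, the following sketch splits a logical address into a 3-bit module field and an intra-module offset. The field position, the per-module capacity, and the function name are hypothetical, since the disclosure allows the bit groups to sit anywhere in the address and to be changed dynamically:

```python
# Hypothetical sketch: split a logical (globally shared) address into a
# module ID and an intra-module physical offset. Field widths and
# positions are illustrative only; the disclosure allows the groups to
# be non-adjacent and reconfigured by the OS or the user.

MODULE_BITS = 3    # 2**3 = 8 memory modules
MODULE_SHIFT = 30  # assume 2**30 bytes (1 GiB) of physical memory per module

def logical_to_physical(logical_addr: int) -> tuple[int, int]:
    """Return (module_id, physical_offset) for a logical address."""
    module_id = (logical_addr >> MODULE_SHIFT) & ((1 << MODULE_BITS) - 1)
    # Second mapping scheme here is the simplest one-to-one mapping.
    offset = logical_addr & ((1 << MODULE_SHIFT) - 1)
    return module_id, offset

# Example: an address that falls in the fourth module (module ID 3).
module_id, offset = logical_to_physical((3 << MODULE_SHIFT) | 0x1234)
```

A more elaborate second scheme (round-robin over banks, interleaving on the module ID) would replace the one-to-one `offset` line with the corresponding permutation.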
In some embodiments, the association between the virtual address spaces of the various applications and the logical address space can be implemented through page tables. Such tables can provide virtual-to-logical addressing, and can further associate the physical address at each module. The page tables can be read and modified by the processors of the memory modules, and the page tables can be stored in the modules. Alternatively, a predetermined architecture and/or algorithm for how virtual addresses are mapped to logical and/or physical addresses can be used. Examples of such an architecture include set associativity, such as that used by a set-associative cache. In addition, the association between the virtual address spaces and the logical address space can be implemented through page tables, a predetermined architecture and/or algorithm, or a combination thereof.

In some embodiments, to support the association between the virtual address spaces of various applications and the logical address space, the system may use synchronization primitives and/or semantics. The memory module system may also use message passing (for example, point-to-point, broadcast, multicast, or targeting by certain IDs) and/or atomic operations for critical data. Such functionality can be implemented via a corresponding hardware mailbox at each module. For example, a mailbox can be implemented at each memory module processor of the system, or at each memory bank of a module.

In some embodiments, because the global shared context is in effect, a large amount of sharing can occur, and applications using the system can be composed of various components, shared libraries, and the like. This is especially true in instances where applications share the same origin or root process. Therefore, when a process is forked, instead of copying the context, the context can be extended by preserving sharing in the logical address space that supports the global context. Since the context is global, the processors associated with the memory modules do not need to perform context switching among many applications. The virtual contexts can be cached and maintained in the memory module system (in contrast to context switching in a centralized processor architecture). The virtual contexts can be executed by the multiple processors of the memory modules in the logical space with the aid of synchronization primitives and addressing. Even if a single application context is distributed among several memory modules, it is possible to execute the application context synchronously via such mechanisms.

In some embodiments, graphics pipelines (e.g., graphics pipelines for geometry, projection, lighting, cropping, rasterization, shading, screen streaming, and/or other functions) can be distributed among several memory modules of the system. In some embodiments, since each memory module may include an SoC with a GPU, the pipeline may execute using single-instruction-multiple-data (SIMD) operations and/or data exchange via high-bandwidth wired and/or wireless interconnections between modules.

In some embodiments, to efficiently execute task-level parallelism (for example, multiple applications), each processor on each memory module may only move between contexts cached in the memory; therefore, each processor can continuously run the byte code of applications in the logical space. In this sense, the operating system (OS) of the device and the running applications can combine to represent a global shared context. 
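The fork behaviour described above, extending the context by preserving sharing in the logical address space rather than copying it, can be sketched with a toy page table. The class and field names are illustrative only, not part of the disclosure:

```python
# Toy sketch: forking a process extends the global shared context by
# sharing logical pages instead of duplicating them. The child gets its
# own copy of the (small) mapping table, but every mapping still points
# at the same logical pages in the shared address space, so no page
# data is copied. All names here are hypothetical illustrations.

class Context:
    def __init__(self, page_table=None):
        # virtual page -> logical (shared) page in the global address space
        self.page_table = dict(page_table or {})

    def fork(self) -> "Context":
        # Reuse the same virtual->logical mappings: both processes
        # resolve to the same logical (and hence physical) pages.
        return Context(self.page_table)

parent = Context({0x1000: 0x80000, 0x2000: 0x81000})
child = parent.fork()
# Both contexts map virtual page 0x1000 to the same logical page.
assert child.page_table[0x1000] == parent.page_table[0x1000]
```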
The shared context gains value over the time it resides in memory, especially non-volatile memory, as it continuously evolves and is maintained while the user uses the device or the system that includes the memory modules.

Figures 1 and 3 illustrate example memory module systems 100 and 300 according to some embodiments of the present disclosure. Figure 2 illustrates an example memory module 202 according to some embodiments of the present disclosure. The memory module 202 may be a module of the system 100 or the system 300. Figures 1 and 2 illustrate example memory modules 102a, 102b, and 202, respectively, according to some embodiments of the present disclosure, and such modules may be part of the system 100 or the system 300.

Figure 1 shows a memory module system 100 having a plurality of memory modules (see, for example, memory modules 102a and 102b), which may at least partially implement the global shared context 101 (e.g., at least via the processors of the memory module system; see, for example, processors 106a, 106b, 106c, and 106d). In addition, Figure 1 shows that each of the memory modules of the system 100 has multiple physical memory partitions (see, for example, physical memory partitions 104a, 104b, 104c, 104d, 104e, and 104f). Each memory module of the system 100 also has at least one processor (for example, see processors 106a, 106b, 106c, and 106d). As shown, different embodiments of the memory module system 100 may have memory modules with one processor (e.g., processor 106a), two processors (e.g., processors 106a and 106b), or more than two processors. It should be understood that the dashed boxes represent optional components. It should also be understood that embodiments of the memory modules in the memory module system 100 may have two physical memory partitions or more than two physical memory partitions.

Each memory partition can be made up of elements of a memory subsystem or architecture, such as memory dies, banks and ranks, memory chips, memory arrays and sub-arrays, memory rows and columns, memory decks, and stacks.

Each memory module of the system 100 is also shown as having a bus (for example, see buses 110a and 110b, where each bus may include multiple buses) that connects the multiple physical memory partitions of the memory module (for example, see physical memory partitions 104a to 104c and physical memory partitions 104d to 104f) and the processors of the module (see, for example, processors 106a to 106b and processors 106c to 106d). The bus of a memory module (for example, see buses 110a and 110b) may be part of the bus of the memory module system 100 (for example, see the one or more buses 116). The one or more buses 116 may connect each memory module of the memory module system 100 to the others and to other parts of the memory module system. The one or more buses 116 may also connect the memory module system 100 and parts of the memory module system to other parts of a host system hosting the memory module system. In some examples, the memory module system 100 may be part of, and installed in, the host system. In addition, one or more of the processors of each memory module of the memory module system 100 (for example, see processors 106a to 106b and 106c to 106d) can arbitrate data communicated via the buses of the system 100 (for example, see buses 110a, 110b, and 116).

In some embodiments, as shown in FIGS. 
1 to 3, the memory module system (for example, see memory module systems 100 and 300) includes a plurality of memory modules (for example, see memory modules 102a to 102b and memory modules 302a, 302b, and 302c), and each of the multiple memory modules (see, for example, memory modules 102a, 102b, and 202) includes multiple physical memory partitions (see, for example, partitions 104a to 104c, partitions 104d to 104f, and partitions 205a, 205b, 205c, 205d, 205e, 205f, 205g, 205h, and 205i). Each of the plurality of physical memory partitions may be associated with at least one physical memory address. Additionally, in such embodiments, the memory module system includes at least one processor (see, for example, processors 106a to 106b, 106c to 106d, and 206a to 206b). Each processor of the memory module system may be associated with at least one physical memory partition among the plurality of physical memory partitions.

In such embodiments and others, each processor of the memory module system (see, for example, processors 106a to 106b and 106c to 106d of the system 100) may be configured to execute code and to access the physical memory of the system (for example, the physical partitions of the system memory) based on virtual memory addresses decoded in the code in connection with memory accesses, where the code can be part of a program, application, software module or library, operating system (OS), or the like. Each processor of the system (see, for example, processors 106a to 106b and 106c to 106d of the system 100) may also be configured to map each of the virtual memory addresses to a shared memory address that is associated with the physical memory of the multiple memory modules and shared among the multiple memory modules. In some examples, each processor of the system may be configured to map each of the virtual memory addresses to a shared memory address that is associated with at least one partition of the physical memory of the multiple memory modules (e.g., see partitions 104a to 104c, partitions 104d to 104f, and partitions 205a to 205i) and shared among the multiple memory modules. The global shared context (see, for example, global shared context 101) may include any of the aforementioned mappings performed by the processors of the memory module system.

In such and other embodiments, each processor of the memory module system (see, for example, processors 106a to 106b of the system 100) may be configured to receive, from other processors and memory modules of the memory module system (see, for example, processors 106c to 106d of the system 100), a shared memory address and the data associated with the received shared memory address. Each processor of the memory module system may also be configured to map the received shared memory address to a corresponding physical memory address associated with that processor's physical memory partition. The global shared context (see, for example, global shared context 101) may include the mapping of the received shared memory address to the corresponding physical memory address associated with the processor's physical memory partition. Each processor of the memory module system (see, for example, processors 106a to 106b of the system 100) may also be configured to send, based at least in part on the mapping of a sent shared memory address to the corresponding physical memory address of the system, the data associated with the sent shared memory address to other processors of the system (for example, see processors 106c to 106d of the system 100). 
The global shared context (for example, see global shared context 101) may include the mapping of the sent shared memory address to the corresponding physical memory address of the system associated with the processor (for example, to the corresponding physical memory partition of the system associated with the processor).

In such embodiments and others, each shared memory address of the system (see, for example, memory module systems 100 and 300) may include a bit sequence, and the mapping of shared memory addresses to physical memory addresses may be based at least in part on the values of predetermined bits in the bit sequence (for example, the bit sequence in a mapping scheme). For example, the memory modules of the memory module system may provide an address space shared between the modules and the application programs running on the modules and/or the coupled processors, and this shared address space may be part of a global shared context (e.g., see global shared context 101). Address space sharing can be achieved by making the logical addresses global to all modules, where each logical address is associated with a certain physical address of a specific module. Therefore, in some embodiments, the size of the logical address space may equal the sum of the physical address spaces of the modules in the memory module system. For example, if there are eight modules, the association (or mapping) of a logical address to the shared memory address associated with a virtual memory address decoded in the code can be realized by a predetermined first 3-bit group at a predetermined position of the logical address bits (3 bits provide 2^3, or eight, values, one for each of the eight modules). The remainder of the logical and shared address bits, or a portion thereof (e.g., a second bit group), can be mapped to a specific physical address within each module using a second mapping scheme. The second mapping scheme may be as simple as a one-to-one mapping, or it may be a more complex scheme, such as round-robin scheduling among the banks of each memory device in a module, or interleaving.

In some embodiments, the predetermined bits in the shared address bit sequence may include two or more bit groups (for example, see Figure 8). The bit sequence may be part of a mapping scheme, which may be part of a global shared context (see, for example, global shared context 101). A first group of the predetermined bits may provide a mapping to a physical memory partition among the multiple physical memory partitions of the multiple memory modules (for example, see partitions 104a to 104c, partitions 104d to 104f, and partitions 205a to 205i, and see the first bit group 804 of Figure 8, which is mapped to the partition 802b), and a second group of the predetermined bits can provide a mapping to a data location within the physical memory partition (for example, see the second bit group 806 in Figure 8, which is mapped to a data location in partition 802b). The data location in the partition can be a specific bank, memory array, row, column, cache line, byte, or byte sequence, or a combination thereof. In these and other examples, the predetermined bits may include four bit groups (see, for example, Figure 9). A third group of the predetermined bits may provide a mapping to a cache set that includes one or more physical memory partitions among the multiple physical memory partitions of the multiple memory modules (for example, see the third bit group 808 in Figure 9, which is mapped to at least the cache set divided among the partitions 802b and 802c), and the values of a fourth group of the predetermined bits can at least partially provide tag information for the corresponding cache set (for example, see the fourth bit group 810 in Figure 9, which provides tag information for the cache set divided at least among the partitions 802b and 802c).

In some embodiments, the mapping of the virtual memory addresses of the system (for example, see systems 100 and 300) to the shared memory addresses of the system is based on a page table. The page table may be part of a global shared context (see, for example, global shared context 101). Each entry in the page table can provide a mapping of a virtual memory address to a shared memory address. The page table can be read and modified by the processors of the system (for example, see processors 106a to 106b, 106c to 106d, and 206a to 206b), and the page table can be stored in the multiple physical memory partitions of the multiple memory modules (for example, see partitions 104a to 104c, partitions 104d to 104f, and partitions 205a to 205i). The page table may be at least partially cached by a processor to provide faster access to the most recently or most frequently used page table entries. In some embodiments, the page table may be implemented as a database (for example, SQL or a custom database). Access to the database entries can be implemented by accelerating hardware, which can be part of the memory controller. 
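The bit-group mapping described in connection with Figures 8 and 9, where groups of predetermined bits select a partition, a data location, a cache set, and tag bits, can be illustrated as follows. The field widths and positions are hypothetical, since the disclosure notes the groups need not even be adjacent:

```python
# Hypothetical layout of a shared memory address with four
# non-overlapping bit groups: partition, cache set, tag, and data
# location within the partition. Widths and positions are illustrative
# only; the disclosure leaves them configurable.

FIELDS = {             # name: (shift, width)
    "partition": (28, 4),
    "cache_set": (24, 4),
    "tag":       (20, 4),
    "location":  (0, 20),
}

def decode(shared_addr: int) -> dict[str, int]:
    """Extract each bit group from a shared memory address."""
    return {name: (shared_addr >> shift) & ((1 << width) - 1)
            for name, (shift, width) in FIELDS.items()}

fields = decode(0x5A312345)
```

With this example layout, `0x5A312345` decodes to partition 5, cache set 10, tag 3, and location `0x12345`.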
The physical memory location used to store these databases may be different or separate from the physical memory allocated to the global shared context.In such embodiments and other embodiments, each of a plurality of memory modules (see, for example, memory modules 102a to 102b, memory module 202, and memory modules 302a to 302c) may maintain a corresponding portion of the page table, and The corresponding part of the page table of a given memory module of the memory module system provides a mapping for the physical memory address of the given memory module. The modification of the page table may be performed via a modification device that communicates messages to multiple memory modules (for example, the modification device may be at least one external controller, such as the external controllers 306a to 306b shown in FIG. 3, or the above The modification device may be another memory module or a processor of a memory module associated with at least one memory partition of the memory module, or any other device using a global shared context), and the message may contain the modification. The message may be communicated to multiple memory modules based on the corresponding portion of the page table to be modified, and each of the memory modules may be configured to send an acknowledgement to the modifying device that the communicated message has been received and the corresponding modification has been made or the modification has been rejected, and Denial Reason. In other examples, silence in response to the modified message may mean an agreement, and the receiving module only sends a rejection message.Alternatively, each memory module in a plurality of memory modules (see, for example, memory modules 102a to 102b, memory module 202, and memory modules 302a to 302c) may maintain page tables for itself and other memory modules of the memory module system. 
In such instances, modification of the page table may be performed via a modifying device that broadcasts a message to the multiple memory modules (for example, the modifying device may be at least one external controller, such as the external controllers 306a to 306b shown in FIG. 3, or the modifying device may be another memory module, a processor of a memory module associated with at least one memory partition of the memory module, or any other device that uses the global shared context), and each of the multiple memory modules can apply the modification to its own copy of the page table. Because every copy of the page table is updated by mutual agreement, there are few conflicts. When a conflict does occur, any device can respond to the message with a rejection and its reason, or with a request for further negotiation. An application program running on an embodiment of a memory module system (for example, see memory module systems 100 and 300) may have its own virtual address space (for example, a virtual address space contained in the global shared context; see, for example, global shared context 101). The association between the virtual spaces of the various applications and the logical address space can be implemented through a page table (such as the page tables described herein). Simply put, such tables can provide virtual-to-logical and shared addressing (for example, through the associated physical address at each module). Also, the page table can be read and modified by the processors of the memory modules, and the table can be stored in the modules. Alternatively, a predetermined architecture and/or algorithm for how virtual addresses are mapped to logical and shared and/or physical addresses can be used.
Instances of such an architecture may include set associativity, such as the set associativity used by a set-associative cache. In addition, the association between the virtual spaces of the various applications and the logical and shared address space can be implemented through a page table, a predetermined architecture and/or algorithm, or a combination thereof. In an embodiment using a page table, each module can hold a part of the page table, where that part provides only the mapping between the physical addresses of the module and the associated logical addresses of the module. Modifications to such a distributed page table can be carried out by broadcasting a message containing the modification from the modifying device (or host processor) to all modules, with each module responsible for maintaining only its own part of the table. Modifications to such a distributed page table can also be made by sending a direct message to the responsible module. In such instances, after the modification is made, or when the update is rejected, a confirmation can be provided to the requesting party by the updated memory module. Alternatively, each module may have page tables for itself and all other modules in the memory module system. Modifications to this type of global (and always synchronized) page table are carried out by broadcasting the message containing the modification from the modifying module (or host processor) to all other modules of the system, where each module applies the modification to its own corresponding page tables. In such instances, each module has a copy of the page tables of all other modules of the memory module system. In such embodiments, confirmation messaging is not used, because the page table is updated through mutual agreement between the system's modules.
In some instances, such as when there is an error, an auxiliary message from a module may notify the other modules of the error; the other modules can then reject the erroneous modification synchronously. In the case of an error, the modules can act through mutual agreement to revert the modification. In instances where the modification cannot be reverted, the system may run a shootdown subroutine, such as a translation lookaside buffer (TLB) shootdown. In some embodiments using a predetermined architecture and/or algorithm for how virtual addresses are mapped to logical and/or physical addresses, each virtual memory address of the memory module system may include a first bit sequence, and each shared memory address of the system may include a second bit sequence. The mapping of the virtual memory addresses of the system to the shared memory addresses of the system may be based at least in part on mapping the first bit sequence to the second bit sequence and mapping the second bit sequence to the first bit sequence. In such instances, the first bit sequence of a virtual memory address of the system is at least partially offset from the second bit sequence of a shared memory address of the system. In addition, the second bit sequence of a shared memory address of the system may be used as the cache address of a cache, and the cache may include a group of physical memory partitions among the multiple physical memory partitions of the multiple memory modules. Furthermore, in some embodiments, the cache is a set-associative cache. The arrangement of the first bit sequence and the second bit sequence may be offset from each other, or a formula containing an offset may be used. Therefore, it is possible to map the address ranges of shared applications, or of modules shared among many applications.
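Such offset-based mapping of a shared range might be sketched as follows; the ranges and per-application offsets here are hypothetical values chosen for illustration.

```python
# Hypothetical sketch of offset-based mapping: a module fixed at one range
# in the global shared context appears at a different virtual range in each
# application, differing only by a per-application offset.

SHARED_RANGE = range(30, 41)                 # module fixed at 30-40 globally
app_offsets = {"app1": 100, "app2": 1000}    # assumed per-application offsets

def to_virtual(app, shared_addr):
    """Where a global shared address appears in an application's virtual space."""
    return shared_addr + app_offsets[app]

def to_shared(app, virtual_addr):
    """Recover the global shared address from an application's virtual address."""
    return virtual_addr - app_offsets[app]

v1 = to_virtual("app1", 35)    # falls inside app1's 130-140 window
v2 = to_virtual("app2", 35)    # falls inside app2's 1030-1040 window
back = to_shared("app2", v2)   # the same global address again
```

Since the virtual space is flexible, each application only needs to pick an offset that lands the shared range in a free region, which is what the compiler, interpreter, or hypervisor support mentioned below would automate.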
For example, an application or module whose address range is shared may be fixed in the global shared context, but in the virtual spaces of the applications that use it via sharing, the application or module may be mapped differently; the difference is the offset, or a formula containing the offset. For example, if, in the global shared context, a shared module is mapped to the address range 30-40 and is shared by two applications, then each application maps the shared module into its own virtual address space through an offset: for the first application the offset is +100 (yielding 130-140), and for the second application the offset is +1000 (yielding 1030-1040). In this example, an application using the global shared context can map any range into the application's available virtual address space through a simple offset or a formula containing an offset. Since the virtual address space is flexible, the application can find a free range for the mapping. The application's compiler, interpreter, or hypervisor can provide semantics for integrating offset-based mapping into the application framework. In some embodiments using a predetermined architecture and/or algorithm for how virtual addresses are mapped to logical/shared and/or physical addresses, each memory bank in each memory module of the system (or at least some banks of at least some modules of the system) can play the role of a set, where the set is identified by a first bit group at a predetermined location in the virtual address whose value maps to a bank number and module number. For example, if there are 1024 banks per module and 8 modules, there are 8192 sets. Since 8192 = 2^13, 13 bits of the virtual address can be used as the first bit group. The remainder of the virtual address bits, or a part of them (for example, a second bit group), maps to a specific physical address within each set. This second group can be or contain a tag group.
The tag is stored with the data, and tag matching can be performed to identify whether the data associated with an address is cached in the set. In such embodiments, a large cache may be formed in the memory module system, and such a cache may store a relatively large amount of data. For example, the cache can hold the capacity of the memory partitions of multiple memory modules. In this example, serial-attached SCSI (SAS), SATA, M.2, or PCIe-attached solid-state drives (SSDs) can be coupled behind such caches. In addition, in such instances, all or most processes can run in memory, and executing applications can be cached entirely in the large cache. In addition, each cache set, or at least some of the sets, can migrate or cache data from other cache sets. Migrating or caching data from other cache sets can be done by changing the first or second bit group association (for example, the bit positions) for a specific cached virtual context. In such embodiments and other embodiments, the global shared context (see, for example, global shared context 101) may include the virtual address memory spaces of multiple applications. The virtual address memory spaces of the applications may include shared memory addresses of forked processes of the applications or of merged processes of the applications. A forked process is created by forking a process from at least one parent, and a merged process is created by merging processes from at least two parent processes.
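The bank-as-set arithmetic above (1024 banks per module × 8 modules = 8192 sets, hence 13 set-index bits, with the remaining bits stored as a tag) can be sketched as follows; the bit layout and data structures are hypothetical.

```python
# Hypothetical sketch of the bank-as-set scheme: 8 modules x 1024 banks
# gives 8192 sets, so 13 bits of the virtual address select the set and
# the remaining bits form the tag stored alongside the data.

BANKS_PER_MODULE = 1024
MODULES = 8
NUM_SETS = BANKS_PER_MODULE * MODULES       # 8192 = 2**13
SET_BITS = NUM_SETS.bit_length() - 1        # 13

sets = [dict() for _ in range(NUM_SETS)]    # each set: tag -> cached data

def split(vaddr):
    """Split a virtual address into (set index, tag)."""
    return vaddr & (NUM_SETS - 1), vaddr >> SET_BITS

def cache_store(vaddr, data):
    idx, tag = split(vaddr)
    sets[idx][tag] = data                   # tag is stored with the data

def cache_lookup(vaddr):
    idx, tag = split(vaddr)
    return sets[idx].get(tag)               # None on tag mismatch (miss)

cache_store(0x12345, "payload")
hit = cache_lookup(0x12345)                 # same set and tag -> hit
miss = cache_lookup(0x12345 + NUM_SETS)     # same set, different tag -> miss
```

Remapping which bit positions form the set index, as the migration discussion above suggests, would redirect a virtual context's addresses to a different group of banks without changing the addresses themselves.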
In addition, the multiple memory modules of the system (see, for example, memory modules 102a, 102b, 202, and 302a to 302c) may include multiple synchronization primitives for synchronizing the memory access operations of the multiple memory modules across multiple applications. In some embodiments, because the global shared context (for example, see global shared context 101) is in effect, a large amount of sharing can occur, and an application using the system can be composed of various components, shared libraries, and the like. This is especially true in instances where applications have the same origin or root process. Therefore, when forking a process, instead of copying the context, the context can be extended by preserving shared regions in the logical address space that backs the global context. Since the context is global, the processors associated with the memory modules do not require context switching between the many applications. The virtual contexts can be cached and maintained in the memory module system (in contrast to context switching in a centralized processor architecture). A virtual context can be executed by multiple processors of the memory modules with the aid of synchronization primitives and addressing in the logical and shared space. Even if a single application context is distributed among several memory modules, it is possible to execute the application context synchronously via such a mechanism. In addition, in such embodiments and other embodiments, parts of a graphics processing pipeline may be distributed among the multiple memory modules. In some embodiments, graphics pipelines (for example, pipelines for geometry, projection, lighting, clipping, rasterization, shading, screen streaming, and/or other functions) can be distributed among several memory modules of the system.
In some embodiments, since each memory module may include an SoC with a GPU, the pipeline may be executed via SIMD operations and/or via data exchange using high-bandwidth wired and/or wireless interconnects among the modules. In some embodiments, in order to execute task-level parallelism efficiently (for example, multiple applications), each processor on each memory module may move only between contexts cached in the memory; therefore, each processor can continuously run application byte code in the logical and shared space. In this sense, the OS of the device and the running applications are merged together, thereby forming a global shared context (for example, see global shared context 101). The value of the shared context lies in its persistence in memory, especially in non-volatile memory, where it continuously evolves and is maintained as the user uses the device or the system that includes the memory modules. In some embodiments, the system may include multiple memory modules, and each memory module of the multiple memory modules may be configured to execute program code distributed among the multiple memory modules and associated with at least one program. In such instances, each memory module of the multiple memory modules may include multiple physical memory partitions, and each partition of the multiple physical memory partitions may be associated with at least one physical memory address.
And, in such instances, each memory module of the multiple memory modules may include at least one processor, and each processor of the system may be associated with at least one of the multiple physical memory partitions. In such and other embodiments, each processor of the system may be configured to execute code based at least in part on the locality of virtual memory accesses to the multiple physical memory partitions, and to access the physical memory of the system based on the virtual memory addresses decoded from the code in association with the memory accesses. And, each processor of the system can be configured to map each of the virtual memory addresses to a shared memory address that is associated with the physical memory of the multiple memory modules and shared among the multiple memory modules. In some embodiments, if the program code has a copy at each processor of each memory module, and if at a certain time the code is requesting access to the physical memory of a memory partition associated with a first processor, then that processor may run that portion of the code. If after a certain time the program code requests access to the physical memory of another memory partition associated with another processor, then the first processor can communicate the program counter and related data to the other processor so that the other processor continues execution, based on the locality of virtual memory accesses to the multiple physical memory partitions. In some embodiments, a first set of processors may be used in place of the first processor, and another set of processors may be used in place of the other processor.
Also, the first group and the other group may overlap. Additionally, in such and other embodiments, each processor of the system may be configured to receive shared memory addresses, and data associated with the received shared memory addresses, from other processors and memory modules of the system, and to map the received shared memory addresses to the corresponding physical memory addresses associated with that processor's physical memory partitions. In addition, each processor of the system may be configured to send shared memory addresses, and data associated with the sent shared memory addresses, to other processors of the system based at least in part on the mapping of the sent shared memory addresses to the corresponding physical memory addresses of the system. In such instances, at least one memory module of the multiple memory modules may include a part of a graphics processing pipeline distributed among the multiple memory modules. Additionally, at least some of the embodiments disclosed herein include systems with multiple such memory modules. More specifically, at least some of the embodiments disclosed herein include a memory module having multiple memory chips, at least one controller (e.g., a CPU or dedicated controller), and at least one interface device configured to communicate the input and output data of the memory module. The input and output data bypass at least one processor (e.g., the CPU) of the computing device in which the memory module is installed. And, the at least one interface device may be configured to communicate the input and output data to at least one other memory module in the computing device. In addition, the memory module may be one of multiple memory modules of a memory module system.
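The locality-driven handoff described above — a processor runs the code while its accesses target its own partition, and forwards the program counter and related data to the processor owning the next target partition — might be sketched as follows. The partition layout and step model are hypothetical simplifications.

```python
# Hypothetical sketch of locality-driven execution handoff: each processor
# runs its copy of the code while accesses target its own partition; when
# an access targets another partition, the program counter (and, in a real
# system, live data) is forwarded to the processor owning that partition.

def owner_of(addr, partition_size=0x1000):
    """Processor associated with the partition containing addr (assumed layout)."""
    return addr // partition_size

def run(program, pc=0, processor=0):
    """program: list of memory addresses, one access per instruction."""
    trace = []
    while pc < len(program):
        target = owner_of(program[pc])
        if target != processor:
            trace.append(("handoff", processor, target, pc))
            processor = target              # forward PC + data, resume there
        trace.append(("exec", processor, pc))
        pc += 1
    return trace

# Accesses hit partition 0, then partition 2, then partition 0 again.
trace = run([0x0010, 0x0020, 0x2010, 0x0030])
```

In the group-of-processors variant mentioned above, `owner_of` would return a set of processors rather than a single one, and the handoff would go to any member of the target set.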
In some embodiments, the memory module system can be designed such that, when a memory module is added to the system, the system grows, at the cost of one memory module, by adding more memory partitions, more processors associated with those partitions, and more memory-bus bandwidth. In some embodiments, the memory module may be or include a DIMM, an SO-DIMM, a registered DIMM (RDIMM), a mini-RDIMM, a socketed memory stack, a socketed system package, or another type of package-on-package (PoP) for memory. Also, in some embodiments, the memory module may be configured to include dedicated chips, such as a GPU, an artificial intelligence (AI) accelerator, and/or a processing-in-memory (PIM) unit. In addition, in some embodiments, the memory module can output results to a peripheral device (for example, a display or another type of user interface) through a wired connection, a wireless connection, or a combination thereof, without passing through the memory bus between the processor and the memory module. For example, in some embodiments, the memory module can output results to the peripheral device through a wired or wireless connection without passing through the memory bus between the memory module and the main processor of the computing device hosting the memory module. Such memory modules, and other memory modules disclosed herein, can accelerate the processing of a graphics pipeline (for example, data processing for geometry, projection, lighting, clipping, rasterization, shading, screen streaming, and the like). In addition, a system with multiple such memory modules communicating with each other can further speed up the processing of the graphics pipeline. FIG. 2 shows a memory module 202 that is somewhat similar to memory module 102a or 102b. In addition, FIG. 2 shows the memory module 202 having multiple memory chips (see, for example, memory chips 204a, 204b, and 204c).
Each of the memory chips in the module 202 includes multiple physical memory partitions (see, for example, partitions 205a to 205i). The memory module 202 also has at least one processor that can at least partially implement the global shared context 101 (see, for example, processors 206a and 206b). In addition, at least some of the partitions in the memory module 202 (and the partitions in the memory modules 102a and 102b in FIG. 1) may partially implement the global shared context 101 (see, for example, partitions 205a to 205c and partition 205e). As shown, different embodiments of the memory module 202 may have one processor (e.g., processor 206a), two processors (e.g., processors 206a and 206b), or more than two processors. It should be understood that the dashed boxes represent optional components. In addition, it should be understood that embodiments of the memory module 202 may have two memory chips or more than two memory chips. The memory described herein, such as the memory of a memory module, may include various types of memory. For example, such memory may include flash memory with flash memory cells. In addition, such memory may include dynamic random access memory (DRAM), for example including DRAM cells. In addition, such memory may also include non-volatile random access memory (NVRAM), for example including NVRAM cells. The NVRAM cells may include 3D XPoint memory cells. The DRAM cells may be any of various typical types of DRAM cells. The memory may also include cells having ferroelectric elements; for example, the cells may include ferroelectric transistor random access memory (FeTRAM) cells. A memory cell may also have at least one of a transistor, a diode, a ferroelectric capacitor, or a combination thereof. The memory module 202 is also shown as having at least one interface device (see, for example, interface devices 208a and 208b).
As shown, different embodiments of the memory module 202 may have one interface device (e.g., interface device 208a), two interface devices (e.g., interface devices 208a and 208b), or more than two interface devices. And, as mentioned, it should be understood that the dashed boxes represent optional components. The at least one interface device (see, for example, interface devices 208a and 208b) may be configured to communicate input and output data, including data related to the global shared context, for the memory module 202. The input and output data can bypass a processor (for example, the main processor) of the system in which the memory module 202 is installed (for example, see interface devices 208a and 208b, which are connected via connectors 218a and 218b to other devices 214 of the system in which the memory module 202 is installed, bypassing one or more processors 212 of that system). In some embodiments, as shown in FIG. 2, the input and output data bypass a data bus (e.g., the main data bus) of the system in which the memory module 202 is installed (for example, see interface devices 208a and 208b, which are connected via connectors 218a and 218b to other devices 214 of the system in which the memory module is installed, bypassing one or more buses 216 of that system). It should be understood that the dashed connections represent optional connections. The memory module 202 is also shown as having a bus 210 (which may include multiple buses) that connects the multiple memory chips (see, for example, memory chips 204a, 204b, and 204c), the processors (see, for example, processors 206a and 206b), and the interface devices (see, for example, interface devices 208a and 208b).
The bus 210 may be part of the bus of the system in which the memory module is installed (see, for example, one or more buses 216) that connects the memory module 202 to the rest of that system. As shown by the dashed portion of the bus 210 that connects the memory module to the one or more buses 216 and the rest of the system, in some embodiments the bus 210 may be separate from the one or more buses 216, and in other embodiments the bus 210 may be connected to the one or more buses 216. It should be understood that the dashed connections represent optional connections. One or more of the processors of the memory module 202 (see, for example, processors 206a and 206b) can arbitrate the data communicated via the bus 210, including data related to the global shared context, and via the connectors that bypass the one or more buses 216 (see, for example, connectors 218a and 218b). The interface devices and other interface devices mentioned herein may include one or more network interface devices, one or more links, one or more buses, one or more ports, one or more peer-to-peer links, or any combination thereof. In some embodiments, the memory module 202 may implement a global shared context (e.g., see global shared context 101). Generally speaking, the global shared context includes multiple instances of the memory modules 102a, 102b, and/or 202 communicating with each other via their interface devices. The global shared context can be beneficial for graphics processing and graphics applications, including processing using SIMD or vector processing concepts, because large amounts of memory are useful and data processing close to memory can improve graphics processing.
In such embodiments and other embodiments, the interface devices (see, for example, interface devices 208a and 208b) may be configured to communicate input and output data to at least one other instance of the memory module installed in the same system. In some embodiments, the memory module 202 or another memory module described herein, the processor 206a or another processor or controller described herein, the interface device 208a or another interface device described herein, the memory chips 204a, 204b, and 204c or other memory chips described herein, or any combination thereof, may be part of an SoC, a system-on-package (SoP) such as a plug-in chipset system, or a heterogeneous die stack. All of these embodiments represent tightly integrated IP blocks and chips, which do not necessarily require PCBs for coupling with each other and with the rest of the system. Embodiments that include or are part of an SoC, and other embodiments, may include one or more GPUs or one or more other types of dedicated processors and/or one or more PIM units. Embodiments that include or are part of an SoC, and other embodiments, may include a processor that may include or be connected to a memory controller, a display sink (for example, HDMI, DisplayPort, or a wireless display interface), radios for wireless interfaces or networks, AI engines or accelerators, neuromorphic processors, scalar processors, vector processors, CPU cores, and so on. In such cases, the global shared context provides a framework for applications to use these devices in an integrated and shared manner. Although not shown in FIG. 2, the memory module 202 may also include multiple electrical contacts. The memory module 202 may also include a PCB configured for insertion into at least one memory slot of a motherboard.
In such embodiments, the multiple memory chips (see, for example, memory chips 204a, 204b, and 204c) may be coupled to the PCB, and the multiple electrical contacts may be on each side of the PCB. In addition, the processors (see, for example, processors 206a and 206b) can be coupled to the PCB, and the interface devices (see, for example, interface devices 208a and 208b) can be coupled to the PCB. In some embodiments, a processor (see, for example, processors 206a and 206b) may be, include, or be part of at least one dedicated controller. The dedicated processor or controller may be, include, or be part of a GPU, an AI accelerator, an NPU, another type of dedicated controller, a PIM unit, or any combination thereof. Such devices can be unified by using the global shared context and, through it, can accelerate large-scale applications such as neural networks, big data applications, and machine learning. In some embodiments, an interface device (see, for example, interface devices 208a and 208b) may include at least one wireless interface device that communicates at least partially wirelessly, or may include an on-chip optical interconnect that provides optical communication between chips. Another part of the interface device can communicate via wires. The interface device may also be a hybrid interface device with multiple capabilities and/or channels and channel types. The interface device may be, include, or be part of a network interface device (for example, a wireless network interface device). The interface device may include at least one wireless interface device and/or wired links, and may be configured to communicate via one or more wired and/or wireless networks, peer-to-peer links, ports, buses, and the like.
Therefore, the messages and data being exchanged in relation to the global shared context can use this type of interface. In some embodiments, the memory module 202 may include first connectors configured to connect the multiple memory chips (e.g., memory chips 204a, 204b, and 204c) to at least some of the multiple electrical contacts, so that the input and output data of the memory chips can be communicated to the processor of the computing device in which the memory module 202 is installed (for example, the main processor of the computing device). The memory module 202 may also include second connectors configured to connect the multiple memory chips to the processors (see, for example, processors 206a and 206b). The memory module 202 may also include one or more third connectors configured to connect the processors to the interface devices (see, for example, interface devices 208a and 208b), so that the interface devices receive input data for the processors from other devices and communicate the output data of the processors to other devices via a communication path, the output data bypassing the processor of the computing device in which the memory module 202 is installed. This type of connector can be used with the global shared context. In some embodiments, wireless communication may be performed among multiple memory modules installed in a system. For example, wireless transceivers can allow data communication between spatially aligned modules (similar to DIMMs installed on a PC board) at very close range. This can speed up such communication. Specifically, in some embodiments, terahertz (THz) wireless communication can provide speeds of 100 Gb/sec.
Therefore, in such instances, in-chip or in-module THz radiation can support a large amount of data exchange between the memory modules disclosed herein, which can be used to at least partially implement the page table operations and other data exchanges of the global shared context. FIG. 3 illustrates an example memory module system 300 according to some embodiments of the present disclosure. The memory module system 300 may include the memory module system 100, be a part of the memory module system 100, or be the memory module system 100, and may at least partially implement a global shared context. As depicted in FIG. 3, the memory module system 300 includes multiple memory modules (see, for example, memory modules 302a, 302b, and 302c). Also, each of the memory modules may include multiple memory chips (although not depicted in FIG. 3). Each of the multiple memory modules (see, for example, memory modules 302a, 302b, and 302c) may be memory module 102a, 102b, or 202. The memory module system 300 may include at least one external controller (for example, see external controllers 306a and 306b) and at least one interface device (for example, see interface devices 308a and 308b). The memory module system 300 is shown as having a bus 310 (which may include multiple buses) that connects the multiple memory modules (see, for example, memory modules 302a, 302b, and 302c), the external controllers (see, for example, external controllers 306a and 306b), and the interface devices (see, for example, interface devices 308a and 308b). As shown, different embodiments of the memory module system 300 may have one interface device (e.g., interface device 308a), two interface devices (e.g., interface devices 308a and 308b), or more than two interface devices. And, as mentioned, it should be understood that the dashed boxes represent optional components.
The interface devices (see, for example, interface devices 308a and 308b) may be configured to communicate input and output data for the memory module system 300. The input and output data can bypass a processor (for example, the main processor) of the host system in which the memory module system 300 is installed (for example, see interface devices 308a and 308b, which are connected via connectors 318a and 318b to other devices 314 of the host system, bypassing one or more processors 312 of the host system). The input and output data can be related to data used by applications via the global shared context. In some embodiments, as shown in FIG. 3, the input and output data bypass a data bus (for example, the main data bus) of the host system in which the memory module system 300 is installed (for example, see interface devices 308a and 308b, which are connected via connectors 318a and 318b to other devices 314 of the system, bypassing the bus 316 of the system, which may include multiple buses). It should be understood that the dashed connections represent optional connections. The global shared context can take advantage of bus bypass to speed up some key operations. In addition, the bus 310 may be a part of the bus of the host system in which the memory module system 300 is installed (see, for example, bus 316) that connects the memory module system 300 to the rest of the host system. As shown by the dashed portion of the bus 310 connecting the memory module system to the bus 316 and the rest of the system, in some embodiments the bus 310 may be separate from the bus 316, and in other embodiments the bus 310 may be connected to the bus 316. It should be understood that the dashed connections represent optional connections.
One or more of the external controllers of the memory module system 300 (see, for example, controllers 306a and 306b) can arbitrate the data communicated via the bus 310 and via the connections that bypass the bus 316 (see, for example, connectors 318a and 318b). The data may include at least part of the data used to implement the global shared context, such as data used and processed by processors that exchange messages and perform memory accesses to memory partitions.

As shown, the external controllers (see, for example, external controllers 306a and 306b) are separate from the plurality of memory modules in the memory module system 300 (see, for example, memory modules 302a, 302b, and 302c). In some embodiments of the memory module system 300, at least one external controller may be configured to coordinate computations by the controllers or processors of the multiple memory modules (for example, see processors 106a and 106b and memory modules 102a, 102b, 202, and 302a to 302c). These computations may be performed by the processors as part of the global shared context. In addition, the external controller may be configured to coordinate communications by the interface devices of the multiple memory modules (see, for example, interface devices 208a and 208b and memory modules 102a, 102b, 202, and 302a to 302c).

In addition, as shown, the interface devices (see, for example, interface devices 308a and 308b) may be separate from the multiple memory modules in the memory module system 300 (see, for example, memory modules 302a, 302b, and 302c). As for the interface devices of the memory module system 300 (see, for example, interface devices 308a and 308b), each may include a wireless interface device that communicates at least partially wirelessly, or may include an on-chip optical interconnect that provides optical communication between chips. Other interface devices of the memory module system 300 may communicate via wires.
The interface devices of the memory module system 300 may also be hybrid interface devices having multiple capabilities and/or channels and channel types. An interface device of the memory module system 300 may be, include, or be a part of a network interface device (for example, a wireless network interface device). The interface devices of the memory module system 300 may include wireless interface devices and/or wired links, and may be configured to communicate via one or more wired and/or wireless networks, peer-to-peer links, ports, buses, and the like. Therefore, such interface devices can provide enhanced connections (e.g., faster connections) for implementing a global shared context.

In addition, the multiple memory modules (see, for example, memory modules 302a, 302b, and 302c) may be multiple different types of memory structures. For example, the multiple memory modules may be, be a part of, or include any of the following: one or more DIMMs, one or more SO-DIMMs, one or more RDIMMs, one or more mini-RDIMMs, one or more socketed memory stacks, one or more socketed systems-in-package or another type of PoP for memory, other types of memory structures or modules, or any combination thereof. Such modules can be integrated into a system using a global shared context.

In addition, each memory module described herein can be a different type of memory structure. For example, a memory module described herein can be, be a part of, or include any of the following: a DIMM, an SO-DIMM, an RDIMM, a mini-RDIMM, a socketed memory stack, a socketed system-in-package, or another type of PoP for memory.

For example, in some embodiments of the memory module system 300, the system may include multiple DIMMs. And, each DIMM of the plurality of DIMMs may include a PCB configured for insertion into a memory slot of an additional PCB separate from the plurality of DIMMs.
In addition, each DIMM of the plurality of DIMMs may include a plurality of memory chips coupled to the PCB, a plurality of electrical contacts on each side of the PCB, at least one controller (e.g., at least one dedicated controller) coupled to the PCB, and at least one interface device configured to communicate the input and output data of the DIMM. The input and output data bypass the processor of the computing device in which the DIMMs of the system are installed. And, in such embodiments of the system 300 with DIMMs, the at least one interface device may be configured to communicate input and output data to at least one other DIMM of the plurality of DIMMs. Such data can be part of the global shared context.

In addition, in such embodiments of the system 300 with DIMMs, at least one external controller is separate from the multiple DIMMs and can be configured to coordinate computations by the dedicated controllers of the multiple DIMMs. The at least one external controller may also be configured to coordinate communications by the interface devices of the multiple DIMMs. Also, in such embodiments, the additional PCB is separate from the multiple DIMMs and may include multiple memory slots configured to receive the multiple DIMMs. In addition, the external controller may be coupled to the additional PCB, the additional PCB may be a motherboard, and the external controller may include a CPU or another type of processor, such as a dedicated controller. Such multiple DIMMs can run at least part of the global shared context.

In some embodiments, the at least one controller of each DIMM of the plurality of DIMMs may be a dedicated controller. For example, the controller may be, be a part of, or include any of the following: a GPU, an AI accelerator, an NPU, another type of dedicated controller, a PIM unit, or any combination thereof. It should be understood that the aforementioned devices and other parts described with respect to FIGS.
1 to 3 can use a global shared context to unify such devices and parts, and to accelerate large-scale applications such as neural networks, big data applications, and machine learning.

Figure 4 illustrates an example networked system 400 including at least computing devices 402, 422a, 422b, 422c, and 422d according to some embodiments of the present disclosure. In addition, FIG. 4 illustrates an example portion of an example computing device 402, where the computing device is part of the networked system 400. And, FIG. 4 shows how such computing devices can be integrated into various machines, equipment, and systems, such as IoT devices, mobile devices, communication network devices and equipment (for example, see base station 430), appliances (for example, see appliance 440), and vehicles (for example, see vehicle 450). It should be understood that the parts and devices described in FIG. 4 can use a global shared context to unify such devices and parts, and to accelerate large-scale applications, such as neural networks, big data applications, and machine learning, used among the devices and parts.

The computing device 402 and other computing devices of the networked system 400 (see, for example, computing devices 422a, 422b, 422c, and 422d) may be communicatively coupled to one or more communication networks 420. The computing device 402 includes at least a bus 406, a controller 408 (such as a CPU), a memory 410, a network interface 412, a data storage system 414, and other components 416 (which can be any type of components found in mobile or computing devices, such as GPS components, I/O components (such as various types of user interface components), and sensors and cameras). The memory 410 may include the memory modules 102a, 102b, and/or 202 and/or the memory module systems 100 and/or 300.
Other components 416 may include one or more user interfaces (e.g., GUI, auditory user interface, tactile user interface, etc.), displays, different types of sensors, tactile, audio and/or visual input/output devices, additional dedicated memory, one or more additional controllers (e.g., a GPU), or any combination thereof. The bus 406 communicatively couples the controller 408, the memory 410, the network interface 412, the data storage system 414, and the other components 416, and may couple such components to the second memory 418 in some embodiments.

The computing device 402 includes a computer system that includes at least the controller 408, the memory 410 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), static random access memory (SRAM), cross-point or crossbar memory, etc.), and the data storage system 414, which communicate with each other via the bus 406 (which may include multiple buses). In some embodiments, the second memory 418 may not communicate via the bus 406.

In other words, FIG. 4 contains a block diagram of a computing device 402 having a computer system in which embodiments of the present disclosure can operate. In some embodiments, the computer system may include a set of instructions that, when executed, cause a machine to at least partially perform any one or more of the methods discussed herein. In such embodiments, the machine may be connected (e.g., networked via the network interface 412) to other machines in a LAN, an intranet, an extranet, and/or the Internet (e.g., see network 420).
The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or client machine in a cloud computing infrastructure or environment.

The controller 408 represents one or more general-purpose processing devices, such as a microprocessor, a CPU, and so on. More precisely, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a single instruction multiple data (SIMD) processor, a multiple instruction multiple data (MIMD) processor, a processor that implements other instruction sets, or a processor that implements a combination of instruction sets. The controller 408 may also be one or more dedicated processing devices, such as an ASIC, programmable logic (such as an FPGA), a digital signal processor (DSP), a network processor, and so on. The controller 408 is configured to execute instructions for performing the operations and steps discussed herein. The controller 408 may further include a network interface device, such as the network interface 412, to communicate via one or more communication networks (such as the network 420).

The data storage system 414 may include a machine-readable storage medium (also referred to as a computer-readable medium) on which are stored one or more sets of instructions or software embodying any one or more of the methods or functions described herein. The data storage system 414 may have execution capabilities; for example, it may at least partially execute instructions residing in the data storage system.
The instructions may also reside completely or at least partially within the memory 410 and/or within the controller 408 during execution of the instructions by the computer system; the memory 410 and the controller 408 may thus also constitute machine-readable storage media. The memory 410 may be or include the main memory of the computing device 402. The memory 410 may have execution capabilities; for example, it may at least partially execute instructions residing in the memory.

As mentioned, the networked system 400 includes computing devices, and each of the computing devices may include one or more buses, controllers, memories, network interfaces, storage systems, and other components. In addition, each of the computing devices shown in FIG. 4 and described herein may include, or be a part of, a mobile device or the like, for example, a smartphone, a tablet computer, an IoT device, a smart TV, a smart watch, glasses or other smart home appliances, an in-vehicle information system, a wearable smart device, a game console, a PC, a digital camera, or any combination thereof. As shown, the computing devices can be connected to a network 420, which includes at least a local-to-device network such as Bluetooth, a wide area network (WAN), a local area network (LAN), an intranet, a mobile wireless network such as 4G or 5G, an extranet, the Internet, and/or any combination thereof. In some embodiments, as shown by the dashed connection 418, the memory 410 may include at least one network interface so that the memory can communicate with other devices via the communication network 420 separately.
For example, a memory module or memory module system of the memory 410 (see, for example, memory modules 102a, 102b, and 202, and memory module systems 100 and 300) may have its own network interface so that such a component can communicate separately with other devices via the communication network 420.

Each of the computing devices described herein may be or be replaced by a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular phone, network equipment, a server, a network router, a switch, or a bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.

In addition, although a single machine is described for the computing device 402 shown in FIG. 4, the term "machine" should also be considered to include any collection of machines that individually or jointly execute a set of instructions (or multiple sets of instructions) to perform one or more of the methods or operations discussed herein. Moreover, each of the described computing devices and computing systems may include at least a bus and/or a motherboard, one or more controllers (such as one or more CPUs), a main memory that may include temporary data storage, at least one type of network interface, a storage system that may include permanent data storage, and/or any combination thereof. In some multi-device embodiments, one device can complete some parts of the methods described herein and then send the completed results to another device via the network, so that the other device can continue with other steps of the methods described herein.

Although the memory, controller, and data storage parts are shown in the example embodiments as each being a single part, each part should be considered to include a single part or multiple parts that can store instructions and perform their respective operations.
The term "machine-readable storage medium" should also be considered to include any medium capable of storing or encoding a set of instructions for execution by a machine and causing the machine to perform any one or more of the methods of the present disclosure. Therefore, the term "machine-readable storage medium" should be considered to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Figures 5 to 7 illustrate flowcharts of example methods 500, 600, and 700 that may be performed by aspects of the memory modules depicted in Figures 1 to 4 according to some embodiments of the present disclosure. For example, each of the methods 500, 600, and 700 may be executed by a processor of a memory module disclosed herein.

In FIG. 5, the method 500 starts at step 502 by activating a global context used by at least one program in at least one memory module (for example, see the global shared context 101 shown in FIGS. 1 and 2). The at least one memory module may include multiple physical memory partitions (for example, see partitions 104a to 104c, partitions 104d to 104f, and partitions 205a to 205i shown in FIGS. 1 and 2, respectively), and each of the multiple physical memory partitions can be associated with at least one physical memory address. The at least one memory module may also include at least one processor (for example, see processors 106a to 106b, 106c to 106d, and 206a to 206b), and the at least one processor may be associated with at least one of the plurality of physical memory partitions.

In such embodiments and other embodiments, the at least one processor may be configured to execute code and access the physical memory of a memory module system having the at least one memory module, based on virtual memory addresses decoded in the code and associated with the memory accesses.
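The access path just described — and elaborated in the following paragraphs — is a two-stage translation: a virtual address is first mapped to a shared logical address, which is then mapped to a partition and a physical address. A minimal sketch follows; the 4 KiB page size, dict-based page table, and modulo partition-selection rule are illustrative assumptions, not the disclosure's concrete design.

```python
# Sketch of the two-stage translation: virtual -> shared -> (partition, physical).
# Page size, table layout, and partition selection are assumed for illustration.
PAGE_BITS = 12
PAGE_SIZE = 1 << PAGE_BITS

class ModuleProcessor:
    def __init__(self, page_table, partition_count):
        self.page_table = page_table        # virtual page number -> shared page number
        self.partition_count = partition_count

    def virtual_to_shared(self, vaddr):
        vpn, offset = vaddr >> PAGE_BITS, vaddr & (PAGE_SIZE - 1)
        return (self.page_table[vpn] << PAGE_BITS) | offset

    def shared_to_physical(self, saddr):
        # The shared page number selects a partition and a page within it.
        offset = saddr & (PAGE_SIZE - 1)
        spn = saddr >> PAGE_BITS
        partition = spn % self.partition_count
        local_page = spn // self.partition_count
        return partition, (local_page << PAGE_BITS) | offset

proc = ModuleProcessor(page_table={0x2A: 0x7}, partition_count=4)
shared = proc.virtual_to_shared(0x2A123)            # -> 0x7123
partition, paddr = proc.shared_to_physical(shared)  # -> partition 3, 0x1123
```

Because the second stage is a fixed function of the shared address, every module can compute the same partition for a given shared address without consulting the others.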
The at least one processor may be configured to translate and map each of the virtual memory addresses to a shared memory address that is associated with the physical memory of the memory module system and shared among the memory modules of the memory module system. The at least one processor may be configured to receive, from other processors and memory modules of the memory module system, shared memory addresses and data associated with the received shared memory addresses. The at least one processor may be configured to translate and map a received shared memory address to a corresponding physical memory address of a physical memory partition associated with the at least one processor. And, the at least one processor may be configured to send a shared memory address, and data associated with the sent shared memory address, to other processors of the system based at least in part on determining the mapping of the sent shared memory address to the corresponding physical memory address of the system.

At step 504, the method 500 continues by distributing the code of the at least one program among the at least one memory module according to the activated global context. The global context (see, for example, the global shared context 101) can be used by the operating system of the device executing the at least one program, and the global context can be modified at least according to user interactions with the device. In some embodiments, the method 500 may include distributing at least a portion of a graphics processing pipeline among the at least one memory module.

At step 506, the method 500 continues by executing each part of the code at least in part according to the locality of the program code's virtual memory accesses to the at least one memory module. At step 508, the method 500 continues by accessing the physical memory of the at least one memory module based on the virtual memory addresses decoded in the code and associated with the memory accesses.

In FIG.
6, the method 600 starts at step 602 with the processor of a memory module of the system executing code and accessing the physical memory of the system based on virtual memory addresses decoded in the computer program code and associated with the memory accesses of the system. At step 604, the method 600 continues with the processor of the memory module mapping each of the virtual memory addresses to a shared memory address associated with the physical memory of the memory modules of the system and shared among the memory modules of the system. At step 606, the method 600 continues with the processor of the memory module receiving, from other processors and memory modules of the system, shared memory addresses and data associated with the received shared memory addresses.

At step 608, the method 600 continues with the processor of the memory module mapping the received shared memory addresses to corresponding physical memory addresses of the physical memory partitions associated with the processor. At step 610, the method 600 continues with the processor of the memory module sending shared memory addresses, and data associated with the sent shared memory addresses, to other processors of the system based at least in part on the mapping of the sent shared memory addresses to the corresponding physical memory addresses of the system.

In FIG. 7, the method 700 starts at step 702 by distributing the global context used by a computer program among the memory modules of the memory module system. Step 702 may include step 704, in which the method 700 continues with a memory module of the system receiving shared memory addresses from other memory modules of the system.
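The send/receive exchange in steps 606 to 610 (and in the corresponding steps of method 700) amounts to a routing decision on every access: if the shared address maps to a local partition, the processor services it; otherwise the address and data are forwarded to the owning module. A minimal sketch follows; the bit layout, field widths, and queue-based transport are illustrative assumptions.

```python
from collections import deque

OFFSET_BITS, PARTITION_BITS, MODULE_BITS = 12, 2, 3   # assumed field widths

def owner_module(shared_addr):
    # Assumed layout: | module | partition | offset |
    return (shared_addr >> (OFFSET_BITS + PARTITION_BITS)) & ((1 << MODULE_BITS) - 1)

class MemoryModule:
    def __init__(self, module_id, outbox):
        self.module_id = module_id
        self.local = {}          # stand-in for the module's physical partitions
        self.outbox = outbox     # shared-address/data messages to other modules

    def store(self, shared_addr, data):
        if owner_module(shared_addr) == self.module_id:
            self.local[shared_addr] = data   # local mapping (steps 604/608)
        else:                                # forward to the owner (step 610)
            self.outbox.append((owner_module(shared_addr), shared_addr, data))

    def receive(self, shared_addr, data):    # incoming message (step 606)
        self.local[shared_addr] = data

outbox = deque()
m0 = MemoryModule(0, outbox)
m0.store(0x1234, "stays local")              # module bits = 0 -> serviced locally
m0.store((1 << 14) | 0x234, "forwarded")     # module bits = 1 -> queued for module 1
```

Because every module applies the same `owner_module` function, no central directory is needed: ownership is implicit in the shared address itself.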
Step 702 may also include step 706, in which the method 700 continues with the memory module of the system sending shared memory addresses to other memory modules of the system.

At step 708, the method 700 continues by mapping the virtual memory addresses (decoded in the program code associated with the memory accesses) to shared memory addresses associated with the physical memory of the system and shared among the memory modules according to the global context.

At step 710, the method 700 continues by distributing the code of at least one program among the memory modules according to the distributed global context (for example, via the mapping). Step 710 may include step 712, in which the method 700 continues with the memory module of the system receiving, from other memory modules of the system, data associated with the received shared memory addresses. Step 710 may include step 714, in which the method 700 continues with the memory module of the system sending, to other memory modules of the system, data associated with the sent shared memory addresses.

At step 716, the method 700 continues by executing each part of the code at least partially according to the locality of the program code's virtual memory accesses to the memory modules of the system. At step 718, the method 700 continues by accessing the physical memory of the memory modules based on the virtual memory addresses decoded in the code and associated with the memory accesses.

In some embodiments, it should be understood that the steps of the methods 500, 600, and 700 can be implemented as a continuous process; for example, each step can run independently by monitoring input data, performing operations, and outputting data to subsequent steps. Also, such steps for each method can be implemented as a discrete-event process; for example, each step can be triggered by an event on which it should trigger, and produce a certain output. It should also be understood that each of FIGS.
5 to 7 represents a minimal method within a possibly larger method of a computer system that is more complex than the methods partially presented in FIGS. 1 to 4. Therefore, the steps depicted in each of Figures 5 to 7 can be combined with other steps associated with the larger methods of more complex systems.

Figures 8 and 9 illustrate example physical memory partitions (see, for example, partitions 801, which include partitions 802a, 802b, 802c, and 802d) according to some embodiments of the present disclosure, and bit groups of a shared memory address (for example, see shared memory address 803) that map to at least one partition and at least one data location within the partition. More specifically, the predetermined bits of the shared memory address 803 include two or more bit groups (for example, see the first bit group 804 and the second bit group 806). The first bit group 804 may provide a mapping to a physical memory partition among the multiple physical memory partitions of the multiple memory modules described herein (for example, see partition 802b, which maps to the first bit group 804). The second bit group 806 may provide a mapping to a data location within a physical memory partition (see, for example, partition 802b, which contains the data location mapped to the second bit group 806).

Specifically, in FIG. 9, the predetermined bits of the shared memory address 803 include four bit groups. The third bit group 808 may provide a mapping to a cache set that includes one or more of the multiple physical memory partitions of the multiple memory modules described herein (for example, see the third bit group 808, which provides a mapping to at least the cache set associated with partitions 802b and 802c). The cache set may be distributed across multiple partitions at certain memory locations (e.g., at certain arrays, banks, rows, or columns).
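Read as fields, the bit groups above can be extracted from the shared address with shifts and masks. The widths and the contiguous low-to-high ordering below are illustrative assumptions; as the disclosure notes elsewhere, the groups need not be contiguous or in order.

```python
# Assumed contiguous layout, low bits first (widths are examples only):
# | ... | set (group 808) | partition (group 804) | location (group 806) |
LOC_BITS, PART_BITS, SET_BITS = 12, 2, 4

def decode_shared_address(addr):
    location  = addr & ((1 << LOC_BITS) - 1)                              # group 806
    partition = (addr >> LOC_BITS) & ((1 << PART_BITS) - 1)               # group 804
    cache_set = (addr >> (LOC_BITS + PART_BITS)) & ((1 << SET_BITS) - 1)  # group 808
    return {"set": cache_set, "partition": partition, "location": location}

fields = decode_shared_address(0xABCD123)
# With 2 partition bits, fields["partition"] selects one of four partitions
# (e.g., 802a-802d); fields["location"] is the data location within it.
```

Decoding is stateless, so any module or processor can derive the partition and location for a shared address it receives.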
And, the value of the fourth bit group 810 can at least partially provide tag information for the corresponding cache set. The tag information in the fourth bit group 810 may provide a tag for determining whether a page or a cache line is present in the cache. Tag-matching hardware can perform a tag lookup, and if it finds the tag, then the data (e.g., the data in the page or cache line) is present or cached in the cache. If the data is not in the cache, the data may need to be accessed in a backing storage device. The tag-matching hardware can include multiple comparators or lookup tables (LUTs), or include dedicated memory elements that can provide matching functions. The second bit group 806 may provide a mapping to the data location within the partition (and, more specifically, within the cache set) after tag matching (for example, tag matching via the tag information provided by the fourth bit group 810).

Regarding the predetermined bits of the shared memory addresses described herein, the bit groups of the predetermined bits may be arranged in order or out of order, and the groups may be contiguous or not.

Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated.
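The tag lookup described above behaves like a set-associative cache probe: the set bits select a set, the tag from group 810 is compared against each way in that set, and a miss falls back to the backing store. A minimal sketch follows; the dict-per-set structure, the 18-bit tag split, and the naive eviction policy are assumptions (real tag-matching hardware compares all ways in parallel with comparators or LUTs).

```python
class TagMatchingCache:
    def __init__(self, num_sets, ways):
        self.sets = [dict() for _ in range(num_sets)]  # per set: tag -> cached line
        self.ways = ways

    def lookup(self, cache_set, tag):
        # A hit means the page/cache line is present; None models a miss.
        return self.sets[cache_set].get(tag)

    def fill(self, cache_set, tag, line):
        ways = self.sets[cache_set]
        if len(ways) >= self.ways:       # naive eviction: drop an arbitrary way
            ways.pop(next(iter(ways)))
        ways[tag] = line

# Tag (group 810) taken from the high bits of the shared address; the split
# below matches the field widths assumed earlier, not a mandated layout.
addr = 0xABCD123
tag, cache_set = addr >> 18, (addr >> 14) & 0xF

cache = TagMatchingCache(num_sets=16, ways=2)
missed = cache.lookup(cache_set, tag)        # miss: go to the backing store
cache.fill(cache_set, tag, b"cached line")
hit = cache.lookup(cache_set, tag)           # hit after the fill
```

Only after a hit do the location bits (group 806) come into play, selecting the data within the matched set, which mirrors the tag-then-location ordering described above.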
It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs, and magneto-optical disks), read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the methods. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language.
It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.

The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions that can be used to program a computer system (or other electronic device) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium, such as read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, and so on.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made to the present disclosure without departing from the broader spirit and scope of the embodiments of the present disclosure as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense.
Hybrid virtual GPIO

Related Application

This application claims the benefit of U.S. Provisional Patent Application Serial No. 61/982, filed on Apr., the entire contents of which are hereby incorporated by reference in their entirety.

Technical Field

This application relates to general purpose input/output (GPIO), and more particularly to an integrated circuit configured to use a pin pair as virtual GPIO pins.

Background

General purpose input/output (GPIO) enables an integrated circuit designer to provide generic pins that can be customized for particular applications. For example, depending on user needs, a GPIO pin can be programmed as an output pin or an input pin. A GPIO module or peripheral will typically control a group of pins, which can vary based on interface requirements. Because of their programmability, GPIO pins are commonly included in microprocessor and microcontroller applications. For example, an application processor in a mobile device can use a number of GPIO pins for handshake signaling such as inter-processor communication (IPC) with a modem processor.

For such handshake signaling, a sideband signal is considered "symmetric" if it must be both transmitted and received by a processor. If there are n symmetric sideband signals to exchange, each processor requires n*2 GPIO pins (one pin to transmit a given signal and one pin to receive that signal). For example, a symmetric IPC interface between a modem processor and an application processor can comprise five signals, so the resulting IPC signaling requires 10 GPIO pins. Devoting so many GPIO pins to IPC communication increases manufacturing cost. Moreover, investing too many GPIOs in IPC limits the availability of GPIOs for other system-level peripheral interfaces.
This problem cannot be solved by moving the IPC communication onto the main data bus between the processors, as doing so violates certain corner conditions.

In addition to GPIO signals, a processor conventionally communicates with external devices over a bus such as an SPI bus, which has a dedicated transmit pin and a dedicated receive pin for messaging with the external devices. In contrast to GPIO signals, such messaging signals are not specific to a particular pin; in other words, various messages can be transmitted on the same dedicated messaging transmit pin. The receiving device does not know a priori what a given message concerns. This is in contrast to a GPIO signal, which is dedicated to a specific GPIO pin, such that the very fact that a signal is received on the corresponding GPIO pin identifies that signal to the processor. That is not the case for messaging signals. Such signals instead carry address bits that the processor uses to route a received messaging signal to the appropriate register. Upon registering the message, the processor must then interpret it. The resulting need for a dedicated messaging transmit pin and a dedicated messaging receive pin adds significantly to manufacturing cost.

Accordingly, there is a need in the art for a hybrid GPIO and messaging architecture that can accommodate a large number of input/output signals without requiring an excessive number of pins.

Overview

A hybrid virtual GPIO architecture is provided for communication between two integrated circuits, each having a processor. This architecture is deemed "hybrid" because it accommodates both GPIO signals and messaging signals. As discussed earlier, a GPIO signal in a conventional GPIO system is dedicated to a specific pin: when a GPIO signal is received on the corresponding GPIO pin, that signal is thereby identified to the receiving processor.
However, messaging signals are received on a dedicated receive pin, such as the dedicated receive pin in a Serial Peripheral Interface (SPI) or inter-processor communication (IPC) interface. Various messaging signals can thus be received on the same dedicated receive pin. To distinguish among messaging signals, a messaging signal conventionally includes an address header containing an address. The receiving processor routes a received message to the appropriate register based on that address. For example, one type of message may relate to the identity of an installed card, such as a wireless card or a GPS card. Such a message would have an address mapped to the appropriate register so that the corresponding message content can be registered accordingly. By interpreting the contents of the register, the processor can then determine the identity of the installed card. Other types of messages are routed to the appropriate registers in a similar fashion.

In the hybrid GPIO interface disclosed herein, messaging signals are transmitted on the same dedicated transmit pin that carries the virtual GPIO signals. The number of virtual GPIO signals and the number of messaging signals can be customized for a given pair of transmitting and receiving processors. A handshake protocol is disclosed so that the processors in their respective integrated circuits can be informed of the number of virtual GPIO and messaging signals. Each integrated circuit also includes a hybrid GPIO interface for communicating with the remote processor using a signal set. The signal set comprises a set of GPIO signals, a set of virtual GPIO signals, and one or more messaging signals. Each integrated circuit thus includes a set of GPIO pins corresponding to its set of GPIO signals.
These GPIO pins are used to transmit the GPIO signal set in a conventional manner, as is known in the GPIO arts.

In contrast to the GPIO signal set, the virtual GPIO signal set and the messaging signals are not transmitted on GPIO pins. Instead, each integrated circuit uses a dedicated transmit pin and a dedicated receive pin to transmit and receive both the virtual GPIO signal set and the messaging signals. In that regard, the virtual GPIO signal set comprises a transmit set and a receive set. A finite state machine (FSM) in each integrated circuit is configured to serially transmit the transmit set to the remote processor over the dedicated transmit pin. The FSM is further configured to serially receive the receive set of virtual GPIO signals from the remote processor on the dedicated receive pin.

The messaging signals can comprise any type of signal that would conventionally be transmitted on a dedicated bus shared by various messaging signals. For example, the messaging signals can include inter-integrated circuit (I2C) signals for the initial configuration of a processor. Just like the virtual GPIO signals, the messaging signals can be divided into a transmit set and a receive set. The FSM uses the dedicated transmit pin to serially transmit the messaging signal transmit set and the dedicated receive pin to serially receive the messaging signal receive set.

The processor provides a first set of signals to the hybrid GPIO interface. From the hybrid GPIO interface, a portion of the first set of signals is transmitted to the remote processor as a first set of GPIO signals on a first set of corresponding GPIO pins. The remainder of the first set of signals from the processor is provided in parallel to the FSM by the hybrid GPIO interface.
Depending on the content of the remaining portion (virtual GPIO or messaging signals), the FSM then serially transmits the remaining portion on the dedicated transmit pin, for example as a transmit set of virtual GPIO signals.

The GPIO interface also receives a second set of GPIO signals from the remote processor on a second set of corresponding GPIO pins. Depending on the mode of operation, the FSM serially receives a receive set of virtual GPIO signals or a receive set of messaging signals from the remote processor, and provides the receive set to the hybrid GPIO interface in parallel.

There are two main embodiments of the disclosed hybrid virtual GPIO architecture. In a first embodiment, each frame transmitted on the dedicated transmit pin includes a header that identifies whether the frame comprises a transmit set of virtual GPIO signals or a transmit set of messaging signals. The header may also indicate that the corresponding frame identifies the virtual GPIO (vGPIO) stream length to be set on the receiver side, or indicate an acknowledgement of an expected vGPIO stream length. The frame size is thus variable, being determined by the stream length so programmed. In a second embodiment, the header is extended for frames that include both virtual GPIO signals and messaging signals, such that the extended header identifies the bit positions of the virtual GPIO signals and of the messaging signals. The hybrid GPIO interface can then provide a second set of signals to the receiving processor, the second set of signals including the second set of GPIO signals and a set of messaging signals from the remote processor.

The FSM transmits the transmit sets of virtual GPIO signals and messaging signals in frames, each delimited by a start bit and an end bit. The FSM in the remote processor thus receives the transmitted frames as the receive sets of virtual GPIO signals and messaging signals.
By monitoring whether it receives complete frames including both the start bit and the end bit, the FSM of one processor can detect whether the remote processor has failed.

Brief Description of the Drawings

FIG. 1 is a block diagram of an example hybrid virtual GPIO architecture.

FIG. 2A is a high-level block diagram of a hybrid virtual GPIO architecture in which a processor communicates with a single remote processor.

FIG. 2B is a high-level diagram of a hybrid virtual GPIO architecture in which a processor communicates with two remote processors.

FIG. 3 is a block diagram of a hybrid virtual GPIO finite state machine responsive to an external clock.

FIG. 4 illustrates the format of a virtual GPIO/messaging signal frame.

FIG. 5 is a flowchart of a method practiced by the GPIO architecture of FIG. 1.

FIG. 6 illustrates a length programming frame used to program the virtual GPIO and messaging frame lengths.

FIG. 7 illustrates an acknowledgement frame transmitted to acknowledge the frame length programmed in response to the frame of FIG. 6.

FIG. 8 illustrates an example virtual GPIO frame and an example messaging signal frame.

FIG. 9 illustrates an example combined virtual GPIO and messaging frame.

FIG. 10 illustrates a hybrid virtual GPIO finite state machine that does not use an external clock.

FIG. 11 is a timing diagram for data frames transmitted by the finite state machine of FIG. 10.

Embodiments of the present invention and its advantages are best understood by referring to the following detailed description. It should be appreciated that the same reference numerals are used to identify the same elements throughout the one or more drawings.

Detailed Description

A hybrid virtual general-purpose input/output (GPIO) architecture is provided that enables a system to use a pin pair as if that pin pair constituted a larger plurality of GPIO pins as well as a dedicated transmit pin and a dedicated receive pin for messaging signals.
As used herein, a "messaging signal" refers to a signal that would conventionally be transmitted on a dedicated transmit pin, such as is practiced in the IPC or SPI protocols. A messaging signal thus includes an address that enables the receiving processor to route the received messaging signal to the appropriate register. The hybrid virtual GPIO architecture is deemed "virtual" because, to the system-level applications that create and use the virtual GPIO signals, it is as if those signals were accommodated for input/output on conventional GPIO pins. In other words, a system on chip (SoC) or processor having a virtual GPIO architecture as disclosed herein experiences no functional difference between a GPIO signal and a virtual GPIO signal. However, only two pins are used to transmit and receive the virtual GPIO signals, which would otherwise each require their own dedicated GPIO pin pair (if the GPIO signals are symmetric). The architecture is deemed "hybrid" because the dedicated transmit pin used to carry the virtual GPIO signals is also used to pass messaging signals to the remote processor. Similarly, the dedicated receive pin used to receive the virtual GPIO signals is also used to receive messaging signals from the remote processor.

The virtual GPIO signals disclosed herein are discussed with respect to IPC between an application processor and a modem processor in a mobile telephone or other communication device. However, it will be appreciated that the virtual GPIO circuits and techniques disclosed herein are widely applicable to any system on chip (SoC) or application-specific integrated circuit (ASIC) requiring GPIO capabilities.

The disclosed hybrid virtual GPIO architecture makes the health of the transmitting node transparent to the receiving node.
This is an important advantage, particularly during the debug phase of a software implementation, because it indicates to the receiving processor when the transmitting processor has become inactive.

To enable such robust virtual GPIO capabilities, each integrated circuit includes a dedicated transmit pin coupled to a transmit line on the circuit board and a dedicated receive pin coupled to a receive line on the circuit board. In that regard, the virtual GPIO signals can be divided into a transmit set for transmission on the transmit line and a receive set for reception on the receive line. If the signaling is symmetric, the number of signals in each processor's transmit set is the same. However, the hybrid virtual GPIO architecture disclosed herein can accommodate asymmetric signaling, in which the transmit set of virtual GPIO signals for one processor is not the same size as the transmit set for the remote processor. Like the virtual GPIO signals, the messaging signals are also transmitted on the dedicated transmit pin and received on the dedicated receive pin.

Turning now to the drawings, FIG. 1 illustrates a hybrid virtual GPIO architecture 101 comprising an application processor integrated circuit 100 and a modem processor integrated circuit 105 within a mobile telephone or other communication device. Because each integrated circuit is coupled to a dedicated transmit line and a dedicated receive line, the transmit line 110a of the application processor integrated circuit 100 is the receive line of the modem processor integrated circuit 105. Similarly, the transmit line 110b of the modem processor integrated circuit 105 is the receive line of the application processor integrated circuit 100. These lines or wires are carried on the circuit board or other physical interconnection between integrated circuits 100 and 105.
Each integrated circuit includes a dedicated transmit pin 112 coupled to its transmit line (e.g., line 110b for modem processor integrated circuit 105). Similarly, each integrated circuit includes a dedicated receive pin 111 coupled to its receive line (e.g., line 110a for modem processor integrated circuit 105). A finite state machine (FSM) 115 in each integrated circuit controls transmission and reception over these dedicated lines and pins with reference to an external clock signal 120 from an external clock source (e.g., a 32 kHz sleep clock).

The application processor integrated circuit 100 includes a processor 101. Similarly, the modem processor integrated circuit 105 includes a processor 102. Each processor couples to a corresponding hybrid GPIO interface 103, which interfaces with GPIO pins 125 in a conventional manner. A portion of the signals processed by each hybrid GPIO interface 103 can be transmitted and received as conventional GPIO signals 130 on conventional GPIO pins 125. The remainder of the signals processed by the GPIO interface 103, however, are not transmitted or received through conventional GPIO pins 125. Instead, this remaining portion comprises a plurality of virtual GPIO signals 135 that are transmitted and received by the corresponding FSM 115 using the dedicated transmit and receive pins. Each FSM 115 also interfaces directly with the corresponding processor for receiving and transmitting messaging signals 136. Because the messaging signals 136 are not GPIO signals, they are not coupled through the GPIO interface 103. Each FSM 115 transmits and receives the messaging signals 136 through its dedicated transmit pin 112 and receive pin 111. These pins are thus "hybrid" pins in that they serve both the virtual GPIO signals 135 and the messaging signals 136.

The virtual GPIO signals 135 do not each have their own dedicated pins, in contrast to the conventional GPIO signals 130.
This is quite advantageous in that the hybrid virtual GPIO architecture 101 achieves a significant reduction in pin count compared to a conventional GPIO embodiment in which the virtual GPIO signals 135 would each require their own pin. The messaging signals 136 would also conventionally require another dedicated transmit pin and another dedicated receive pin. These additional pins, too, are eliminated in the advantageous hybrid virtual GPIO architecture of the present application.

An integrated circuit may include just one FSM 115 or may include a plurality of such FSMs for interfacing with multiple external systems. FIG. 2A illustrates a hybrid virtual GPIO architecture in which an integrated circuit 200 includes a single FSM 115 for communicating with a remote processor in an integrated circuit 205 that includes its own FSM 115. In contrast, an integrated circuit 220 shown in FIG. 2B includes an FSM 115A and an FSM 115B for communicating with remote processors in integrated circuits 225 and 230, respectively. In that regard, a system on chip (SoC) such as the processors discussed herein can be configured with as many FSMs as are needed to accommodate hybrid virtual GPIO signaling with other SoCs. Regardless of the number of FSMs a processor may have, each FSM communicates using its own dedicated transmit pin 240 and receive pin 245, as indicated in FIG. 2A.

Referring again to FIG. 1, because the virtual GPIO signals 135 are accommodated using a finite state machine such as FSM 115, processors 101 and 102 can be asleep or in another type of dormant state yet still be able to receive the virtual GPIO signals 135 and the messaging signals 136.
In this manner, the virtual GPIO architecture 101 not only advantageously reduces the number of pins per GPIO interface 103 but is also low power.

As used herein, a "pin" is a generic term for the structure, such as a pad or an actual pin, that an integrated circuit uses to couple to leads on a circuit board or other physical interconnect (e.g., package interconnect or through-hole via interconnect). For example, if each integrated circuit has sixteen GPIO pins or pads 125 as shown in FIG. 1, these pins can be configured to accommodate eight symmetric GPIO signals 130 (for illustration clarity, FIG. 1 shows only four conventional GPIO signals #1 through #4) or sixteen asymmetric GPIO signals 130. In addition, each integrated circuit uses lines 110a and 110b to accommodate the input/output interfacing of a plurality (n) of virtual GPIO signals 135, where n is a plural positive integer. Similarly, each integrated circuit uses lines 110a and 110b to accommodate the input/output interfacing of a plurality (m) of messaging signals 136, where m is a plural positive integer. To each processor core there is no difference between a GPIO signal 130 and a virtual GPIO signal 135: both are simply signals to be transmitted and received through the GPIO interface 103 as needed. However, because the virtual GPIO signals 135 and the messaging signals 136 have no dedicated pins (in contrast to the conventional GPIO signals 130), the virtual GPIO signals 135 and messaging signals 136 are serialized in the FSMs 115 for transmission on lines 110a and 110b. Upon reception, each FSM 115 deserializes the received serialized virtual GPIO signals and messaging signals. Each FSM 115 thus acts as a serializer/deserializer for the virtual GPIO signals 135 and the messaging signals 136.

A processor may need to receive an interrupt signal in response to changes in selected ones of the virtual GPIO signals or messaging signals.
For the virtual GPIO signals 135 and messaging signals 136, a modem power manager (MPM) 140 monitors these selected signals in a manner programmed through interrupt configuration registers (not illustrated). Each virtual GPIO signal 135 has a corresponding interrupt configuration register. If a virtual GPIO signal 135 is required to generate an interrupt in response to that signal changing state, the corresponding configuration register is programmed accordingly. Similarly, if a virtual GPIO signal 135 or a messaging signal 136 is one that should not generate an interrupt regardless of whether it has changed state, the corresponding interrupt configuration register is also programmed accordingly. The MPM 140 may also comprise a finite state machine. Thus, like the FSM 115, the MPM 140 is low power and remains active regardless of whether its processor is in a sleep mode or some other dormant state.

The virtual GPIO signals 135 can be subdivided into a transmit set and a receive set. In a symmetric system, each transmit set has the same number of signals, and likewise each receive set has the same number of signals. However, it will be appreciated that the virtual GPIO architecture 101 is advantageous in that it readily accommodates asymmetric signaling embodiments in which the transmit set of virtual GPIO signals 135 and the transmit set of messaging signals 136 have different sizes, and in which the receive set of virtual GPIO signals 135 and the receive set of messaging signals 136 also have different sizes. Regardless of whether the architecture 101 is symmetric or asymmetric, each FSM 115 receives the transmit set of virtual GPIO signals 135 from the GPIO interface 103 in parallel, meaning that each signal in the transmit set is carried on its own lead between the GPIO interface 103 and the FSM 115.
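The interrupt-selection behavior described above (a signal interrupts the processor only if its interrupt configuration register enables it and its state has changed) can be modeled in a few lines. This is an illustrative software sketch, not the patent's hardware; the bitmask representation and the function name are assumptions introduced here:

```python
# Hypothetical model of the MPM's interrupt selection. Bit i of each
# word models monitored signal i; a set bit in irq_enable_mask models
# an interrupt configuration register programmed to generate interrupts.

def pending_interrupts(previous_state: int, current_state: int,
                       irq_enable_mask: int) -> int:
    """Return a bitmask of signals that should interrupt the processor."""
    changed = previous_state ^ current_state  # bits that toggled
    return changed & irq_enable_mask          # keep only enabled signals
```

For instance, with enable mask 0b0101, a change on signals 1 and 2 (0b0110) leaves only signal 2 (0b0100) pending, matching the rule that a disabled signal never interrupts regardless of state changes.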
The messaging signals 136 are not GPIO signals, so they are not coupled through the GPIO interface 103. In that regard, the hybrid interface represented by each FSM 115 can be given a peripheral address by the corresponding processor 101 or 102. Each FSM 115 is configured to decode an address field 137 in the messaging signals 136 so that a given messaging signal 136 can be stored in a corresponding messaging register 138. These messaging registers 138 are each mapped to some offset from the general address for the FSM 115 within the address space of the corresponding processor 101 or 102. In response to an interrupt from the MPM 140, the processor 101 or 102 can then access the messaging registers 138 to obtain the appropriate messaging signals 136. Like the virtual GPIO signals 135, the messaging signals 136 can be subdivided into a transmit set and a receive set. Whether the architecture is symmetric or asymmetric, the resulting transmission of these transmit sets by the FSM 115 occurs on the single transmit pin 112. The transmit set of virtual GPIO signals 135 from one processor becomes the receive set of virtual GPIO signals 135 for the remote processor; similarly, the transmit set of messaging signals 136 becomes the receive set of messaging signals 136 for the remote processor. The FSM 115 of the remote processor then deserializes the receive set of virtual GPIO signals 135 so that it can be presented to the GPIO interface 103 in parallel.

Each FSM 115 includes a configuration register (not illustrated) that stores the previous state of the transmit sets of virtual GPIO signals 135 and messaging signals 136. In this manner, each FSM 115 can monitor the current state of the virtual GPIO signal 135 transmit set received from the GPIO interface 103 and trigger a serial transmission of the corresponding transmit set only if the current state has changed relative to the previous state.
In other words, the FSM 115 monitors one or more signals within the transmit set by storing their previous state in the configuration register 107, and will trigger a serial transmission of the transmit set only if that state has changed. Each processor is aware of the addresses of the messaging signal registers 138 and can thereby write the desired transmit set into them and also read any changes in the receive set. The FSM 115 monitors whether the transmit set of messaging signals 136 has changed relative to its previous transmission and will accordingly trigger transmission of that transmit set to the remote processor. The MPM 140 monitors whether a receive set has changed, as discussed previously, and interrupts the corresponding processor so that the changed receive set can be processed.

As discussed above, each FSM 115 acts as a serializer/deserializer to serialize each transmit set and deserialize each receive set. FIG. 3 is a block diagram of the FSM 115 that better illustrates these operations. The FSM 115 exchanges the virtual GPIO signals 135 and messaging signals 136 with the corresponding processor through a multiplexing module 300. The multiplexing module interfaces with the corresponding processor through the GPIO interface 103 for the virtual GPIO signals 135, and directly with the corresponding processor for the messaging signals 136. In one embodiment, each FSM 115 includes a logic circuit 301 that authorizes transmission of the transmit set of virtual GPIO signals 135 or the transmit set of messaging signals 136 onto transmit line 110a only if there has been a change in either transmit set. Logic circuit 301 thus compares the current state of the transmit set of virtual GPIO signals 135 (or messaging signals 136) with the previous state of that transmit set as stored in the corresponding configuration register 107.
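This compare-before-transmit behavior of configuration register 107 and logic circuit 301 can be sketched as a small software model (the class name and register width are illustrative assumptions, not taken from the patent):

```python
class TransmitGate:
    """Models configuration register 107 plus the state compare of logic
    circuit 301: transmission is authorized only when the transmit set
    differs from the previously transmitted state."""

    def __init__(self, width: int):
        self.previous = 0                 # configuration register contents
        self.mask = (1 << width) - 1      # transmit-set width in bits

    def should_transmit(self, current: int) -> bool:
        # Bitwise XOR flags any signal whose state changed.
        changed = ((current ^ self.previous) & self.mask) != 0
        if changed:
            self.previous = current & self.mask  # record the new state
        return changed
```

Presenting the same transmit set twice yields no second transmission; any single-bit change re-arms the serial transfer.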
For example, logic circuit 301 can include an exclusive-OR (XOR) gate 310 to perform the comparison. The multiplexing module 300 loads the transmit set in parallel into a parallel-in serial-out (PISO) shift register 315. If the enable signal 320 from the XOR gate 310 goes high (indicating a change between the current state of the transmit set and the previous state), the PISO shift register 315 is enabled to serially shift its contents out onto the transmit line 110a in response to cycles of the external clock 120.

The FSM 115 also deserializes the receive sets of virtual GPIO signals 135 or messaging signals 136 in an analogous manner using a serial-in parallel-out (SIPO) shift register 325. The receive sets of virtual GPIO signals 135 and messaging signals 136 are generated by the remote processor and transmitted by the remote processor onto the receive line 110b. The receive set of virtual GPIO signals 135 (or messaging signals 136) is serially shifted into the SIPO shift register 325 in response to cycles of the external clock 120. As discussed further herein, the FSM 115 is configured to transmit and receive the transmit and receive sets of the virtual GPIO signals 135 and messaging signals 136 in frames delimited by separate start and end bits.

In one embodiment, the FSM 115 may be considered to comprise means for receiving the transmit set of virtual GPIO signals from the GPIO interface and serially transmitting that transmit set to the remote processor on the dedicated transmit pin, and for retrieving the transmit set of messaging signals from the messaging signal registers and serially transmitting that transmit set to the remote processor on the dedicated transmit pin.

These frames have a predefined size. In one embodiment, the frame size is determined by the header to be up to a certain number of bits. An example frame 400 is shown in FIG. 4. The header 405 can include two function bits, fn_0 and fn_1.
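The serializer/deserializer roles of PISO shift register 315 and SIPO shift register 325 can be modeled in a few lines. This is a behavioral sketch only; the LSB-first bit ordering is an assumption for illustration, since the text does not specify a shift direction:

```python
def piso_shift_out(word: int, width: int) -> list:
    """PISO register model: emit one bit per clock cycle, LSB first."""
    return [(word >> i) & 1 for i in range(width)]

def sipo_shift_in(bits: list) -> int:
    """SIPO register model: reassemble serially received bits into a
    parallel word for presentation to the GPIO interface."""
    word = 0
    for i, bit in enumerate(bits):
        word |= (bit & 1) << i
    return word
```

A round trip through both models preserves the transmit set, e.g. `sipo_shift_in(piso_shift_out(0b1011, 4))` returns `0b1011`, mirroring how the remote FSM recovers in parallel what the local FSM shifted out serially.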
In one embodiment, if both function bits are zero, the subsequent bits are virtual GPIO signals 135. If fn_0 is zero and fn_1 is one, the subsequent bits are messaging signals 136. If fn_0 is one and fn_1 is zero, the subsequent bits represent the virtual GPIO frame length desired by the remote processor. Similarly, if both function bits are one, the subsequent bits represent an acknowledgement of the frame length desired by the remote processor. If the transmit set of virtual GPIO signals 135 (or the transmit set of messaging signals 136) is smaller than this fixed frame size, the unused bits in each frame may be don't-care values. Alternatively, each FSM 115 can be configured to vary the size of the transmitted frames depending on the number of bits required for a given application. It will be appreciated that the foregoing two-function-bit encoding is merely an example; other headers and encoding protocols may be used to identify whether a frame carries virtual GPIO signals 135 or messaging signals 136, identifies a virtual GPIO frame length, acknowledges a virtual GPIO frame length, identifies a messaging signal frame length, or acknowledges a messaging signal frame length. In one embodiment, frame 400 may also include a type bit (type_bit) associated with the programming and acknowledgement frames discussed further below. For example, in one embodiment, the type bit can be high to identify a virtual GPIO frame and low to identify a messaging signal frame.

The number of frames required to transmit the transmit set of virtual GPIO signals 135 or messaging signals 136 depends on the number of signals in a given transmit set and on the frame size. For example, suppose the frame size is eight bits and there are ten virtual GPIO signals 135 in the transmit set.
Two frames will then be required to transmit the transmit set using the eight-bit frame.

To detect a complete frame of the receive set of virtual GPIO signals 135 or messaging signals 136, the FSM 115 may include a logic circuit 350, shown in FIG. 3, that counts the cycles of the external clock 120 required after receiving the start bit of a frame. For example, assume that the receive set includes ten virtual GPIO signals 135 received in response to ten cycles of the external clock 120. After detecting the start bit and waiting another ten cycles of the external clock 120, logic circuit 350 will then expect to receive the end bit. If the end bit is detected accordingly, logic circuit 350 can strobe an output latch 351 to receive in parallel, as a complete frame, the receive set of virtual GPIO signals 135 that has been shifted into SIPO shift register 325. The latched receive set of virtual GPIO signals can then be presented to GPIO interface 103 through multiplexing module 300. Latching of a receive set of messaging signals 136 occurs analogously, although the receive set of messaging signals is loaded into the messaging signal register 138 rather than being routed through GPIO interface 103.

Referring again to PISO shift register 315, it will be appreciated that this register is configured to frame the transmit sets of virtual GPIO signals and messaging signals with start and end bits. The transmit set of virtual GPIO signals is thus transmitted in a frame 400 delimited by the start and end bits. Because the transmit set of the transmitting processor becomes the receive set of the remote processor, the receive set is framed accordingly. This framing is advantageous because each processor can thereby monitor the health of the remote processor without requiring any additional dedicated pins.
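The two-function-bit header encoding and the frame-count arithmetic described above can be sketched as follows. This is a simulation only; the dictionary, function names, and label strings are illustrative assumptions, not the patented implementation:

```python
import math

# Header function-bit encoding described in the text (fn_0, fn_1):
#   (0, 0) -> frame body carries virtual GPIO signals
#   (0, 1) -> frame body carries messaging signals
#   (1, 0) -> frame body programs a desired frame length
#   (1, 1) -> frame body acknowledges a frame length
HEADER_MEANINGS = {
    (0, 0): "virtual_gpio",
    (0, 1): "messaging",
    (1, 0): "program_length",
    (1, 1): "ack_length",
}

def decode_header(fn_0: int, fn_1: int) -> str:
    """Classify a frame from its two function bits."""
    return HEADER_MEANINGS[(fn_0, fn_1)]

def frames_needed(num_signals: int, frame_size: int) -> int:
    """Number of fixed-size frames needed to carry a transmit set."""
    return math.ceil(num_signals / frame_size)

# The worked example from the text: ten virtual GPIO signals in
# eight-bit frames require two frames.
assert decode_header(0, 0) == "virtual_gpio"
assert frames_needed(10, 8) == 2
```

A real FSM would of course evaluate these bits in hardware; the sketch only captures the decision table.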
For example, each FSM 115 can be configured to weakly pull its dedicated transmit pin 112 (and thus transmit line 110a) to the supply voltage during the default state, in which the current state of the transmit set of virtual GPIO signals is unchanged relative to the previous state. For such an embodiment, the start bit would be a logic zero, such that to transmit this start bit the FSM 115 grounds transmit line 110a. In this manner, each FSM 115 can readily detect a received start bit by detecting that receive line 110b has been pulled to ground. In one embodiment, the start and stop bits are logical complements: if the start bit is a logic zero, the stop bit is a logic high. The payload of the frame can then extend from the type bit to the stop bit 410 that delimits the end of the frame.

There is a possibility that a processor malfunctions such that it improperly pulls its transmit line 110a to ground. The remote processor will detect this as a start bit, and logic circuit 350 will accordingly begin counting toward the end of the frame. If the end bit is a logic one, each FSM 115 charges transmit line 110a to the supply voltage to signal the end of a frame transmission. If a processor has failed such that the remote FSM 115 detects what is taken to be a start bit, logic circuit 350 will not detect the end bit and will accordingly notify its processor of the remote processor's failure.

To allow sufficient setup time for reception, transmission of frame 400 should occur with reference to one clock edge and reception with reference to the opposite clock edge. For example, a bit in PISO shift register 315 can be shifted out for transmission on transmit line 110a in response to a falling (negative) edge of the external clock 120.
Conversely, a received bit on receive line 110b can be shifted into SIPO shift register 325 in response to a rising (positive) edge of clock 120.

So that a processor can detect an inactive state in the remote processor, each FSM 115 can be configured to weakly pull its transmit line high in the default state in which no frames are being transmitted. As previously discussed, the start and stop bits have opposite logic states. The start bit 406 of frame 400 of FIG. 4 may thus be a logic zero (ground), such that transmit line 110a is pulled low for the transmission of this bit, and the stop bit 410 may be a binary one, such that transmit line 110a is pulled high to the supply voltage for the transmission of this bit. Referring again to FIG. 3, logic circuit 350 is configured to monitor receive line 110b with reference to the rising edges of the external clock 120. The default state with no frame transmission is indicated by receive line 110b simply remaining high through the weak pull-up discussed above. If logic circuit 350 detects that receive line 110b has been pulled low (indicating the zero value of start bit 406) on one of the rising edges of the external clock 120, logic circuit 350 waits a sufficient number of clock cycles, based on the predefined size of frame 400, and then expects to detect the logic-high value of stop bit 410. Receipt of stop bit 410 indicates to logic circuit 350 that a full frame 400 has been completely shifted into SIPO shift register 325. At this point, logic circuit 350 strobes SIPO shift register 325 such that the received frame is provided in parallel through latch 351 to multiplexing module 300. The receive set of virtual GPIO signals (or messaging signals 136) may then be provided to the processor core through GPIO interface 103 accordingly.

A relatively slow external clock 120, such as a 32 KHz sleep clock, is sufficient for the signaling requirements of this IPC.
For example, assume that the minimum setup and hold requirements for the transmission of virtual GPIO signals 135 and messaging signals 136 are each two nanoseconds, and that the maximum expected lead or lag of the external clock 120 as received at an FSM 115 is six nanoseconds. It can readily be shown that the resulting maximum frequency of the external clock 120 would then be 62 MHz. A 32 KHz frequency, such as from a sleep clock, thus provides a very large margin of safety for such embodiments.

The method of operation of architecture 101 is summarized in the flowchart of FIG. 5. The method begins at step 500 with receiving a GPIO signal set from a first processor at a GPIO interface. Step 505 comprises transmitting a portion of the GPIO signal set from the GPIO interface to a remote processor over GPIO pins. Step 510 comprises serially transmitting a remainder of the GPIO signal set as virtual GPIO signals from the GPIO interface to the remote processor over a dedicated transmit pin. Finally, the method includes an act 515 of retrieving messaging signals from a messaging signal register written to by the first processor and serially transmitting the retrieved messaging signals to the remote processor over the dedicated transmit pin.

Consider the advantages of the disclosed hybrid virtual GPIO architecture: only two pins are needed, yet any number of virtual GPIO signals 135 and messaging signals 136 can be serialized and deserialized through the finite state machines. The only limitation is the timing requirements of the virtual GPIO signals with reference to the external clock 120 and any expected clock lag or lead.
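The timing bound mentioned above can be checked with a short calculation. Assuming, as the text does, that transmission and reception use opposite clock edges, half a clock period must absorb the worst-case clock lead or lag plus the setup (or hold) time:

```python
# Worst-case numbers from the example above.
t_setup = 2e-9   # setup requirement, seconds (hold is assumed equal)
t_skew = 6e-9    # maximum expected lead or lag of the external clock

# Transmit on one edge, receive on the opposite edge: half the clock
# period must cover the skew plus the setup time.
half_period = t_skew + t_setup        # 8 ns
f_max = 1.0 / (2.0 * half_period)     # maximum external clock frequency

print(f"f_max = {f_max / 1e6:.1f} MHz")                  # 62.5 MHz (~62 MHz as stated)
print(f"margin over 32 kHz sleep clock: {f_max / 32e3:.0f}x")
```

The roughly 2000x margin over the 32 KHz sleep clock is what makes such a slow clock comfortably sufficient here.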
In addition, no other pins are needed to make the health of one processor transparent to the other.

Frame 400 is also quite advantageous in that various messaging signals 136 and virtual GPIO signals 135 can be transmitted over the dedicated transmit pin 112 with as little overhead as two function bits. Example programming frames for setting the virtual GPIO frame length and the messaging signal frame length are shown in FIG. 6. Programming frame 600 sets the virtual GPIO frame length; programming frame 605 sets the messaging signal frame length. The number of bits used to define the frame length (and thus the length of each programming frame) is predefined. Thus, once an FSM 115 sees a header indicating that a frame length is being programmed (such as fn_0 equal to one and fn_1 equal to zero, as discussed above), it will read the frame length from the frame body. The FSM 115 then needs to know whether the virtual GPIO frame length or the messaging frame length is being programmed. Each header 405 of programming frames 600 and 605 is therefore followed by a frame type bit 610. For example, a frame type bit 610 equal to one may indicate that the virtual GPIO frame length is being programmed, and a frame type bit 610 equal to zero may indicate that the messaging signal frame length is being programmed. In one embodiment, each programming frame 600 and 605 has five programming bits ranging from bit-0 to bit-4. Each bit weights a power of two, as identified by its name: bit-0 is the coefficient of 2^0, bit-1 the coefficient of 2^1, bit-2 the coefficient of 2^2, bit-3 the coefficient of 2^3, and bit-4 the coefficient of 2^4. These five programming bits can thus program frame lengths from zero to 31.
Adding a programming bit would enable programming of frame lengths up to 63, and so on.

When a remote FSM 115 receives a programming frame such as frame 600 or 605, it can proceed to acknowledge the defined frame length using an acknowledgment frame. Example acknowledgment frames are shown in FIG. 7. Frame 700 is a virtual GPIO acknowledgment frame and frame 705 is a messaging signal acknowledgment frame. Each frame 700 and 705 includes a header 405 whose function bits identify the frame as an acknowledgment frame. In one embodiment, a header 405 in which both function bits are logic one identifies an acknowledgment frame. A frame type bit 710 following the header 405 identifies the type of frame being acknowledged. In one embodiment, the virtual GPIO acknowledgment frame 700 is identified by a frame type bit 710 equal to a logic one. Conversely, the messaging signal acknowledgment frame 705 can be identified by a frame type bit 710 equal to a logic zero. The programming bits following the frame type bit 710 equal the programming bits in the corresponding programming frame 600 or 605.

Once the frame lengths are thus programmed, frames 800 of virtual GPIO signals 135 or frames 805 of messaging signals 136 can be transmitted as shown in FIG. 8. Referring again to FIG. 1, note that there are n virtual GPIO signals 135 and m messaging signals 136. Each frame 800 can thus be dedicated to just one GPIO port (one of the n GPIO signals 135), or it can include one bit for each of the n GPIO signals 135. In other words, GPIO words can be transmitted serially on a per-port basis, or they can be transmitted in parallel. The same serial/parallel considerations apply to the messaging signals.
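The five programming bits bit-0 through bit-4 used in the programming and acknowledgment frames above simply form a binary number. A sketch of the weighting (the helper names are mine, not from the disclosure):

```python
def encode_length(length: int) -> list:
    """Encode a frame length (0..31) as programming bits [bit-0 .. bit-4],
    where bit-k is the coefficient of 2**k as described in the text."""
    if not 0 <= length <= 31:
        raise ValueError("five programming bits can express lengths 0..31")
    return [(length >> k) & 1 for k in range(5)]

def decode_length(bits: list) -> int:
    """Recover the frame length from programming bits [bit-0 .. bit-4]."""
    return sum(bit << k for k, bit in enumerate(bits))

assert encode_length(10) == [0, 1, 0, 1, 0]   # 10 = 2**1 + 2**3
assert decode_length(encode_length(31)) == 31
# A sixth programming bit would extend the range to 63, as noted above.
```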
Regardless of whether each frame 800 and 805 carries multiple ports or only one port, header 405 identifies whether the frame is a virtual GPIO frame or a messaging signal frame.

Instead of using separate frames to transmit virtual GPIO signals 135 and messaging signals 136, these signals may be combined in an alternative embodiment of the hybrid virtual GPIO architecture in which each frame includes both GPIO signals 135 and messaging signals 136. For example, FIG. 9 illustrates an example hybrid frame 900 that includes a header 405 and an extended header 905. The extended header 905 indicates the locations of the messaging signal bits and the virtual GPIO bits that follow the extended header 905 and precede the stop bit 410. Depending on latency requirements, either the message bits 910 or the virtual GPIO bits 915 may come first in the frame body. In some embodiments, the extended header 905 can include error-checking bits, such as CRC bits. Note that the extended header 905 need only identify the location and length of the virtual GPIO bits 915, or only the location and length of the message bits 910, since the remaining bits are then known by default to belong to the other class.

The use of the shared external clock 120 as discussed above is simple and easy to implement, but it requires each FSM 115 to be associated with a clock pin for receiving the shared clock 120. To avoid this additional pin requirement, the external clock 120 can be eliminated as discussed in U.S. Provisional Application Serial No. 61/907,947, the disclosure of which is incorporated herein. Referring again to FIG. 1, architecture 101 can thus be modified by eliminating the external clock 120 and its corresponding pins. Because any need to reserve a pin in each integrated circuit for receiving a shared clock is eliminated, the transmission of a signal transmit set between the transmitting and receiving integrated circuits is asynchronous.
To enable this advantageous asynchronous transmission and reception, each FSM 115 may include or be associated with an oscillator, such as a ring oscillator. The transmitting FSM pulse-width modulates the signal transmitted on the dedicated transmit pin in response to each bit in the transmit set by counting oscillations from the oscillator. The bits in the transmit set can be transmitted in a data frame, each bit in the frame being a pulse-width modulated version of the corresponding bit in the transmit set. Each bit in the transmitted data frame occupies a particular bit period used for the pulse width modulation. For example, if a transmitted bit has a first binary state (such as binary zero), the FSM can count a first number of oscillations such that a majority of the bit period expires. When the first number of oscillations has been counted, the FSM pulses the dedicated transmit pin to a first binary voltage, such as the supply voltage VDD. At the beginning of the count, the dedicated transmit pin is driven to an opposite second binary voltage, such as ground.

Conversely, if a transmitted bit has the opposite binary state (such as binary one), the FSM begins transmitting the bit at the second binary voltage (such as ground) and counts a second number of oscillations such that only a minority of the bit period expires. When the second number of oscillations has been counted, the FSM pulses the dedicated transmit pin to the first binary voltage. In this manner, the voltage of the transmit line coupled to the dedicated transmit pin is pulsed to the first binary voltage with a variable pulse width. If the current transmit bit has the first binary value, the transmit line is pulsed to the first binary voltage with a first pulse width.
Conversely, if the current transmit bit has the opposite second binary value, the transmit line is pulsed to the first binary voltage with a second pulse width.

Transmitted data frames received from the remote processor on the dedicated receive pin are demodulated at the FSM in a similar manner. It is convenient for the default (idle) state of each transmit line (which is the receive line of the receiving processor) to be charged to the supply voltage VDD. This allows the health of the remote processor to be transparent to the receiving processor, as discussed further below. The second binary voltage in such an embodiment can then be ground. The receiving FSM identifies the beginning of a received bit by detecting when the dedicated receive pin is discharged. The receiving FSM can then begin counting oscillations from its oscillator. Two counts are then generated: a first receive count of how many oscillations occur during the portion of the bit in which the dedicated receive pin is charged to the first binary voltage, and a second receive count of how many oscillations occur during the portion of the bit in which the dedicated receive pin is at the second binary voltage. By comparing the two receive counts, the receiving FSM can determine whether the first pulse width or the second pulse width applies to the received bit. The received data frames are demodulated accordingly, eliminating any need for a shared clock to coordinate the transmission of data frames on the transmit line. To distinguish such FSMs from the FSMs 115 that use an external clock, they will be referred to in the following as internal-clock FSMs.

FIG. 10 is a block diagram of an internal-clock FSM 1015 that better illustrates its transmit and receive operations. The FSM 1015 receives the transmit set of virtual GPIO signals 135 from its GPIO interface 103 (shown in FIG. 1) through the multiplexing module 300.
Alternatively, multiplexing module 300 can receive a transmit set of messaging signals 136, as discussed earlier for FSM 115. The FSM 1015 includes a logic circuit 301 that authorizes serial transmission of a signal transmit set on transmit line 110a as a pulse-width modulated signal if the transmit set has changed relative to its previous state. In this way, a transmit set that is unchanged from the previous transmission need not be retransmitted. Logic circuit 301 thus compares the current transmit set of virtual GPIO signals with the previous transmit set stored in a latch or configuration register 107. To perform the comparison, logic circuit 301 can include an exclusive OR (XOR) gate 310 that XORs the current transmit set with the previous transmit set stored in configuration register 107 (the previous transmit set may be denoted the previous "GPIO state," as shown in FIG. 10). The multiplexing module 300 loads the current transmit set in parallel into a parallel-in serial-out (PISO) shift register 315. If the enable signal 320 from XOR gate 310 goes high (indicating a change between the current transmit set and the transmit set stored in register 107), the PISO shift register 315 is enabled to serially shift its contents out onto transmit line 110a in response to a shift signal 120.

Each signal transmit set forms a data frame stored in PISO shift register 315. The FSM 1015 includes a pulse width modulator 355 that pulse-width modulates the transmit set of bits shifted out of PISO shift register 315 into a pulse-width modulated output signal driven to the remote processor over transmit line 110a. The modulation is responsive to counts of oscillation cycles from an oscillator, such as counts of a transmit ring oscillator output signal 360 from a transmit ring oscillator (RO) 361.
Modulator 355 and transmit ring oscillator 361 can be triggered by assertion of the enable signal from XOR gate 310. In response to this trigger, modulator 355 gates shift signal 120 such that PISO shift register 315 shifts the initial bit of the signal transmit set into modulator 355.

Modulator 355 includes at least one counter that counts cycles of the ring oscillator output signal 360 (for example, counters 1105 and 1110 shown in FIG. 11 and described further below). Depending on the pulse width desired from the pulse width modulation, the counter counts to a first count or to a second count greater than the first count. After counting enough cycles to satisfy the appropriate one of the first and second counts, the counter re-gates shift signal 120 such that a subsequent bit of the data frame stored in PISO shift register 315 is shifted into modulator 355. In this manner, the bits of the signal transmit set of the data frame stored in PISO shift register 315 are shifted into modulator 355 one at a time. Depending on the binary value of each bit shifted out of PISO shift register 315, pulse width modulator 355 pulse-width modulates the corresponding pulse transmitted on transmit line 110a. In this regard, each processor can be configured to weakly pull its transmit line 110a to the supply voltage VDD during the default state (no data transmission). In such an embodiment, as shown in the timing diagram of FIG. 11, the transmission of each bit period of a data frame begins by discharging transmit line 110a to ground (VSS). Each pulse-width modulated bit transmission begins by discharging transmit line 110a for some initial discharge portion of the bit period (such as 25% of the bit period). Depending on the bit value, modulator 355 either keeps line 110a discharged for a majority of the bit period (e.g., 75%) or charges line 110a back to VDD immediately after the initial discharge portion of the bit period expires.
In other words, a binary value can be modulated into a relatively narrow high-voltage (VDD) pulse within the bit period, and the complement of that binary value can be modulated into a relatively wide high-voltage (VDD) pulse within the bit period.

The initial bit of the example data frame shown in FIG. 11 is a binary zero. In one embodiment, a binary zero can be modulated with a first pulse width in which transmit line 110a is held at ground for 75% of the bit period. This majority of the bit period corresponds to majority counter 1110 counting to the second count. Because the bit to be transmitted is a binary zero, pulse width modulator 355 keeps transmit line 110a discharged until the second count is satisfied. When the second count is reached, pulse width modulator 355 then pulses transmit line 110a to the supply voltage VDD for the remainder of the bit period. This pulse duration corresponds to minority counter 1105 counting to the first count (just 25% of the bit period). The resulting voltage pulse transmitted on line 110a thus has a pulse width of only 25% of the bit period.

Conversely, a binary one can be modulated with a second pulse width in which transmit line 110a is grounded only during the initial discharge portion, such as the first 25% of the bit period. Transmit line 110a is then discharged until the first count is satisfied. Once the first count is satisfied, pulse width modulator 355 pulls transmit line 110a up to the supply voltage VDD for the remainder of the bit period, as determined by resetting majority counter 1110 to zero and counting until the second count is satisfied. The second pulse width, during which the voltage of transmit line 110a is charged to the supply voltage VDD, thus comprises 75% of the bit period.
However, it will be appreciated that different pulse widths can be used in alternative embodiments to represent the desired binary values.

In one embodiment, modulator 355 can include a logic circuit 1100. Depending on the bit value, logic circuit 1100 triggers either minority counter 1105 or majority counter 1110 to begin counting. It will be appreciated, however, that a single counter could be used to count to the first or second count depending on the desired pulse width modulation. When triggered by logic circuit 1100, minority counter 1105 or majority counter 1110 counts cycles from transmit ring oscillator (RO) 361. For example, minority counter 1105 can be configured to count a number of cycles corresponding to 25% of the bit period, whereupon it asserts its output signal to indicate that the first count is satisfied. Similarly, majority counter 1110 can be configured to count a number of cycles corresponding to 75% of the bit period, whereupon it asserts its output signal. In this embodiment, modulator 355 is configured to discharge transmit line 110a to ground at the beginning of each bit period. Depending on the bit value, modulator 355 charges transmit line 110a back to the supply voltage VDD upon assertion of the output signal from the appropriate counter. For example, the first bit in the data frame is a binary zero, so modulator 355 drives transmit line 110a high to VDD when majority counter 1110 asserts its output signal. Similarly, because the second bit in the data frame is a binary one, modulator 355 drives transmit line 110a high to VDD when minority counter 1105 asserts its output signal.
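The counter-driven modulation just described can be simulated in software. Here oscillator cycles are abstract ticks, the 25%/75% split follows the example above, and the function itself is only an illustrative sketch, not the hardware:

```python
def pwm_encode_bit(bit: int, bit_period_ticks: int = 100) -> list:
    """Return the transmit-line voltage (0 = ground, 1 = VDD) at each
    oscillator tick of one bit period.

    Per the example: each bit period starts discharged; a binary zero
    stays low for 75% of the period (majority count) and is pulsed high
    for the final 25%, while a binary one is low only for the initial
    25% (minority count) and high for the remaining 75%.
    """
    minority = bit_period_ticks // 4       # first count (25% of period)
    majority = 3 * bit_period_ticks // 4   # second count (75% of period)
    low_ticks = minority if bit == 1 else majority
    return [0] * low_ticks + [1] * (bit_period_ticks - low_ticks)

line = pwm_encode_bit(0)
assert line[:75] == [0] * 75 and line[75:] == [1] * 25  # narrow high pulse
line = pwm_encode_bit(1)
assert line[:25] == [0] * 25 and line[25:] == [1] * 75  # wide high pulse
```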
It will be appreciated that the initial 25% low period is only an example; other fractions of the bit period can also be used.

In one embodiment, the combination of logic circuit 1100, counters 1105 and 1110, modulator 355, and PISO shift register 315 can be considered to comprise a means for serially processing each signal in a transmit set into a corresponding pulse-width modulated signal, wherein the means is configured to determine the pulse width of each serially processed signal by counting oscillations from an oscillator to one of a first count and a second count responsive to the binary value of the serially processed signal, and wherein the means is further configured to serially transmit the corresponding pulse-width modulated signals to the remote processor over the dedicated transmit pin.

Referring again to FIG. 10, FSM 1015 also deserializes a signal receive set (virtual GPIO and/or messaging signals) in a similar manner using a serial-in parallel-out (SIPO) shift register 325. A demodulator 370 demodulates the pulse-width modulated signal received from the remote processor on receive line 110b. The demodulator 370 is configured to detect the beginning of a received data frame in the received pulse-width modulated signal, such as by detecting the discharge of receive line 110b, which triggers a receive ring oscillator 375 to begin oscillating a receive ring oscillator output signal 380. Note that in an alternative embodiment, oscillators 375 and 361 can be the same oscillator. Similar to modulator 355, demodulator 370 can include counters, such as a low counter 415 and a high counter 420. In each bit period, the low counter 415 is triggered to count while receive line 110b is discharged. Conversely, the high counter 420 is triggered to count while receive line 110b is charged to the supply voltage VDD.
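The receive-side decision reduces to comparing how many oscillator ticks the line spent high versus low during the bit period. A sketch of that decision follows (a simulation only; the counter behavior is idealized and the function name is an assumption):

```python
def pwm_decode_bit(samples: list) -> int:
    """Demodulate one bit period of receive-line samples (0 = ground,
    1 = VDD).

    CL mimics the low counter (ticks while the line is discharged) and
    CH mimics the high counter (ticks while the line is at VDD); the
    comparator decision is CH > CL -> binary one, else binary zero.
    """
    ch = sum(1 for s in samples if s == 1)  # high count, CH
    cl = len(samples) - ch                  # low count, CL
    return 1 if ch > cl else 0

# A wide high pulse (75% high) demodulates to a one;
# a narrow high pulse (25% high) demodulates to a zero.
assert pwm_decode_bit([0] * 25 + [1] * 75) == 1
assert pwm_decode_bit([0] * 75 + [1] * 25) == 0
```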
In an alternative embodiment, counters 415 and 420 may be implemented as a single shared counter that counts the oscillations occurring in each binary voltage state of receive line 110b. By comparing the counts from counters 415 and 420, demodulator 370 can form a demodulated data signal 382 accordingly. In particular, if the count from high counter 420 is greater than the count from low counter 415 in a given bit period, demodulator 370 can drive demodulated data signal 382 high to the supply voltage VDD to indicate that a relatively wide pulse was received. Conversely, if the count from the low counter is greater, demodulator 370 can discharge demodulated data signal 382 to VSS to indicate that a relatively narrow pulse was received.

Demodulator 370 can also assert a shift signal 381 to SIPO shift register 325 upon detecting, from the counts, a bit period boundary. The SIPO shift register 325 then shifts in the demodulated data signal 382 from demodulator 370. The FSM 1015 can be configured to process the predefined data frame sizes for the signal transmit and receive sets determined through the programming frames discussed above. Both counters 415 and 420 are initialized at the beginning of each bit period. The low counter 415 counts cycles from receive ring oscillator 375 while the receive line 110b voltage is low, and the high counter 420 counts cycles from receive ring oscillator 375 while the receive line voltage is high (VDD). A comparator 425 thus makes the demodulated bit decision by comparing the low count (CL) from low counter 415 with the high count (CH) from high counter 420 at the end of each bit period. The bit period boundary may be determined from when high counter 420, which is triggered after receive line 110b has been discharged and recharged, stops counting and outputs CH; counter 420 can accordingly be initialized at each bit period boundary.
At the end of each bit period, in one embodiment, if CL is greater than CH, comparator 425 drives demodulated data signal 382 low, corresponding to demodulation of a binary zero. Conversely, in such embodiments, if CH is greater than CL at the end of the bit period, the comparator drives demodulated data signal 382 high, corresponding to demodulation of a binary one. The SIPO shift register 325 registers each demodulated bit in response to the strobing of shift signal 381.

As those of ordinary skill in the art will appreciate, and depending upon the particular application at hand, many modifications, substitutions, and variations can be made in the materials, apparatus, configurations, and methods of use of the devices of the present disclosure without departing from the spirit and scope thereof. In view of this, the scope of the present disclosure should not be limited to the particular embodiments illustrated and described herein, as they are merely some examples of the disclosure, but rather should be fully commensurate with the appended claims and their functional equivalents.
"Aspects described herein include devices, wireless communication apparatuses, methods, and associat(...TRUNCATED)
"\nCLAIMSWhat is claimed is:1. A wireless communication apparatus, comprising: a millimeter wave (mm(...TRUNCATED)
"\nHEATSINK FOR MILLIMETER WAVE (MMW) AND NON-MMW ANTENNA INTEGRATIONFIELD[0001] The present disclos(...TRUNCATED)
"Various embodiments include methods and devices for managing optional commands. Some embodiments ma(...TRUNCATED)
"\nCLAIMSWhat is claimed is:1. A method performed in a processor of a computing device, comprising: (...TRUNCATED)
"\nRead Optional And Write Optional CommandsRELATED APPLICATIONS[0001] This application claims the b(...TRUNCATED)
"Integrated circuits and methods of manufacturing such circuits are disclosed herein that feature me(...TRUNCATED)
"\nCLAIMS1. A method of manufacturing an integrated circuit, the method comprising:performing routin(...TRUNCATED)
"\nMITIGATING ELEC TROMIGRATION, IN-RUSH CURRENT EFFECTS, IR-VOLTAGE DROP, AND JITTER THROUGH METALL(...TRUNCATED)
"Reducing memory fragmentation. Memory is allocated during a preboot phase of a computer system, whe(...TRUNCATED)
"\n1.A method for reducing memory segmentation includes:The firmware module is allocated memory duri(...TRUNCATED)
"\nMethod, equipment and system for reducing memory segmentationTechnical fieldEmbodiments of the pr(...TRUNCATED)
"A method, apparatus, and system are described for rasterizing a triangle. Pixel parameter values ar(...TRUNCATED)
"\nWhat is claimed is: 1. A method of rasterization using a tile-based digital differential analyzer(...TRUNCATED)
"\nThe present application is a continuation-in-part (CIP) based on and claims priority from U.S. pa(...TRUNCATED)
"III-N semiconductor-on-silicon integrated circuit structures and techniques are disclosed. In some (...TRUNCATED)
"\n CLAIMS What is claimed is: 1. An integrated circuit comprising: a crystalline silicon substrat(...TRUNCATED)
"\n III-N SEMICONDUCTOR-ON-SILICON STRUCTURES AND TECHNIQUES BACKGROUND Integrated circuit (IC) desi(...TRUNCATED)
"Embodiments of a silicon-on-insulator (SOI) wafer having an etch stop layer (130) overlying the bur(...TRUNCATED)
"\nCLAIMS What is claimed is: 1. A method comprising forming an etch stop layer in a silicon-on-insu(...TRUNCATED)
"\nMETHOD FOR MANUFACTURING A SILICON-ON-INSULATOR (SOI) WAFERWITH AN ETCH STOP LAYERFIELD OF THE IN(...TRUNCATED)
"An apparatus is described. The apparatus includes a memory controller to interface with a multi-lev(...TRUNCATED)
"\nClaims1. An apparatus, comprising:a memory controller to interface with a multi-level system memo(...TRUNCATED)
"\nMEMORY CONTROLLER FOR MULTI-LEVEL SYSTEM MEMORY HAVINGSECTORED CACHEField of InventionThe field o(...TRUNCATED)
